01.30.14

Fedora Upgrade: F19 -> F20

Posted in Uncategorized at 10:03:36 by Jeff Hardy

First making sure I was running fedup-0.8 to avoid the fedup-0.7 upgrade bug, I had this Fedora upgrade off to the races with fedup --network 20. Unfortunately, after rebooting into the upgrade, my machine ended up wedged at the following:

systemd[1]: Failed to initialize SELinux context: No such file or directory

This fedup selinux bug is easily worked around with selinux=0 enforcing=0 on the kernel command-line. Afterwards, the upgrade completed without issue. I am also pleased to see that one particular Pidgin bug that plagued me throughout Fedora 19 seems to be fixed.
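
A minimal sketch of applying the workaround for just the upgrade boot (entry names vary):

# at the GRUB menu, highlight the fedup upgrade entry and press 'e',
# then append to the kernel line (the one beginning with "linux"):
#
#   ... selinux=0 enforcing=0
#
# and boot the edited entry with Ctrl-x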

Meanwhile, an unrelated fedup upgrade I conducted on a laptop required that I work around some bugs related to finding my encrypted root partition, by specifying rd.luks.uuid=<uuid> in grub. I find myself wondering if we as an industry are making adequate progress reconciling and mitigating the increasing problems that come with increasing technical complexity.
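
For the record, that workaround looked something like this (device name hypothetical; the UUID comes from blkid):

# find the UUID of the encrypted root's LUKS partition
blkid /dev/sda2
# add rd.luks.uuid=<uuid> to GRUB_CMDLINE_LINUX in /etc/default/grub,
# then regenerate the grub configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg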

12.19.13

SUNY Federation with SimpleSAMLphp in HA

Posted in Technology at 22:16:49 by Jeff Hardy

SimpleSAMLphp is the ideal choice as a SAML 2.0 IdP for SUNY Federation. As it will be the core authentication mechanism for SUNY services, high availability and load balancing are important considerations. Here is one approach to achieving both with HAProxy and Keepalived.

We intend to have a pair (at least) of Apache servers running identical SimpleSAMLphp installations. These will sit behind a pair of load-balancer machines running HAProxy and Keepalived, the first to balance HTTP traffic to the SAML servers, the second to provide virtual IP failover between the balancers. This forward-facing virtual IP will be stunnel-encrypted, and advertised as our SAML Identity Provider (IdP).

            vip
         keepalived
   balance1      balance2
      |   \haproxy/   |
   saml1          saml2

Senior sysadmin Greg Kuchyt pioneered our use of these technologies for load-balancing and failover of our 389-DS LDAP services in a manner much like this, an approach that has served us well for many years, and which we have used for other services.

Our SAML setup is based on the document collection in the SUNY IDM Wiki, particularly the excellent Installing SimpleSAMLphp for SUNY Federation guide. Suffice it to say that I will not go into any detail on our IdP setup, instead covering only what SimpleSAMLphp changes are necessary to share session data for load-balancing and failover.

Shared Session Data with Memcache

On the SAML servers, in order to pave the way for balancing load across multiple machines, we need to ensure that SimpleSAMLphp session handling is switched over to memcache:

yum install memcached php-pecl-memcache
service memcached start
chkconfig memcached on

Note that SimpleSAMLphp uses PHP memcache and not the newer PHP memcached, but both use the system memcached daemon.

Firewalls will need to be opened to allow incoming memcache connections on port 11211.
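
For example, with iptables, limiting the port to the SAML peers (rules are a sketch; adjust to your firewall setup):

iptables -A INPUT -p tcp -s saml1.potsdam.edu --dport 11211 -j ACCEPT
iptables -A INPUT -p tcp -s saml2.potsdam.edu --dport 11211 -j ACCEPT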

In the SimpleSAMLphp configuration, set the store.type, and then adjust the memcache_store.servers array to taste (see docs). The following will cause all session data to be saved to the memcache servers on both boxes:

config.php:

'store.type' => 'memcache',
'memcache_store.servers' => array(
        array(
                array('hostname' => 'saml1.potsdam.edu'),
        ),
        array(
                array('hostname' => 'saml2.potsdam.edu'),
        ),
),
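
To confirm that session data is landing on both boxes, the memcached counters can be watched; a quick sketch using nc:

echo stats | nc saml1.potsdam.edu 11211 | grep -E 'curr_items|total_items'
echo stats | nc saml2.potsdam.edu 11211 | grep -E 'curr_items|total_items'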

Load-Balancing with HAProxy

On the balance servers we first install HAProxy. The following configuration uses the least connection algorithm to balance HTTP sessions between the two SAML servers behind them.

/etc/haproxy/haproxy.cfg:

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen https
    balance leastconn
    bind    127.0.0.1:4430
    option  httpclose
    option  forwardfor
    option  httplog
    option  httpchk         GET /
    server  saml1 saml1.potsdam.edu:80 check inter 5000 downinter 500
    server  saml2 saml2.potsdam.edu:80 check inter 5000 downinter 500

Note that hostnames are used here for clarity, but it is probably preferable to use IP addresses for server config lines. Also note that we bind our instance to localhost:4430, since we do not intend for public connections to hit this port directly. Instead they will hit SSL on port 443 provided by stunnel (later).

Start haproxy:

service haproxy start

We now have two balancers that will each balance HTTP connections to the SAML servers.
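
A quick sanity check from either balance server is to hit the local listener directly (the same GET / used by the health check):

curl -sI http://127.0.0.1:4430/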

Failover with Keepalived

On the balance servers we next install Keepalived. On balance1, the following configuration establishes a virtual IP address to be maintained between the balance servers.

/etc/keepalived/keepalived.conf:

global_defs {
        notification_email {
                devnull@potsdam.edu
        }
        notification_email_from devnull@potsdam.edu
        smtp_server 10.137.110.104
        smtp_connect_timeout 30
}

vrrp_instance VI_1 {
        virtual_router_id 1
        state MASTER
        priority 100
        interface eth2

        smtp_alert

        authentication {
                auth_type AH
                auth_pass SomeKindofPasswordHere!
        }
        virtual_ipaddress {
                10.137.100.101/24 brd 10.137.100.255 dev eth2
        }
}

On balance2, we install the same configuration with two key differences:

        state BACKUP
        priority 50

Start keepalived:

service keepalived start

We now have a front-facing virtual IP address 10.137.100.101 providing failover between the balancers. You can view the status of the IP with: ip addr show.
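
Failover can be exercised by stopping keepalived on the master and watching the VIP appear on the backup; a sketch:

# on balance1
service keepalived stop
# on balance2, the VIP should appear within a few seconds
ip addr show eth2 | grep 10.137.100.101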

SSL with Stunnel

On the balance servers we next install stunnel to encrypt communication on the virtual IP port 443.

/etc/stunnel/https.conf:

#CAfile = /etc/pki/tls/certs/entrust-chain.crt
cert = /etc/pki/tls/certs/saml.potsdam.edu.crt
key = /etc/pki/tls/private/saml.potsdam.edu.key
[https]
        # public virtual ip address
        accept = 10.137.100.101:443
        connect = 127.0.0.1:4430
        verify = 0

An example self-signed cert:

# generate a passphrase-protected 2048-bit RSA key
openssl genrsa -aes256 -out pass.key 2048
# strip the passphrase so stunnel can start unattended
openssl rsa -in pass.key -out server.key
# self-sign a certificate with the key, good for 999 days
openssl req -new -key server.key -x509 -out server.crt -days 999

Note that key and cert will need to be copied into the locations referenced above.
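
For example, using the paths from the stunnel configuration above:

cp server.crt /etc/pki/tls/certs/saml.potsdam.edu.crt
cp server.key /etc/pki/tls/private/saml.potsdam.edu.key
chmod 600 /etc/pki/tls/private/saml.potsdam.edu.key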

Start stunnel with the following, probably in an init script:

stunnel /etc/stunnel/https.conf

With stunnel running, we now have encrypted communication on our virtual IP.
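
The certificate exchange can be verified from any client with openssl; a sketch:

openssl s_client -connect 10.137.100.101:443 </dev/null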

Hacking SimpleSAMLphp for Offloaded SSL

SimpleSAMLphp makes use of internal functions to determine the port number and HTTP versus HTTPS, and to build links based on those determinations. Because we have offloaded SSL to our balancer, these functions are thrown off, with the result that links and redirects are improperly constructed.

Discussion at https://groups.google.com/forum/#!topic/simplesamlphp/m0aqJoURl7I suggests that there may be simpler ways to do this, eventually, but I settled on the changes represented in the following patch to lib/SimpleSAML/Utilities.php:

--- Utilities.dist.php	2013-04-08 04:44:05.000000000 -0400
+++ Utilities.php	2013-12-14 15:35:04.000000000 -0500
@@ -87,6 +87,8 @@
 	 */
 	public static function isHTTPS() {
 
+return TRUE;
+
 		$url = self::getBaseURL();
 
 		$end = strpos($url,'://');
@@ -105,6 +107,8 @@
 	 */
 	private static function getServerHTTPS() {
 
+return TRUE;
+
 		if(!array_key_exists('HTTPS', $_SERVER)) {
 			/* Not an https-request. */
 			return FALSE;
@@ -128,6 +132,8 @@
 	 */
 	private static function getServerPort() {
 
+return '';
+
 		if (isset($_SERVER["SERVER_PORT"])) {
 			$portnumber = $_SERVER["SERVER_PORT"];
 		} else {

After applying the patch, we should find that all generated links remain based on https://saml.potsdam.edu.

12.04.13

SUNY Printing with LPSKIM

Posted in Technology at 21:48:01 by Jeff Hardy

SUNY Potsdam (and probably every SUNY school) needs to accommodate administrative print jobs from a number of legacy (COBOL) sources at SUNY Central in Albany. These jobs arrive bearing many of the marks of the era of mainframes and dot-matrix printers: text-only, and replete with inconvenient line feeds, odd characters, and alignment issues. Historically this has meant ruler measurements, physical printer adjustment, and later, perhaps some arcane PCL magic. Unfortunately, these jobs seemingly cannot be modified at the source to make them more palatable for modern PostScript printers.

Printing at SUNY Potsdam has been backed by CUPS for over ten years, and because it is open-source, with well-documented APIs for extension, we have been able to accommodate a wide variety of printing scenarios by writing our own applications. These have included accounting filters, PDF writers, hardware backends, and a bevy of specialized solutions to handle unique printing needs.

The latest offering is lpskim, so named because it is a text-manipulation filter designed to be used as a System V interface script. A SYSV interface script differs from the now-traditional PPD-driven document-type filtering inherent to CUPS, instead assuming all filtering itself, as in the old LPR days. So, rather than define a series of isolated scripts to handle the needs of each queue, many repeated, some unique, all needing maintenance, lpskim arose as a generic, universal filter driven by a configuration file. It has grown to handle everything we need to print these jobs to modern printers and beyond.

In our case, we created several SUNY queues (SUNY1-SUNYE) on our local CUPS server, each corresponding to an Albany queue for Potsdam. Each will receive these Albany-sourced jobs and merely re-spool to the queues with which our offices are familiar. This gives us a useful, consistent abstraction layer so that local queue and printer changes need no coordination with remote hands in Albany. It is also where we attach lpskim. The following sets up a queue for use this way:

lpadmin -p SUNYD -v file:/dev/null -i /path/to/lpskim.pl

Note that the path is wherever lpskim was downloaded, as lpadmin simply copies it to /etc/cups/interfaces/SUNYD. Also note that we define the printer URI as the file-backed /dev/null, since we handle re-spooling to the final queue in lpskim itself. There we have the necessary flexibility to change the destination of print jobs based on patterns in the jobs themselves (more below). This is also a way to create mischief by spooling a job to all queues on the print server, but that is left as an exercise for the reader.

We now need to define configuration for SUNYD in /etc/cups/lpskim.conf, otherwise the job will simply hit /dev/null. The following is all one line (wrapped here with a backslash):

SUNYD | debug : save_job : respool=RAY412-5 : lpoption=cpi=12 : \
lpoption=lpi=6.6 : lpoption=page-left=54 : lpoption=page-top=54

Note that we are re-spooling the job to an office queue, and that when doing so, we specify several lp options to affect formatting as the job is converted to the output appropriate for the final destination printer as specified in CUPS. Here, these are all margin adjustments, our simplest setup.
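
A plain-text job spooled directly to the queue is a quick way to exercise the whole chain (sample file name hypothetical):

lp -d SUNYD sample-albany-job.txt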

I provide our full SUNY lpskim configuration for all queues across a range of HP and Dell PostScript laser printers for reference. This file should be saved at /etc/cups/lpskim.conf. Notes follow.

lpskim.txt

  • SUNY1 (checks): Fairly straightforward text manipulation and formatted re-spooling, except for a cut/replace to move the date for new check stock.
  • SUNY4 (batch control): Formatted re-spooling with conf_switch options to re-base the configuration on either the SUNY4_CHECKREG or SUNY4_QUICKPAY configs based on patterns in the job. With all options the same, these proved unnecessary in the end, but were left in case new printers introduced new wrinkles.
  • SUNY9 (refund): Minute adjustments to cpi and lpi so output would fit a rough photocopy of a form, and line deletions near the end to eliminate a few extraneous lines that unavoidably shift to a second page as a result.
  • SUNYE (vouchers): A pair of filters to change line feeds and the like. Also note the commented-out alternate configuration that prepended a PCL string for a previous printer.

The rest of the queues are straightforward formatted re-spooling, a few involving a change to landscape, some margin adjustments, etc.

See http://fritz.potsdam.edu/projects/cupsapps/lpskim to download lpskim and for full documentation of all options. Happy printing.

09.25.13

Fedora Upgrade: F18 -> F19

Posted in Technology at 03:46:41 by Jeff Hardy

Another fedup upgrade brought me up to Fedora 19 a couple of weeks ago. Pre-upgrade, my outstanding oom-killer bug seemed to get worse before it got better. Since I consider it a last resort to change likely-unrelated variables in the face of a problem (here, the entire OS), I stuck it out on Fedora 18 until, luckily, a kernel update arrived that seemed to stabilize the issue (inadvertently or otherwise, 3.10.7). With a known-good kernel available in both the old and new releases, the way was paved for the upgrade.

Stability has persisted, despite the fact that I am still on my legacy hardware setup of three graphics cards and four monitors. I am experiencing a problem with pidgin, for which I have opened a new bug. Such is technology.

04.08.13

Fedora Upgrade: F17 -> F18

Posted in Technology at 01:50:54 by Jeff Hardy

The new fedup process completed my upgrade to Fedora 18 with no issues a couple of months ago. After the normal minor wrangling to get four monitors working again, I immediately began experiencing a more serious issue: persistent oom-killer, typically a few hours after boot and despite plenty of memory.

Oom-killer continued to target various aspects of X, so to rule things out, I tried different desktop environments. Unfortunately, the problem persisted across XFCE, LXDE, and KDE. All attempts at isolating a single problematic application have come up empty. A decent resource on tracing memory issues like this is http://bl0rg.krunch.be/oom-frag.html, whose analysis points to fragmentation, but in my case I seem to be exhausting DMA memory. I created a bug tracking the issue.

# panic when the oom-killer fires...
vm.panic_on_oom = 1
# ...and reboot ten seconds after the panic
kernel.panic = 10

I had to resort to the above sysctl.conf settings, with further mitigation being to reboot at the beginning and end of the day and to leave the system in runlevel 3 when not in use. Barely tolerable. My least successful upgrade.

01.21.13

Multiplayer Gaming Achievements: From Bomb to Boost

Posted in Gaming at 00:48:05 by Jeff Hardy

…as published in Fourth Coast Entertainment Magazine, Vol 7 Issue 7, 2/13

The world of gaming experienced an important change with the advent of networked extrinsic rewards, commonly known as achievements or trophies. The PC Steam platform and the seventh-generation consoles Xbox 360 and Playstation 3 implemented these systems starting in 2007. Though all three differ in various ways, and even go by different names (achievement, trophy), they work very similarly: completing various objectives in games for these systems results in the familiar audio ding of a trophy drop or achievement points earned, and a rewarding little graphic unlocked and attached forever to one’s network profile.

For power gamers, those who seek out 100% completion of the games they play, this is an interesting extra dimension to cope with, often requiring great lengths to finish all aspects of a game. Single-player trophies inevitably boil down first to rewards for progressing through the single-player campaign, and then to arbitrary and capricious rewards for anything beyond that from the mundane to the compulsive to the ridiculous. Multiplayer trophies include all of the above with the addition of one radically unpredictable element: other humans. And this is where a number of difficulties arise in the current systems.

What if the game does not adequately match player skill levels for the various game modes? Any trophies related to winning those modes will seem out of reach. What if some game modes are just not popular? This is a real problem once a game has been out for a while, since participation drops overall and remaining players coalesce around a handful of the most fun game types. What if a game simply requires a once-in-a-lifetime lucky stunt involving coordination or competition with unwilling or unwitting players? You can play a game for ages and simply never pull it off. And what if multiplayer is simply broken? These online multiplayer challenges are difficult enough without all of the extra obstacles.

Power gamers have employed an interesting strategy to deal with these problems, so-called trophy boosting. Forums dedicated to boosting exist for nearly every game, filled with gamers setting up dates and times to get together to trade race wins or kill streaks, or to coordinate that impossible stunt, or just to work together for the long haul. A significant number of these players have never even attempted to earn these trophies legitimately, seeking only to boost them for trophy gain. It could be argued that this kind of flaccid boosting is a separate subculture from power gaming proper, since many power gamers actively work to experience all aspects of a game. Regardless, it is no surprise that this is frowned upon by many gamers since, depending on the game, multiplayer boosting activity could change the curve as it were, affecting the statistics and standings of other players.

Whether or not you agree with the trends or the methods, if full achievement is a goal, problems with these systems and the balance of multiplayer trophies practically ensure that at least a few online trophies for any given game will need to be boosted. It is an unfortunate situation that has plagued the achievement systems since their inception, and it shows few signs of improving.

The following are some of my own experiences dealing with the difficulties of multiplayer trophies for games on the Playstation 3. I hope to illustrate the wide range of issues one can encounter when seeking these achievements.

Red Faction: Guerrilla has a typical multiplayer achievement problem. Try Anything Once, Check Your Map, and Tools of the Trade require that one finish a match in every mode, finish a match on every map, and score a kill with every weapon, respectively. These were simply unattainable for me without boosting, because most of the game types, maps, and weapon combinations were no longer played by the time I experienced multiplayer, a mere two years after the game’s release. I have a friend with the game, and he volunteered to be a punching bag.

Red Dead: Redemption is a trophy marathon to begin with, and there are more than a few multiplayer trophies that see a lot of boosting. In particular Kingpin requires one to quickly kill a full eight players in an optional game mode. Getting eight players to respond to an optional invite proved impossible, and even if they did, killing all of them within three minutes would have required the planets to align. I ended up recruiting 15 players spanning nearly as many timezones from the ps3trophies.org boosting forum, coordinating a time to meet up and trade dynamite group kills. It was still difficult, but also hilarious. I sincerely doubt very many people have ever earned this legitimately.

Portal 2 offers a uniquely strange trophy. Professor Portal requires that one beat co-op mode, and then complete the tutorial with someone who has never played before. This is essentially a pyramid scheme in trophy land, as eventually a last wave of players will have no new players to escort through training. One approach is to play the tutorial over and over again, hoping that the strangers one plays with are there for the first time. I tried this for a while with no luck, even though I was playing during the initial surge of popularity shortly after the game’s debut. Luckily I have a friend with the game who allowed me to tag along the first time he played.

Grand Theft Auto IV is consistently ranked among the best Playstation 3 games of all time, and it is perhaps appropriate that it is one of the most difficult games to earn all rewards. In fairness, it was released just as achievements debuted, but three trophies illustrate a lot that is still wrong with the current multiplayer achievement systems.

First there is Auf Wiedersehen Petrovic, requiring one to achieve a win in every single multiplayer game mode and map, illustrating the familiar problem of how to deal with modes and maps that are no longer played. Multiplayer participation has remained relatively strong over the years since the game’s release in 2008, but inevitably there are modes that are simply avoided (boat races, anyone?). Players typically swap races and wins with someone on the boosting forums. In my case, I happened to meet someone in a co-op game and did the same.

Then there is Fly the Co-op, requiring one to beat three co-op missions in incredibly challenging times. This is a real test of skill, with the added difficulty of finding cooperative, reliable partners, out of the hordes of fools and griefers, with whom to master the missions. A lot of players struggle with this trophy, and there is even a “service” consisting of a Facebook group of talented players who will shepherd wayward players through the missions. I did this the hard way, having finally found a good partner who was willing to work at it through dozens and dozens of attempts. The most interesting part is that we did not speak the same language, so we used online translation tools to coordinate our efforts. Both of us had so much trouble with other players that it was but a small obstacle, and it was very rewarding to finally achieve. In the aftermath of all of this, I have helped other players myself.

Finally there is Wanted, a matter of maxing out at level ten in multiplayer by picking up or earning cash in various game types. This is a rather common achievement for multiplayer games that award some measure of experience, but the problem here is that the only way to earn money is in adversarial multiplayer or the handful of co-op missions, and you need an awful lot of it. Also, since there are only ten levels, such coarse gradations do not offer a lot of opportunity for in-game rewards, and the game barely awards any anyway. Ultimately, it is an unbelievable slog to reach level ten, and the effort is not commensurate with the reward. But since it stands in the way of 100%, plenty of gamers are hard at work seeking out the fastest ways to earn cash, some taking advantage of gaming glitches. One of my favorites is one of the most ridiculous: taking advantage of a certain map, certain settings, a certain location, a respawn glitch, and rubber bands wrapped around the controller for unattended repeated headshots through an hour of deathmatch. Even employing a number of techniques, legitimate and boosting, it is simply off-balance and takes forever.

While all of these experiences illustrate the difficulties inherent to the multiplayer achievement and trophy systems, two other situations show things at their best and worst.

Uncharted 2 and 3 offer a bevy of multiplayer trophies, but merely playing one co-op game and one competitive game is enough to reach platinum. Any trophies beyond that count towards 100%, which is a long way off, but it is a nice compromise.

Mercenaries 2: World in Flames is impossible to platinum, since it requires a handful of multiplayer trophies, and multiplayer stopped functioning barely a year after the game was released. In fact, the problem is so acute that the game will freeze if one is logged into the Playstation Network when loading the game. Some gamers were reportedly able to work around this issue by buying some other shovelware from the same company, the theory being that it corrected an incompatibility or error that an earlier update had introduced into the user’s profile. If that is the case, it was probably a simple fix, and it was really poor form to let a game lie fallow so soon after release.

With all of the difficulties outlined here, it is obvious that there are problems with these systems that have gone unaddressed. Despite that, striving for full completion can be quite an enjoyable activity, and I have been surprised to find that it has been even more enjoyable for those games that require help from the boosting world. It has been a very interesting and rewarding experience interacting with other players struggling with these same things, and I have seen nothing but the most altruistic behavior from genuinely appreciative gamers. The average multiplayer experience would be greatly improved if conducted this same way.

Perhaps game makers can start to move away from these draconian multiplayer achievement requirements. And perhaps they can learn what it would take to improve multiplayer gaming as a whole from those gamers who strive to overcome these obstacles. Gamers will continue to achieve in the meantime, and invent creative ways to do it.

01.04.13

An Uncertain Future for Local Hosted Services

Posted in Technology at 02:32:43 by Jeff Hardy

Computing & Technology Services at SUNY Potsdam, like virtually any IT support shop anywhere, fills a number of roles and provides a wide variety of functions: from direct user support to administrative programming, physical infrastructure, telecommunications, and hosted network services, not to mention strategic planning for the college across all these areas and beyond. It is my pleasure to work in the Host & Network Services group of CTS. And yet, we face an uncertain future.

The HNS team is responsible for virtually all hosted services at SUNY Potsdam, the datacenters in which they reside, as well as the network (wired and wireless) on which it all runs. Much of what we do is infrastructure, the critical and invisible: the local area network, wireless, Internet 1/2 connectivity, DNS, LDAP directory services, all aspects of the web, email scanning/delivery/storage, datacenter power and cooling, remote and off-site backups, virtualization, network access control, storage engineering, filesystem management and provisioning, various forms of clustering beneath Banner/BearPAWS, email, LDAP, etc. We also provide and manage many of the local network applications for the campus: printing, file-serving, calendaring, LMS Blackboard/Moodle, Webwork, PACES services, library applications such as Illiad and Webproxy, the RT tracking system, antivirus, VPN, etc., and dozens of other highly specific campus/office-use applications too numerous to name. And of course, an untold number of behind-the-scenes monitoring and management systems developed to administer the environment.

For over ten years, HNS has prided itself on a commitment to open-source and locally-developed software. This focus has fostered innovation and local expertise in numerous disciplines across the enterprise, and provided huge year-over-year cost-savings. We pay almost nothing for operating-system licensing, basing our entire operation on the open-source operating system Linux. We pay nothing for our virtualization infrastructure, making use of open-source tools, and locally developed methods. Our storage paradigm is based on a novel, cost-effective technology that scales infinitely. Time and again we have chosen to innovate to exceed expectation rather than purchase to meet expectation: developing the knowledge necessary to handle and keep pace with the complexities of a system rather than buying expensive black boxes to put on the network, developing code locally for automation and integration with other systems as opposed to approaches that would have cost tens-of-thousands of dollars, building our own solutions and taking advantage of novel concepts for a fraction of the cost of contemporary solutions, and in general, putting a premium on understanding, knowledge, automation, and innovation.

Aside from the raw cost savings from many of the directions we have chosen, this knowledge commitment has allowed us to keep pace with increasing demands using finite resources. Ten years ago: dozens of servers, hundreds of services… 4 staff. Now: hundreds of servers, thousands of services, plus wireless, and voip, vending, HVAC and all manner of things running on the network… 4 staff. Flat budget. Despite this, we have continually coded, innovated, and built our way forward to higher levels of efficiency and achievement, staying ahead of the curve, aiding our staff retention and recruitment efforts, and providing exceptional levels of service to the campus.

Despite being in a successful, stable position, able to look ahead at new directions and continually improve existing ones, it seems like a watershed moment for HNS. Did we achieve this just in time for obsolescence?

The industry has changed remarkably over the last ten years. Where once the local hosting professionals of the IT support organization were the only option in town, the always-decreasing costs of processing and bandwidth have made remote hosting (or grid, or cloud, or whichever marketing term du jour) an increasingly serious option. The pros and cons of cloud computing have been covered ad nauseam everywhere: widened services, reduced staffing pressure and hardware costs, in trade for loss of control, potential loss of privacy, and new security concerns. In CTS, we have generally not felt a great need to look at outsourcing to the cloud, given that we stand to gain relatively little over our current (mostly free) service offerings, and to lose relatively much in privacy, control, and certainty.

In addition, over the last year the SUNY Shared Services and Systemness initiatives have come into being. The first targets some specific campuses for administrative collaboration and re-alignment, in our case merging services with our neighbor SUNY Canton. The second is a SUNY-wide re-evaluation of processes and methods for greater system-wide efficiency and cohesion in all things, not just IT. Though they began somewhat ignominiously about a year ago, the goals are noble and have generally been embraced by the SUNY community. Specific to technology, there is some real direction taking shape on some core ideas around common student information systems, centralized hosting of services, and disincentives for not using common applications.

But a system-wide re-visioning to common standards, platforms, and practices will inevitably have a flattening effect: in some aspects, a given campus might gain, and in others it might lose. For instance, a campus struggling to host a service would gain immensely if SUNY decided to offer that service centrally in a standard fashion. But if that campus handled that service with aplomb, highly-customized and tailored to their business practices (as we achieve), it may lose functionality in a central model. In other cases, it may come down to cost. It will be a tough sell if a given directive adheres to centralizing concepts, but is both less functional and more expensive than current campus practices.

For the local hosting team, so far this means looking at dismantling services, and possibly lowering the bar, to fit into standard SUNY practice and off-site hosting. There are definitely advantages to be gained from this trade-off, but there has not been much discussion about the effects of this transition on the teams across SUNY that have been providing these services since their inception. In our case, we already see core services (Banner, email, Illiad) targeted for changes that practically remove us from the equation, and that could be much more expensive than the local offerings we have honed and perfected over the years, for little or no functional gain. Across SUNY, generalists on the hosting teams often provide behind-the-scenes leadership in technology, and if their function recedes, campuses may find themselves needing to find this leadership elsewhere.

Losing core services from the local datacenter does not bode well for the future of a group like Host & Network Services, where a culture of innovation has led to a great deal of pride and success over the last ten years. In fact, it has been a morale hit to a group that has provided services at a high level for a very long time. It is the nature of this business that when you are doing your job, no one knows you exist. Unfortunately that means we are susceptible to not being noticed when we need to be. We think we have something special to offer to this process, and are trying to work with SUNY to be involved in shaping these central offerings (currently with email). That is something at least, though it may not be enough.

12.08.12

Email Presentation at SUNY ITEC Fall Wizard 2012

Posted in Technology at 12:09:20 by Jeff Hardy

In CTS Host & Network Services, we are responsible for hosting email for the college. At the SUNY ITEC Fall Wizard Conference a few weeks ago, I presented on the solution that has come into being over the last ten years here at SUNY Potsdam. My documentation and the presentation itself are available at http://fritz.potsdam.edu/projects/email. It covers the entire design, its evolution, some of the challenges we have faced, and thoughts on the “cloud.” Ours is a solution based on the combination of locally-developed software and the innovative use of open-source. The only cost is hardware.

11.09.12

Eclipse 4.2.0 Juno

Posted in Technology at 20:32:38 by Jeff Hardy

I had been behind on my Eclipse upgrades when I upgraded to Eclipse 4.2.0 a couple of months ago. More improvements to an already fresh, crisp interface. Of note, this is the first time I have used the repository packages:

yum install eclipse-platform eclipse-egit eclipse-epic \
eclipse-pydev eclipse-phpeclipse eclipse-wtp-webservices eclipse-shelled

No sign of problems, and much easier than cobbling together all the plugins I need from their various install sources. Very impressed.

08.15.12

Fedora Upgrade: F16 -> F17

Posted in Technology at 22:45:11 by Jeff Hardy

The latest in my upgrade streak, to Fedora 17 a couple of weeks ago, was relatively painless. Since this Fedora includes the /usr merge, and even the yum-upgrade proponents specifically recommend not upgrading with yum for this version, I went ahead and did it the old-fashioned way with DVD and anaconda. The only obstacle was an anaconda bug that was easily worked around by commenting out my separate /home mount (and swap) in /etc/fstab. Post-upgrade, a thousand or more updates, and voilà, a fully-updated Fedora 17.

On to X and my four monitors. Whereas with my upgrade to Fedora 16 I was able to get Xinerama up on the nouveau driver across all three cards/four screens, here my dual-head NV44A Geforce 6200 AGP card would not cooperate. The resolution on the second head would never set right and it was stuck in clone mode. A brief foray with the RPM Fusion kmod-nvidia-173xx driver seemed more trouble than it was worth, so it was back to nouveau with Xinerama off. I do not really miss it, and I was having some performance issues that may have been Xinerama-related in Fedora 16 anyway. Looks good, and the /usr merge was long in coming.
