SUNY Potsdam (and probably all SUNY schools) needs to accommodate administrative print jobs from a number of legacy (COBOL) sources from SUNY Central in Albany. These jobs arrive bearing many of the marks of the era of mainframes and dot matrix printers: text-only, and replete with inconvenient line feeds, odd characters, and alignment issues. Historically this has meant ruler measurements, physical printer adjustment, and later, perhaps some arcane PCL magic. Unfortunately, these jobs seemingly cannot be modified at the source to make them more palatable for modern PostScript printers.
Printing at SUNY Potsdam has been backed by CUPS for over ten years, and because it is open-source, with well-documented APIs for extension, we have been able to accommodate a wide variety of printing scenarios by writing our own applications. These have included accounting filters, PDF writers, hardware backends, and a bevy of specialized solutions to handle unique printing needs.
The latest offering is lpskim, a text manipulation filter designed to be used as a System V interface script. A System V interface script differs from the now-traditional PPD-driven document-type filtering inherent to CUPS, instead assuming all filtering itself, as in the old LPR days. So, rather than defining a series of isolated scripts to handle the needs of each queue (many repeated, some unique, all needing maintenance), lpskim arose as a generic, configuration-driven filter. It has grown to handle everything we need to print these jobs to modern printers and beyond.
In our case, we created several SUNY queues (SUNY1-SUNYE) on our local CUPS server, each corresponding to an Albany queue for Potsdam. Each will receive these Albany-sourced jobs and merely re-spool to the queues with which our offices are familiar. This gives us a useful, consistent abstraction layer so that local queue and printer changes need no coordination with remote hands in Albany. It is also where we attach lpskim. The following sets up a queue for use this way:
lpadmin -p SUNYD -v file:/dev/null -i /path/to/lpskim.pl
Note that the path is wherever lpskim was downloaded, as lpadmin simply copies it to /etc/cups/interfaces/SUNYD. Also note that we define the printer URI as file-backed /dev/null since we handle re-spooling to the final queue in lpskim itself. There we have necessary flexibility to change the destination of print jobs based on patterns in the jobs themselves (more below). This is also a way to create mischief by spooling a job to all queues on the print server, but that is left as an exercise for the reader.
We now need to define a configuration for SUNYD in /etc/cups/lpskim.conf; otherwise the job will simply hit /dev/null. The following is assumed to be all one line:
SUNYD | debug : save_job : respool=RAY412-5 : lpoption=cpi=12 : \
lpoption=lpi=6.6 : lpoption=page-left=54 : lpoption=page-top=54
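As a side note, the layout of such a line (queue name, a pipe, then colon-separated options) can be sanity-checked with ordinary shell tools. The following is only an illustration of the format, not lpskim's actual parser:

```shell
# Split a sample lpskim.conf line into queue name and options.
# Illustration only; lpskim's real parsing may differ.
line='SUNYD | debug : save_job : respool=RAY412-5 : lpoption=cpi=12'
queue=$(printf '%s' "$line" | cut -d'|' -f1 | tr -d ' ')
opts=$(printf '%s' "$line" | cut -d'|' -f2-)
echo "queue=$queue"
# Print one option per line, trimmed of surrounding spaces.
printf '%s\n' "$opts" | tr ':' '\n' | sed 's/^ *//; s/ *$//'
```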
Note that we are re-spooling the job to an office queue, and that when doing so, we specify several lp options to affect formatting as the job is converted to the output appropriate for the final destination printer as specified in CUPS. Here, these are all margin adjustments, our simplest setup.
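For comparison, since these lpoption values are standard CUPS text-filter options, a plain-text test file submitted directly to the destination queue with equivalent -o flags should come out formatted the same way (queue and file names here are placeholders):

```shell
# Manually print a text job with the same options lpskim passes
# along on re-spool; RAY412-5 and job.txt are placeholders.
lp -d RAY412-5 -o cpi=12 -o lpi=6.6 -o page-left=54 -o page-top=54 job.txt
```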
I provide our full SUNY lpskim configuration for all queues across a range of HP and Dell PostScript laser printers for reference. This file should be saved at /etc/cups/lpskim.conf. Notes follow.
- SUNY1 (checks): Fairly straightforward text manipulation and formatted re-spooling, except for a cut/replace to move the date for new check stock.
- SUNY4 (batch control): Formatted re-spooling with conf_switch options to re-base the configuration on either the SUNY4_CHECKREG or SUNY4_QUICKPAY configs based on patterns in the job. With all options the same, these proved unnecessary in the end, but were left in case new printers introduced new wrinkles.
- SUNY9 (refund): Minute adjustments to cpi and lpi so output would fit a rough photocopy of a form, and line deletions near the end to eliminate a few extraneous lines that unavoidably shift to a second page as a result.
- SUNYE (vouchers): A pair of filters to change line feeds and the like. Also note the commented-out alternate configuration that prepended a PCL string for a previous printer.
The rest of the queues are straightforward formatted re-spooling, a few involving a change to landscape, some margin adjustments, etc.
See http://fritz.potsdam.edu/projects/cupsapps/lpskim to download lpskim and for full documentation of all options. Happy printing.
Another fedup upgrade brought me up to Fedora 19 a couple of weeks ago. Pre-upgrade, my outstanding oom-killer bug seemed to get worse before it got better. Since I consider changing likely unrelated variables (here, the entire OS) a last resort in the face of a problem, I stuck it out on Fedora 18 until, luckily, a kernel update arrived that seemed to stabilize the issue (inadvertently or otherwise, 3.10.7). With a known-good kernel available in both new and old, the way was paved for the upgrade.
Stability has persisted, despite the fact that I am still on my legacy hardware, three graphics cards, four monitors setup. I am experiencing a problem with pidgin, for which I have opened a new bug. Such is technology.
The new fedup process completed my upgrade to Fedora 18 with no issues a couple of months ago. After the normal minor wrangling to get four monitors working again, I immediately began experiencing a more serious issue: persistent oom-killer, typically a few hours after boot and despite plenty of memory.
Oom-killer continued to target various aspects of X, so trying to rule things out, I tried different desktop environments. Unfortunately, it has persisted across XFCE, LXDE, and KDE. All attempts at isolating a single problematic application have come up empty. A decent resource on tracing memory issues like this is http://bl0rg.krunch.be/oom-frag.html, whose analysis points to fragmentation, but in my case I seem to be exhausting DMA memory. I created a bug tracking the issue.
vm.panic_on_oom = 1
kernel.panic = 10
I had to resort to the above sysctl.conf, with further mitigation being to reboot at the beginning and end of the day, leaving the system in runlevel 3 when not in use. Barely tolerable. My least successful upgrade.
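For anyone in a similar bind, the settings above can also be applied immediately, without waiting for a reboot; a minimal sketch (root required):

```shell
# Apply the oom panic settings to the running kernel (as root).
sysctl -w vm.panic_on_oom=1
sysctl -w kernel.panic=10
# Or, after adding the lines to /etc/sysctl.conf, re-read the file:
sysctl -p
```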
The world of gaming experienced an important change with the advent of networked extrinsic rewards, commonly known as achievements or trophies. The PC Steam platform and the seventh generation consoles Xbox 360 and Playstation 3 implemented these systems starting in 2007. Though all three differ in various ways, and even go by different names (achievement, trophy) they work very similarly: completing various objectives in games for these systems results in the familiar audio ding of a trophy drop or achievement points earned, and a rewarding little graphic unlocked and attached forever to one’s network profile.
For power gamers, those who seek out 100% completion of the games they play, this is an interesting extra dimension to cope with, often requiring great lengths to finish all aspects of a game. Single-player trophies inevitably boil down first to rewards for progressing through the single-player campaign, and then to arbitrary and capricious rewards for anything beyond that from the mundane to the compulsive to the ridiculous. Multiplayer trophies include all of the above with the addition of one radically unpredictable element: other humans. And this is where a number of difficulties arise in the current systems.
What if the game does not adequately match player skill levels for the various game modes? Any trophies related to winning those modes will seem out of reach. What if some game modes are just not popular? This is a real problem once a game has been out for a while, since participation drops overall and remaining players coalesce around a handful of the most fun game types. What if a game simply requires a once-in-a-lifetime lucky stunt involving coordination or competition with unwilling or unwitting players? You can play a game for ages and simply never pull it off. And what if multiplayer is simply broken? These online multiplayer challenges are difficult enough without all of the extra obstacles.
Power gamers have employed an interesting strategy to deal with these problems, so-called trophy boosting. Forums dedicated to boosting exist for nearly every game, filled with gamers setting up dates and times to get together to trade race wins or kill streaks, or to coordinate that impossible stunt, or just to work together for the long haul. A significant number of these players have never even attempted to earn these trophies legitimately, seeking only to boost them for trophy gain. It could be argued that this kind of flaccid boosting is a separate subculture from power gaming proper, since many power gamers actively work to experience all aspects of a game. Regardless, it is no surprise that this is frowned upon by many gamers since, depending on the game, multiplayer boosting activity could change the curve as it were, affecting the statistics and standings of other players.
Whether or not you agree with the trends or the methods, if full achievement is a goal, problems with these systems and the balance of multiplayer trophies practically ensure that at least a few online trophies for any given game will need to be boosted. It is an unfortunate situation that has plagued the achievement systems since their inception, and it shows few signs of improving.
The following are some of my own experiences dealing with the difficulties of multiplayer trophies for games on the Playstation 3. I hope to illustrate the wide range of issues one can encounter when seeking these achievements.
Red Faction: Guerrilla has a typical multiplayer achievement problem. Try Anything Once, Check Your Map, and Tools of the Trade require that one finish a match in every mode, on every map, and score a kill with every weapon, respectively. These were simply unattainable for me without boosting, because most of the game types, maps, and weapon combinations were no longer played by the time I experienced multiplayer, a mere two years after the game’s release. I have a friend with the game, and he volunteered to be a punching bag.
Red Dead: Redemption is a trophy marathon to begin with, and there are more than a few multiplayer trophies that see a lot of boosting. In particular Kingpin requires one to quickly kill a full eight players in an optional game mode. Getting eight players to respond to an optional invite proved impossible, and even if they did, killing all of them within three minutes would have required the planets to align. I ended up recruiting 15 players spanning nearly as many timezones from the ps3trophies.org boosting forum, coordinating a time to meet up and trade dynamite group kills. It was still difficult, but also hilarious. I sincerely doubt very many people have ever earned this legitimately.
Portal 2 offers a uniquely strange trophy. Professor Portal requires that one beat co-op mode and then complete the tutorial with someone who has never played before. This is essentially a pyramid scheme in trophy land, as eventually a last wave of players will have no new players to escort through training. One approach is to play the tutorial over and over again, hoping that the strangers one plays with are there for the first time. I tried this for a while with no luck, even though I was playing during the initial surge of popularity shortly after the game’s debut. Luckily I have a friend with the game who allowed me to tag along the first time he played.
Grand Theft Auto IV is consistently ranked among the best Playstation 3 games of all time, and it is perhaps appropriate that it is one of the most difficult games to earn all rewards. In fairness, it was released just as achievements debuted, but three trophies illustrate a lot that is still wrong with the current multiplayer achievement systems.
First there is Auf Wiedersehen Petrovic, requiring one to achieve a win in every single multiplayer game mode and map, illustrating the familiar problem of how to deal with modes and maps that are no longer played. Multiplayer participation has remained relatively strong over the years since the game’s release in 2008, but inevitably there are modes that are simply avoided (boat races anyone?). Players typically swap races and wins with someone on the boosting forums. In my case, I happened to meet someone in a co-op game and did the same.
Then there is Fly the Co-op, requiring one to beat three co-op missions in incredibly challenging times. This is a real test of skill, with the added difficulty of finding cooperative, reliable partners, out of the hordes of fools and griefers, with whom to master the missions. A lot of players struggle with this trophy, and there is even a “service” consisting of a Facebook group of talented players who will shepherd wayward players through the missions. I did this the hard way, having finally found a good partner who was willing to work at it through dozens and dozens of attempts. The most interesting part is that we did not speak the same language, using online translation tools to coordinate our efforts. Both of us had had so much trouble with other players that this was but a small obstacle, and it was very rewarding to finally achieve. In the aftermath of all of this, I have helped other players myself.
Finally there is Wanted, a matter of maxing out at level ten in multiplayer by picking up or earning cash in various game types. This is a rather common achievement for multiplayer games that award some measure of experience, but the problem here is that the only way to earn money is in adversarial multiplayer or the handful of co-op missions, and you need an awful lot of it. Also, with only ten levels, there are few gradations and little opportunity for in-game rewards, and the game barely awards any anyway. Ultimately, it is an unbelievable slog to reach level ten, and the effort is not commensurate with the reward. But since it stands in the way of 100%, plenty of gamers are hard at work seeking out the fastest ways to earn cash, some taking advantage of gaming glitches. One of my favorites is one of the most ridiculous, taking advantage of a certain map, certain settings, a certain location, a respawn glitch, and rubber bands wrapped around the controller for unattended repeated headshots over an hour of deathmatch. Even employing a number of techniques, legitimate and otherwise, it is simply off-balance and takes forever.
While all of these experiences illustrate the difficulties inherent to the multiplayer achievement and trophy systems, two other situations show things at their best and worst.
Uncharted 2 and 3 offer a bevy of multiplayer trophies, but merely playing one co-op game and one competitive game is enough to reach platinum. Any trophies beyond that count towards 100%, which is a long way off, but it is a nice compromise.
Mercenaries 2: World in Flames is impossible to platinum, since it requires a handful of multiplayer trophies, and multiplayer stopped functioning barely a year after the game was released. In fact, the problem is so acute that the game will freeze if one is logged into the Playstation Network when loading the game. Some gamers were reportedly able to work around this issue by buying some other shovelware from the same company, the theory being that it updated and corrected some incompatibility or error introduced by some update into the user’s profile. If that is the case, it was probably a simple fix, and it was really poor form to let a game lie fallow so soon after release.
With all of the difficulties outlined here, it is obvious that there are problems with these systems that have gone unaddressed. Despite that, striving for full completion can be quite an enjoyable activity, and I have been surprised to find that it has been even more enjoyable for those games that require help from the boosting world. It has been a very interesting and rewarding experience interacting with other players struggling with these same things, and I have seen nothing but the most altruistic behavior from genuinely appreciative gamers. The average multiplayer experience would be greatly improved if conducted this same way.
Perhaps game makers can start to move away from these draconian multiplayer achievement requirements. And perhaps they can learn what it would take to improve multiplayer gaming as a whole from those gamers who strive to overcome these obstacles. Gamers will continue to achieve in the meantime, and invent creative ways to do it.
Computing & Technology Services at SUNY Potsdam, like virtually any IT support shop anywhere, fills a number of roles and provides a wide variety of functions. These range from direct user support to administrative programming, physical infrastructure, telecommunications, and hosted network services, not to mention strategic planning for the college across all of these areas and beyond. It is my pleasure to work in the Host & Network Services group of CTS. And yet, we face an uncertain future.
The HNS team is responsible for virtually all hosted services at SUNY Potsdam, the datacenters in which they reside, as well as the network (wired and wireless) on which it all runs. Much of what we do is infrastructure, the critical and invisible: the local area network, wireless, Internet 1/2 connectivity, DNS, LDAP directory services, all aspects of the web, email scanning/delivery/storage, datacenter power and cooling, remote and off-site backups, virtualization, network access control, storage engineering, filesystem management and provisioning, various forms of clustering beneath Banner/BearPAWS, email, LDAP, etc. We also provide and manage many of the local network applications for the campus: printing, file-serving, calendaring, LMS Blackboard/Moodle, Webwork, PACES services, library applications such as Illiad and Webproxy, the RT tracking system, antivirus, VPN, etc, and dozens of other highly specific campus/office-use applications too numerous to name. And of course, an untold number of behind-the-scenes monitoring and management systems developed to administer the environment.
For over ten years, HNS has prided itself on a commitment to open-source and locally-developed software. This focus has fostered innovation and local expertise in numerous disciplines across the enterprise, and provided huge year-over-year cost-savings. We pay almost nothing for operating-system licensing, basing our entire operation on the open-source operating system Linux. We pay nothing for our virtualization infrastructure, making use of open-source tools and locally developed methods. Our storage paradigm is based on a novel, cost-effective technology that scales without practical limit. Time and again we have chosen to innovate to exceed expectation rather than purchase to meet expectation: developing the knowledge necessary to handle and keep pace with the complexities of a system rather than buying expensive black boxes to put on the network, developing code locally for automation and integration with other systems as opposed to approaches that would have cost tens of thousands of dollars, building our own solutions and taking advantage of novel concepts for a fraction of the cost of contemporary solutions, and in general, putting a premium on understanding, knowledge, automation, and innovation.
Aside from the raw cost savings from many of the directions we have chosen, this knowledge commitment has allowed us to keep pace with increasing demands using finite resources. Ten years ago: dozens of servers, hundreds of services… 4 staff. Now: hundreds of servers, thousands of services, plus wireless, and voip, vending, HVAC and all manner of things running on the network… 4 staff. Flat budget. Despite this, we have continually coded, innovated, and built our way forward to higher levels of efficiency and achievement, staying ahead of the curve, aiding our staff retention and recruitment efforts, and providing exceptional levels of service to the campus.
Despite being in a successful, stable position, able to look ahead at new directions and continually improve existing ones, HNS seems to be at a watershed moment. Did we achieve this just in time for obsolescence?
The industry has changed remarkably over the last ten years. Where once the local hosting professionals of the IT support organization were the only option in town, the always-decreasing costs of processing and bandwidth have made remote hosting (or grid, or cloud, or whichever marketing term du jour) an increasingly serious option. The pros and cons of cloud computing have been covered ad nauseam everywhere: widened services, reduced staffing pressure and hardware costs, in trade for loss of control, potential loss of privacy, and new security concerns. In CTS, we have generally not felt a great need to look at outsourcing to the cloud, given that we stand to gain relatively little over our current (mostly free) service offerings, and to lose relatively much in privacy, control, and certainty.
In addition, over the last year the SUNY Shared Services and Systemness initiatives have come into being. The first targets some specific campuses for administrative collaboration and re-alignment, in our case merging services with our neighbor SUNY Canton. The second is a SUNY-wide re-evaluation of processes and methods for greater system-wide efficiency and cohesion in all things, not just IT. Though they began somewhat ignominiously about a year ago, the goals are noble and have generally been embraced by the SUNY community. Specific to technology, there is some real direction taking shape on some core ideas around common student information systems, centralized hosting of services, and disincentives for not using common applications.
But a system-wide re-visioning to common standards, platforms, and practices will inevitably have a flattening effect: in some aspects, a given campus might gain, and in others it might lose. For instance, a campus struggling to host a service would gain immensely if SUNY decided to offer that service centrally in a standard fashion. But if that campus handled that service with aplomb, highly-customized and tailored to their business practices (as we achieve), it may lose functionality in a central model. In other cases, it may come down to cost. It will be a tough sell if a given directive adheres to centralizing concepts, but is both less functional and more expensive than current campus practices.
For the local hosting team, so far this means looking at dismantling services and possibly lowering the bar to fit it into standard SUNY practice and off-site hosting. There are definitely advantages to be gained for this trade-off, but there has not been much discussion about the effects of this transition on the teams across SUNY that have been providing these services since their inception. In our case, we already see core services (Banner, email, Illiad) targeted for changes that practically remove us from the equation, and could be much more expensive than the local offerings we have honed and perfected over the years, for little or no functional gain. Across SUNY, generalists on the hosting teams often provide behind-the-scenes leadership in technology, and if their function recedes, campuses may find themselves needing to find this leadership elsewhere.
Losing core services from the local datacenter does not bode well for the future of a group like Host & Network Services, where a culture of innovation has led to a great deal of pride and success over the last ten years. In fact, it has been a morale hit to a group that has provided services at a high level for a very long time. It is the nature of this business that when you are doing your job, no one knows you exist. Unfortunately that means we are susceptible to not being noticed when we need to be. We think we have something special to offer to this process, and are trying to work with SUNY to be involved in shaping these central offerings (currently with email). That is something at least, though it may not be enough.
In CTS Host & Network Services, we are responsible for hosting email for the college. At the SUNY ITEC Fall Wizard Conference a few weeks ago, I presented on the solution that has come into being over the last ten years here at SUNY Potsdam. My documentation and the presentation itself are available at http://fritz.potsdam.edu/projects/email. It covers the entire design, its evolution, some of the challenges we have faced, and thoughts on the “cloud.” Ours is a solution based on the combination of locally-developed software and the innovative use of open-source. The only cost is hardware.
I had been behind on my Eclipse upgrades when I upgraded to Eclipse 4.2.0 a couple of months ago. More improvements to an already fresh, crisp interface. Of note, this is the first time I have used the repository packages:
yum install eclipse-platform eclipse-egit eclipse-epic \
eclipse-pydev eclipse-phpeclipse eclipse-wtp-webservices eclipse-shelled
No sign of problems, and much easier than cobbling together all the plugins I need from their various install sources. Very impressed.
The latest in my upgrade streak, to Fedora 17 a couple of weeks ago, was relatively painless. Since this Fedora includes the /usr merge, and even the yum upgrade proponents specifically recommend not upgrading with yum for this version, I went ahead and did it the old-fashioned way with DVD and anaconda. The only obstacle was an anaconda bug that was easily worked around by commenting out my separate /home mount (and swap) from /etc/fstab. Post-upgrade, a thousand or more updates, and voila, a fully-updated Fedora 17.
On to X and my four monitors. Whereas with my upgrade to Fedora 16 I was able to get Xinerama up on the nouveau driver across all three cards/four screens, here my dual-head NV44A Geforce 6200 AGP card would not cooperate. The resolution on the second head would never set right and it was stuck in clone mode. A brief foray with the RPM Fusion kmod-nvidia-173xx driver seemed more trouble than it was worth, so it was back to nouveau with Xinerama off. I do not really miss it, and I was having some performance issues that may have been Xinerama-related in Fedora 16 anyway. Looks good, and the /usr merge was long in coming.
On the eve of the next Fedora release, I thought it was time to upgrade to this one:
yum --releasever=16 --disableplugin=presto distro-sync. After converting the MBR to grub2, I rebooted with no issue. Not surprisingly, the next step was dealing with X and my four monitors on three Nvidia graphics cards: two middle monitors on a dual-head GeForce 6200 NV44A AGP card, and two outside monitors on a pair of GeForce FX 5200 NV34 PCI cards. Always interesting.
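For reference, the MBR conversion amounted to only a few commands; a hedged sketch, with /dev/sda standing in for the actual boot disk:

```shell
# Convert a BIOS/MBR system from grub to grub2 (Fedora 16 layout);
# /dev/sda is an assumption -- substitute your real boot disk.
yum install grub2
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
```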
I have been running the nouveau driver for the last few releases with no problems, but this time, as soon as nouveau took over during kernel boot, the second monitor on my dual-head card went dark: the DVI output. It is detected, though, as windows can be dragged into it, and xrandr -q confirmed it is there. It is basically a black hole. Booting with only that monitor does not work either. I even booted my remaining Fedora 15 kernel into X; I had almost immediate graphics problems, but all four screens lit up.
I installed the proprietary kmod-nvidia-295 driver (GeForce 6+) from the RPMFusion repository. After manually adding the necessary
rdblacklist=nouveau nouveau.modeset=0 bits to grub2, rebuilding initramfs with dracut, and rebooting, that lit up both middle monitors. Unfortunately, that leaves my outer monitors dark on the legacy graphics cards. Before nouveau, what I had done in the past was to install the legacy kmod-nvidia-173xx driver (GeForce 5) to drive both the GeForce 5 and the GeForce 6, a lucky compatibility coincidence. I cannot even try that here though, since the legacy drivers have not been updated to work with xorg 1.11 in Fedora 16.
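The blacklist steps above look roughly like this on Fedora 16; the file paths assume the stock grub2 layout:

```shell
# 1. Append the blacklist bits to the kernel command line in
#    /etc/default/grub:
#    GRUB_CMDLINE_LINUX="... rdblacklist=nouveau nouveau.modeset=0"
# 2. Regenerate the grub2 config:
grub2-mkconfig -o /boot/grub2/grub.cfg
# 3. Rebuild the initramfs so nouveau is not pulled in early:
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
```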
Along the way, I tried a few things to see if that DVI output could be kicked to life, despite the fact that it is not an xorg problem. I gave the strangely named ZaphodHeads option a shot. It treats each card output as a separate screen, allowing for maximal flexibility in layout. Great name and a neat idea, but every combination I tried lit up only one screen. I even tried nvidia and nouveau together, but that has never worked and never will. I was surprised to see X even start, though only with my middle screens lit up.
Going back to nouveau, I had to remind myself to remove the blacklist bits from grub2 after uninstalling nvidia. I had done this once before: the machine boots, and then once nouveau takes over, all graphic updates to the screen cease. I took a detour into rescue mode and the wonders of the new systemd init system before realizing that my machine really was not hanging on a filesystem check; it just looked that way. This is a tangent, but troubleshooting systemd is a bit tortuous. All of the documentation indicates you should use the systemctl command for most functions, but since that affects the running systemd, it is all but useless in a rescue boot. If in a chroot, at least it says as much. And mucking around with the systemd filesystem to disable services is made difficult by the dependency chaining. I am reserving judgement, hoping this is a change for the better and not just change for the sake of change.
So I am back to nouveau, down one monitor. On the plus side, Xinerama works again to combine all screens from multiple cards into one continuous desktop. Support for this function has finally surfaced in nouveau (disables XRandR), long-missing for my hardware in Fedora. Glad to have it back.
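Enabling Xinerama amounts to a single server flag; here is a minimal snippet, staged to a local file first since the real target (/etc/X11/xorg.conf.d/) needs root:

```shell
# Minimal xorg.conf.d snippet to enable Xinerama (disables XRandR).
snippet='Section "ServerFlags"
    Option "Xinerama" "on"
EndSection'
printf '%s\n' "$snippet" > 10-xinerama.conf
# As root: cp 10-xinerama.conf /etc/X11/xorg.conf.d/
cat 10-xinerama.conf
```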
It was time to install Mass Effect 3, the latest chapter in the sweeping cinematic RPG series from Bioware. I loaded up Steam to purchase the game, as I had for the first two titles, only to find it is not available. Amazing. Your only option for digital download is to download the Origin beta game client. Ick.
Capitulating, I went over to EA.com to purchase the game. When I logged in with my email address, I was forced to pick a username for a new Origin master account. Unfortunately my usual standby “fritzhardy” was already in use (I figured by me under another email address). Thinking I could probably tidy things up later, I created a “fritxhardy” account. This was a mistake.
It turns out that EA, Origin, and the Bioware Social Network (BSN) are all linked, or being linked, together under an Origin master account you create. It was not immediately obvious what this meant, and I did not realize the full ramifications of this until I went to login to the BSN to review previously purchased ME1 and ME2 content. That account became “fritzhardy.” Now I have content spread across two accounts. Ugh.
I held off playing until I could merge the accounts under “fritzhardy.” Only a very liberal reading of the BSN account help page indicates this is possible, but I pressed that with EA support. It took five emails and two instant chats with five different support techs, but after four days, I finally had my purchase nullified on one account and re-activated on the other. You would have thought I was coordinating a shuttle launch. I finally landed on a helpful tech in an instant chat who took care of everything.
Into the game now, I had a brief issue launching the game due to the Origin account change: “Invalid Cerberus code” after the title screen. Re-install, no fix. Re-entering my purchase code in the Origin client seemed to do the trick, or possibly something tech number five did. Now to import my ME2 save after copying my saves into the Documents folder. The career imported perfectly, but my Shepard face did not. Sigh.
Bioware has acknowledged a face import bug that prevents ME3 from recognizing the “face codes” created in ME1. Swell. There is a workaround until the bug is fixed. In summary, I had to go to the ME2 Save Editor, upload my save file, click morph head, export a YAML file, and import the file at the Mass Effect Tools face code generator, and voila: most of the information I need to reproduce my Shepard face in the in-game customize-face dialog.
Perfect, now I can fire up a game with my Shepard intact. And the screen is filled with artifacts, the game stutters and pauses for minutes at a time, and eventually forces a computer reset. For god’s sake.
My graphics card is an ATI Radeon HD 4870, and I have written previously about a workaround for the PowerPlay problem to which cards of this vintage are susceptible. That was on Windows XP. This is a new install on that same hardware with Windows 7 64-bit, and RivaTuner cannot reach down far enough into the hardware to set the clock. Running the venerable GPU-Z confirmed my GPU clock was all over the place. I updated to the latest Catalyst driver knowing it would do nothing. There has never been an official fix. Thanks ATI.
The only real solution is to hack the graphics card BIOS and forcefully set the clock speed to the desired value. This video offers a complete walkthrough of the process using the Radeon Bios Editor and ATI Winflash to download, edit, and re-flash your card’s BIOS. I found I also had to disable OverDrive in the Catalyst Control Center before the clock was pegged at my settings.
So now I had a working game, but I was not quite done. It galls me that EA/Origin had the temerity to launch an A-list title like ME3 in beta-quality software like the Origin game client instead of Steam. It is completely featureless, and one feature I really want is one-click screenshots. Well, with a couple of clicks in Steam, I added it as a local non-Steam game in the client. Now I launch the game in Steam, which launches Origin, which launches ME3. I have both overlays available with all of their attendant keyboard shortcuts, including the all important F12 to take Steam screenshots.
On to Mass Effect 3. Finally.