Is it time to scrap your Big Iron?

Mainframes have a storied history in the enterprise, but today's distributed server environments could herald the final chapter

See correction at end of article

For 10 years, IT managers have heard the promises: cheap server farms will replace expensive mainframe systems, lowering costs and improving competitive advantage through modern applications. And for 10 years, it hasn’t happened.

Vendors are still feeding IT the same sales pitch. This time around, it’s coming from the likes of Hewlett-Packard, Microsoft, and Sun Microsystems, all companies with vested interests in selling their own boxes and legacy migration tools. Can these modern solutions really deliver on promises first made a decade ago?

As it turns out, the answer is a qualified yes. Distributed server systems can, in fact, replace the mainframe at a lower cost, especially in organizations running lower-end mainframe systems that offer 500 or fewer MIPS (millions of instructions per second) of computational power, according to Forrester Research analyst Phil Murphy. As an organization’s IT infrastructure scales up, however, the answer is less clear, notes IDC Research Director Steve Josselyn. Some large organizations find the mainframe to be a much more efficient and economical platform, whereas others realize dramatic cost savings by migrating away.

A Migration Triple Play

According to Ted Venema, a consultant at legacy modernization vendor BluePhoenix, when an organization does decide to move applications from the mainframe, it typically faces three migration challenges: the hardware platform, the database system, and the application development language.

The Danish Commerce and Companies Agency (DCCA), which processes business registrations and shares data with the tax agency of Denmark, migrated all three aspects of its mainframe environment this year. DCCA's legacy transaction system was based on the Adabas database and the Natural 4GL (fourth-generation language) from Software AG, running on an IBM System/390 mainframe under MVS. In a typical month, the agency ran some 800,000 transactions across approximately 2,700 applications.


Given the Adabas/Natural platform’s 35-year lineage and diminishing market share, DCCA was concerned that its legacy environment would have few support options as time wore on. According to project manager David Graff Nielsen, the agency already had to rely on an outside firm to manage its code. Moreover, DCCA wanted to move to a more Web-enabled transaction environment, which would allow businesses to register and update their information over the Internet -- something the Adabas/Natural platform did not easily support.

So the agency moved its applications onto 16-processor x86 application and database servers running Suse Linux and Oracle9i Release 2. The new platform is at least 25 percent cheaper to operate and maintain, Nielsen says, freeing money and staff for the agency's real goal: improving and Web-enabling its services, not merely cutting costs.

DCCA hired BluePhoenix to translate roughly 1 million lines of Natural code into Java and to convert existing IBM JCL (Job Control Language) code to Korn shell scripts, using automated tools developed especially for this project. The translated code isn't particularly object-oriented or well-formed, Nielsen says, but it functions well and, most important, it's accessible for maintenance and fine-tuning. The new system does use the network more heavily, both because of increased traffic among the servers and because some outside systems now connect through 3270 terminal emulation. Nielsen, however, says the increase was manageable.
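
Nielsen doesn't share any of the converted source, but a small invented fragment suggests what machine-translated Natural tends to look like in Java: a READ loop becomes a flat, procedural method with working-storage-style static fields -- functional and maintainable, as Nielsen describes, but hardly object-oriented. Every name here is hypothetical; this is a sketch of the style, not DCCA's code.

// Hypothetical sketch of machine-translated Natural-to-Java output.
// All names and data are invented for illustration.
import java.util.List;

public class RegUpdate {

    // Natural working-storage fields often survive as static variables.
    static int wsMatchCount = 0;

    // A Natural READ loop typically becomes one flat procedural scan.
    public static void processRegistrations(List<String[]> records,
                                            String wantedStatus) {
        for (String[] rec : records) {
            String status = rec[1];
            if (!status.equals(wantedStatus)) {
                continue; // mirrors: ACCEPT IF STATUS = #WANTED-STATUS
            }
            wsMatchCount = wsMatchCount + 1;
            System.out.println("REG-ID: " + rec[0] + " STATUS: " + status);
        }
    }

    public static void main(String[] args) {
        List<String[]> records = List.of(
            new String[] {"10001", "ACTIVE"},
            new String[] {"10002", "CLOSED"},
            new String[] {"10003", "ACTIVE"});
        processRegistrations(records, "ACTIVE");
        System.out.println("MATCHES: " + wsMatchCount);
    }
}

The point isn't elegance: as Nielsen notes, correctness and maintainability come first, and object-oriented rework can follow later.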

Swapping out mainframe hardware, code, and databases at the same time introduces a lot of risk. Nielsen says a key factor in DCCA's smooth transition was its refusal to translate the code and change its functionality simultaneously, as the agency had tried in a previous, failed migration effort. Changing both at once introduces too many variables, he says, making it difficult to verify that the new code is correct. By doing a straight translation first, the agency could rework the translated applications later, optimizing them and adding new functionality; in the meantime, it could still run its business on the translated code.

Sabre Pushes the Limits

Sabre Holdings -- parent of the Travelocity online consumer booking service and the Sabre travel reservations and ticketing system, which handles about 40 percent of worldwide travel reservations -- is in the midst of one of the largest mainframe migrations. Todd Richmond, the company's vice president of enterprise architecture, says Sabre has the world's third-largest implementation of IBM TPF (Transaction Processing Facility) mainframes. In an effort that began almost six years ago, however, Sabre has migrated most of its domestic booking services to four-way HP NonStop servers based on Intel Itanium 2 processors and to a cluster of HP Integrity servers, also Itanium 2-based, running 64-bit Red Hat Linux and the MySQL database.

Initially, Sabre's IT managers thought they would have to migrate everything to the costly NonStop servers, but in the past year they discovered that standard x86 servers can handle the less-intensive work. Sabre will continue to use NonStop servers for database transactions, because they process the required 14,000 transactions per second more reliably across the large data sets typical of Sabre's environment.


Sabre’s road away from the mainframe has not been easy, and the project is still several years from completion. This year, the company encountered unexpected problems in managing its server farms. “It’s our No. 1 challenge,” Richmond says, adding that Sabre had to build a lot of middleware to replicate the mainframe’s end-to-end monitoring and self-management capabilities. “There are more hops now, so we have to be diligent about latency.”
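
Richmond doesn't describe that middleware in detail, but the standard technique for end-to-end monitoring across many hops is well known: tag each request with a correlation ID at the front end, timestamp it at every hop, and reconstruct per-hop latency afterward. A minimal Java sketch of the idea follows; all class and method names are invented, and this is not Sabre's middleware.

// Minimal sketch of per-hop latency tracking with correlation IDs.
// All names are invented for illustration.
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class HopTracker {

    record HopEvent(String hopName, long nanos) {}

    private final String correlationId = UUID.randomUUID().toString();
    private final List<HopEvent> events = new ArrayList<>();

    // Each service in the chain records an event as the request passes through.
    public void recordHop(String hopName) {
        events.add(new HopEvent(hopName, System.nanoTime()));
    }

    // After the response, compute the latency each hop contributed.
    public void report() {
        System.out.println("request " + correlationId);
        for (int i = 1; i < events.size(); i++) {
            long deltaMs =
                (events.get(i).nanos() - events.get(i - 1).nanos()) / 1_000_000;
            System.out.printf("  %s -> %s: %d ms%n",
                events.get(i - 1).hopName(), events.get(i).hopName(), deltaMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        HopTracker t = new HopTracker();
        t.recordHop("web-front-end");
        Thread.sleep(20);                 // simulated network/processing time
        t.recordHop("booking-service");
        Thread.sleep(35);
        t.recordHop("back-end-database");
        t.report();
    }
}

Production middleware would propagate the correlation ID across process boundaries in message headers, but the bookkeeping is the same.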

Sabre still experiences periods when reliability isn't what it was on the mainframe, Richmond says, but it has gained much shorter development cycles -- perhaps half as long -- thanks both to the move away from assembly language and to the use of desktop development tools. Richmond's staff has also been able to code functions such as calendar-based flight availability in C++ and Java, something he believes could not have been done in mainframe code.
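
Richmond doesn't detail the algorithm, but the core idea behind calendar-based availability is simple: rather than answering whether seats exist on a single requested date, scan a window of surrounding dates and return every day that has availability. A hypothetical Java sketch, with invented names and data:

// Hypothetical sketch of calendar-based availability: scan a date window
// instead of a single departure date. Names and data are invented.
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class CalendarAvailability {

    // Return every date in [start - window, start + window] that has seats.
    static List<LocalDate> availableDates(Map<LocalDate, Integer> seatsByDate,
                                          LocalDate start, int windowDays) {
        List<LocalDate> open = new ArrayList<>();
        for (int offset = -windowDays; offset <= windowDays; offset++) {
            LocalDate day = start.plusDays(offset);
            if (seatsByDate.getOrDefault(day, 0) > 0) {
                open.add(day);
            }
        }
        return open;
    }

    public static void main(String[] args) {
        LocalDate wanted = LocalDate.of(2005, 3, 15);
        Map<LocalDate, Integer> seats = Map.of(
            wanted.minusDays(1), 4,
            wanted, 0,             // the requested day is sold out...
            wanted.plusDays(2), 9  // ...but nearby days are not
        );
        System.out.println(availableDates(seats, wanted, 3));
    }
}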

Richmond says Sabre expects to transition its international and multiroute domestic services before the end of the year. That step will allow the company to retire one of its three TPF datacenters, each of which contains about eight mainframes. During the next 18 months, Richmond expects to migrate Sabre's core passenger itinerary service to the distributed system as well, eliminating a second TPF datacenter. That will leave only the master transaction database. Richmond thinks he may need to stick with IBM TPF for that one, at least for a while, as HP isn't yet certain it can deliver the TPF-level fault tolerance that database needs.

All told, this ambitious, multiyear migration effort costs “a significant percentage” of Sabre’s annual $150 million IT budget, but Richmond says it’s well worth it. He says costs are already less than half of what they had been, mostly due to savings in per-transaction charges for the TPF facility.

Small Shops the Clearest Winners

The DCCA and Sabre Holdings examples show the longer-term promise of mainframe migration for large IT infrastructures. But according to Forrester's Phil Murphy, smaller shops -- especially those that use their mainframes primarily as application servers -- can make the best case for migration today. Such small-scale mainframe environments are typically far less complex than Sabre's, yet they still cost significantly more to run than equivalent distributed systems.

Vestcom, a company that prints and distributes financial statements on behalf of banks and brokerages, is a prime example. A quarter of its IT budget went to pay for hosted services on an IBM System/370 mainframe running MVS, which provided about 150 MIPS of computing power.

Vestcom used the mainframe to download huge quantities of data, apply various formatting and calculation rules, then create and print the statements. But as revenues plummeted following the dot-com crash and a series of accounting scandals, the company needed to lower costs quickly. Vestcom CIO Joe Mislinski says the mainframe was an easy target.

Mislinski hired Micro Focus to port Vestcom’s Cobol code to a quad-processor x86 server. Sticking with Cobol as its application language allowed Vestcom to reduce the number of variables in the transition, but some things just didn’t translate.

Case in point: The mainframe could handle very large jobs and recover from any point in case of printer failure. In the new environment, however, tasks were split into several parallel sub-jobs, each sent to an independent printer. If a printer failed, there was no master job image the software could use to resume from the point of interruption.

“But none of this was insurmountable,” Mislinski says. Vestcom ended up creating its own management tools to track the status of each sub-job on each printer, making it possible to reconstruct job status in case of failure at any point.
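
Mislinski doesn't describe those tools in detail, but the underlying pattern is a checkpoint table: record the last page each sub-job has confirmed printed, so a failed printer's work can restart from the interruption point rather than from page one. A minimal Java sketch, with every name invented -- this is not Vestcom's software:

// Minimal sketch of sub-job checkpointing for restartable print runs.
// All names are invented for illustration.
import java.util.HashMap;
import java.util.Map;

public class PrintJobTracker {

    // sub-job ID -> last page confirmed printed
    private final Map<String, Integer> checkpoints = new HashMap<>();

    // Called as each printer confirms a page; keeps the highest page seen.
    public void pagePrinted(String subJobId, int page) {
        checkpoints.merge(subJobId, page, Math::max);
    }

    // On printer failure, resume from the page after the last checkpoint.
    public int restartPage(String subJobId) {
        return checkpoints.getOrDefault(subJobId, 0) + 1;
    }

    public static void main(String[] args) {
        PrintJobTracker tracker = new PrintJobTracker();
        // One master job split across two printers as parallel sub-jobs.
        tracker.pagePrinted("job42-printerA", 117);
        tracker.pagePrinted("job42-printerB", 88);
        // Printer B jams: reconstruct where its sub-job left off.
        System.out.println("Resume printer B at page "
            + tracker.restartPage("job42-printerB"));
    }
}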

IDC analyst Josselyn notes that transitions away from the mainframe typically encounter such issues; job management and recovery were solved in the mainframe world long ago and are now often taken for granted. Vestcom was helped by the fact that it found a hosted-services vendor that supported both mainframe and distributed-system environments. That way, if the migration didn't work, the company would still have a fallback.

Vestcom spent $1 million on its mainframe migration, which Mislinski says he’ll recoup in two years, based on operational savings. The modern PC interface also enables his operations staff to respond to customer requests in just half the time it took previously, he estimates. Plus, having the code on a Windows platform means his staff can take advantage of visual development environments. It also means his Cobol programmers are now working in the same environment as his C# coders, so they can cross-train each other.

No Mass Exodus

Despite these companies' successes, migrating away from the mainframe is expensive and risky for most organizations. That's why IDC and Forrester Research see real interest in migration among fewer than 5 percent of the enterprises they survey (see "When mainframes make sense"). It's also why IBM still books nearly $5 billion a year in zSeries mainframe sales, notes IDC's Josselyn. He says the attitude he encounters most from IT is, "If it's not broken, why fix it?"

Josselyn says IBM has adjusted some of its licensing fees to address complaints about their high cost. And, Forrester's Murphy advises, even users of orphaned mainframe technology -- languages such as PL/I, or hardware from ICL or Bull -- can modernize their mainframe environments rather than dump them, by deploying Cobol, C++, and Java applications on Unix partitions. That strategy will at least let IT rationalize its environment and ease any later transition to distributed systems.

Also, although escaping legacy programming languages with a dwindling supply of developers is a common secondary motivation for mainframe migration, both Murphy and Josselyn note that Cobol developers' salaries have not risen in recent years -- a sign that no real shortage exists.

Meanwhile, system vendors have dramatically improved the reliability and scalability of distributed systems, to the point that enterprises can consider x86- and RISC-based servers running variants of Unix, Linux, or Windows for mission-critical applications.

Josselyn says the one area in which these platforms remain at a disadvantage is management: workload, latency, and data administration become difficult as installations scale to hundreds of servers. Murphy suggests, though, that the situation may improve in the next several years, as vendors deliver better tools and as IT staffs learn to run distributed systems the mainframe way, using techniques such as virtualization.

Enterprises with small mainframes or those that use them primarily as application servers -- the Vestcoms of the world -- are the most likely to begin wholesale migration to other platforms. Larger organizations are more likely to off-load some services and reduce the variety of mainframe systems in their portfolios while still taking full advantage of their available mainframe MIPS to keep per-transaction costs down, says Mike Gilbert, vice president of marketing at Micro Focus.

The first step is to decide which kind of organization you are, then to really think through what should continue to run on the mainframe and what should be migrated elsewhere. The good news is that the time is ripe to finally make that migration happen. The Big Iron Age is by no means over, but the first signs of the Distributed Age are definitely here.

Correction:
In this article, we should have said that Sabre Holdings' HP NonStop servers use Intel Itanium 2 CPUs. In addition, Sabre's Red Hat Linux and MySQL software runs on a separate cluster of HP Integrity servers, also based on Itanium 2.
InfoWorld regrets the errors, which have been corrected.
