Opening up the mainframe

Mainframes hold a fortune in business rules, and companies are exploring new ways to spread the value around.

Back in 1991, InfoWorld's then-Editor in Chief Stewart Alsop predicted that the plug would be pulled on the last mainframe within five years.

Oops. Eight years after their forecast demise, mainframes today host, by most estimates, the majority of business transactions and enterprise data. IBM, which now enjoys a virtual monopoly on big iron, sold $6.8 billion worth of them in 2003, a year that saw sales of IBM zSeries mainframes (whose top-end models go by the nickname "T-Rex") jump 33 percent.

Why do dinosaurs thrive even when their MIPS, memory, and storage cost orders of magnitude more than those of Unix or Windows servers? Decades of capital investment, monumental reliability, and mind-blowing switching costs are the usual explanations. But the most compelling reason of all is that mainframe applications, most of them written in Cobol or PL/I, have amassed enormous business value.

Brian Safron, program manager of enterprise transformation at IBM, likes to talk about the "incredible depth of business knowledge" embedded in mainframe applications.

"When you try to reproduce [on another platform] an application that has captured 10 or 15 or even five years of business logic … you're going to go through several years of pain just to get the [new] application back up to the point where it was," Safron says.

Rather than retire big iron, enterprises seem more intent than ever on disseminating the core business value that Safron extols to a wider audience -- and a wave of new legacy integration technologies has arrived to do just that. Today, as companies draw up plans for SOAs (service-oriented architectures) that treat applications as reusable services, mainframes promise to play a central role. But the key is in selecting the right integration tools for exposing mainframe-derived services.

That selection is no easy task, according to Jake Freivald, marketing director at integration software provider iWay. "Most vendors have a particular view of things because they do a certain type of thing really well and they don't do the other things very well. And that's a real problem," Freivald warns. "People don't understand the range of options that are open to them due to vendor hype."

Going Through the Front Door

One of the least-hyped options is also the quickest and easiest to implement: deploying software that emulates a 3270 mainframe terminal and rolls mainframe apps into Web apps. The most primitive variant, where the software scrapes 3270 green screens and pops them into the browser, has been around since the mid-'90s. But new tools from companies such as WRQ and NetManage are taking this approach to a new level.

Benoit Lheureux, a Gartner research director, notes that the terminal emulation approach is mostly foolproof. "It doesn't matter how freaking old the application is," Lheureux says. "If there's a 3270 stream you can tap into it and use it for input and output." In addition, he says, it's nonintrusive: Mainframe personnel don't need to be involved in development, and all the business rules baked into the mainframe application that protect the database remain in effect. The downsides are performance limitations and "brittleness" -- if a mainframe app changes without warning, any apps that depend on the 3270 connection break.
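Lheureux's point about tapping a 3270 stream for input and output can be sketched in miniature. The Python below is purely illustrative, not any real emulator API: the session object with `send_keys()` and `screen()` methods, the field coordinates, and the `ShipmentTracker` wrapper are all invented for this sketch. A real product such as Verastream or Web-to-Host handles the actual TN3270 plumbing.

```python
# Illustrative sketch of screen-level integration: a green screen is a
# fixed 24x80 character grid, so data lives at known row/column offsets.
# FakeSession stands in for a real TN3270 connection.

SCREEN_ROWS, SCREEN_COLS = 24, 80

def read_field(screen, row, col, length):
    """Extract a fixed-position field from a green-screen buffer."""
    return screen[row][col:col + length].strip()

class ShipmentTracker:
    """Wraps screen navigation so callers never see the 3270 screens."""

    def __init__(self, session):
        self.session = session  # needs send_keys() and screen()

    def lookup(self, shipment_id):
        # Navigation logic: the keystrokes a human would have typed.
        self.session.send_keys(f"TRACK {shipment_id}\n")
        screen = self.session.screen()
        # Business data sits at fixed coordinates on the result screen.
        return {
            "id": read_field(screen, 2, 10, 12),
            "status": read_field(screen, 4, 10, 20),
        }

class FakeSession:
    """Simulated 3270 session returning one canned result screen."""

    def send_keys(self, keys):
        self.last_keys = keys

    def screen(self):
        rows = [" " * SCREEN_COLS] * SCREEN_ROWS
        rows[2] = " " * 10 + "SHP-0042".ljust(70)
        rows[4] = " " * 10 + "ARRIVED DOCK 7".ljust(70)
        return rows
```

The fragility Lheureux warns about is visible here: if the mainframe app moves a field by one column, `read_field` silently returns garbage.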

WRQ's Verastream product is an example of how sophisticated the terminal-emulation approach can get. Robert Vettor, senior business technologist at Raytheon, says he used Verastream to consolidate "a fairly ugly mainframe screen set that was about 25 years old" into a Web app that tracked arriving shipments inside the company. Previously, users needed to wade through approximately 20 green screens. Now, "the users stay on one page the whole time," Vettor says. To limit security risks, the Verastream server fires off generic log-ons with each user session, providing access only to those screens necessary for the tracking app.

Vettor particularly likes being able to publish the app as a Web service on the Verastream server. Although one developer could easily build the front end of the tracking app on Microsoft .Net, another could build a Java app to consume the same Web service. In other words, Vettor has brought a simple mainframe function into Raytheon's emerging SOA, enabling the tracking application to roll out across the entire company in the coming months. "It worked out for us politically," Vettor says of the project. "It was a quicker way to get it done than trying to work through the mainframe group."

At Blue Cross BlueShield of South Carolina, director of presentation application systems Bry Curry recounts a similar experience using NetManage's Web-to-Host emulation product. Customer service reps running PCs needed a quick way to access mainframe data about health plan members when those members called. A CRM system was considered, but that would have required migrating member data to the CRM database, creating two data stores rather than one. "We have 30 years of systems that are very efficient and are basically mission-critical systems," Curry says. "Why would we dump that data to Web servers and replicate all the business rules?"

Instead, Curry chose NetManage Web-to-Host to rearrange mainframe screens into a pleasant, browser-based app. "I think what distinguishes this application from many others is while it's very sophisticated, 90 percent of what we did was navigation logic, not business logic," says Curry, who was able to deploy six months after design approval. That development cycle included a bit of mainframe developer time to tweak application performance, because calling up customer records within five to 10 seconds was critical. But the result was a working app that satisfied almost everyone.

Talking to the Transaction Layer

Although terminal emulation solutions may go as far as exposing 3270 streams as Web services, they're primarily designed for human interaction through a browser. Communication between mainframe and other server-side applications requires integration at the API level. Mainframes are basically huge transaction engines, so other apps typically integrate with legacy CICS (Customer Information Control System) or IMS/TM (Information Management System/Transaction Manager) transaction processing applications.

Lheureux sees a clear trend among enterprises toward transaction-level legacy integration. "When there's an opportunity to actually use an API … rather than doing screen-scraping, they'll do that in a heartbeat," he says. "The mainframe is being surrounded by all these J2EE and Windows and Linux servers, so there are many new opportunities to come in through an adapter and get a higher-performance interface."

Major middleware players -- BEA, IBM, Iona, SeeBeyond, Software AG, Sonic Software, Tibco, Vitria, and webMethods -- have long claimed to offer interoperability with mainframe transaction systems. But most obtain their legacy application adapters through partnerships with such specialized vendors as Attunity, DataDirect, iWay, or Neon. Application adapters typically have two halves: one that runs on a J2EE or Windows server and one that runs on the mainframe to handle the "nasty business of accessing stuff on the back end," as Lheureux puts it.

Today, most application adapters are bi-directional. "Everybody thinks in terms mostly of something happening in the external world coming into the mainframe" and triggering a transaction, Lheureux says. But mainframes also need the ability to push events out to other enterprise systems. For example, when someone enters an order into the mainframe via a 3270 terminal interface, an ERP package running on a Unix box can be updated immediately rather than waiting for a batch update.
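The outbound half of a bi-directional adapter amounts to publish/subscribe: when a mainframe transaction commits, the adapter emits an event that downstream systems react to immediately. The sketch below is an assumption-laden illustration, not any vendor's API: the `EventBus` class, topic name, and the stand-in ERP handler are all invented here.

```python
# Illustrative publish/subscribe hub. In Lheureux's example, the event
# would be raised when a 3270 order-entry transaction commits, letting
# a Unix-hosted ERP package update at once instead of via nightly batch.

class EventBus:
    """Minimal in-process pub/sub hub (stands in for real middleware)."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers.get(topic, []):
            handler(event)

# Hypothetical ERP side: keep inventory current as order events arrive.
erp_inventory = {"WIDGET": 100}

def erp_on_order(event):
    """Decrement stock the moment the order event is published."""
    erp_inventory[event["sku"]] -= event["qty"]

bus = EventBus()
bus.subscribe("order.created", erp_on_order)

# The adapter publishes this when the mainframe order transaction commits.
bus.publish("order.created", {"sku": "WIDGET", "qty": 3})
```

The same bus could carry events in the other direction, which is what makes the adapter bi-directional rather than a one-way request gateway.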

Before the latest bi-directional application adapters arrived, the only product available to provide this two-way functionality was IBM's MQSeries middleware. But iWay's Freivald warns that too many customers assume that legacy integration is done just because they have MQSeries running on the mainframe.

"It's a failure to understand the nature of the problem," Freivald says. "At a minimum, once you connect to the mainframe, you're still going to need something that gets the information off the message queue, talks to the application, handles any errors, reformats the data, and puts it back in the queue. Having messaging middleware [such as MQSeries] is not nearly enough." Productive legacy application integration, he argues, requires configurable, bi-directional application adapters.
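The adapter work Freivald enumerates — drain the queue, invoke the application, handle errors, reformat, requeue — can be sketched as a simple loop. This is an illustration under stated assumptions, not real MQSeries code: Python's `queue.Queue` stands in for the message queues, and `call_cics()` is a hypothetical placeholder for a genuine transaction invocation.

```python
# Sketch of what sits between messaging middleware and the application:
# the loop that turns raw queue messages into completed transactions.
import json
import queue

def call_cics(transaction, payload):
    """Hypothetical legacy call; a real adapter would invoke CICS/IMS."""
    if transaction != "ORD1":
        raise ValueError(f"unknown transaction {transaction}")
    return {"order": payload["order"], "status": "ACCEPTED"}

def pump(requests, replies):
    """One pass over the request queue: the adapter's core loop."""
    while True:
        try:
            raw = requests.get_nowait()
        except queue.Empty:
            break
        msg = json.loads(raw)                     # get it off the queue
        try:
            result = call_cics(msg["tran"], msg)  # talk to the application
        except ValueError as err:
            result = {"error": str(err)}          # handle any errors
        replies.put(json.dumps(result))           # reformat and requeue
```

The messaging middleware only moves the bytes; everything inside `pump()` is the part Freivald says customers mistakenly assume they already have.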

IBM sells its own CICS Transaction Gateway, a JCA (Java Connector Architecture) adapter for connecting mainframes with J2EE application servers. The beauty of using adapters to bridge mainframes and application servers, says IBM's Safron, is that enterprises can "keep the functionality with all that inherent subtlety that was captured, keep those business rules intact, but play in an object-oriented world."

But to fully reap the benefits of connecting Java and mainframe apps, Safron says, customers may need to rewrite the Cobol code. Rather than merely being called through connectors, the Cobol code can then be incorporated seamlessly into workflows alongside Java applications. "You're going to make changes that are appropriate to that traditional code that will make it the most ready to be part of a true e-business, on-demand workflow, still using the CICS or IMS runtime engine." This involves refactoring Cobol apps so they can be addressed by Java components as discrete pieces of business functionality.

Safron calls this a "mixed-workload" environment and says that IBM is working on a new development framework. Meanwhile, developers can run IBM's WebSphere Studio Asset Analyzer for Multiplatforms to help determine how "e-business ready" a Cobol or PL/I application is and to determine whether rewriting it and isolating its business components are worth the effort.

Digging Into Databases

Few businesses have the resources or the inclination to invest in refactoring Cobol code. In fact, it's fairly common for organizations to avoid the complexity of legacy application integration entirely and instead go straight to the mainframe database using a database adapter.

Going direct to the database, however, raises the specter of database corruption. "The risk of coming in at the data layer is that you're bypassing application logic," Gartner's Lheureux says. "And that's a real risk to the degree that, at the end of the day, the application logic is the keeper of the truth." No surprise, then, that "integration" with mainframe databases tends to be read-only -- typically ETL (extraction, transformation, and loading) operations that copy huge chunks out of the mainframe database for such applications as data mining.

"Data-level integration is easiest on applications that are not that complex," iWay's Freivald says, provided a database adapter can insulate developers from having to understand the variations among mainframe databases. "The more a developer has to know about those different interfaces … the harder it is to be productive," he says. "DB2 is very good and standard. But VSAM [Virtual Storage Access Method] files are a completely different set of calls that you can only do if you're a Cobol programmer, basically." Good middleware, Freivald says, should rationalize those different data structures into a familiar form, such as SQL or XML.
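The "rationalizing" Freivald describes can be pictured concretely: a VSAM file has no SQL interface, just fixed-layout records whose field positions a Cobol copybook defines, and a database adapter maps those layouts into rows a developer can treat like SQL or XML. The sketch below is illustrative only; the field names and offsets are invented, not taken from any real copybook.

```python
# Sketch of copybook-style record rationalization: (offset, length,
# column name) triples describe where each field lives in the record.
MEMBER_LAYOUT = [
    (0, 9, "member_id"),
    (9, 20, "last_name"),
    (29, 8, "plan_code"),
]

def record_to_row(record, layout):
    """Turn one fixed-width VSAM-style record into a column->value map."""
    return {name: record[off:off + length].strip()
            for off, length, name in layout}
```

A real adapter does far more (EBCDIC conversion, packed-decimal fields, REDEFINES clauses), but the essence is the same translation from positional bytes to named columns.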

Of course, the mainframe guys monitor such interactions very carefully. For example, Lheureux says, "I don't think any serious designer would think about slamming a purchase order into an order entry system by going straight to the database." Freivald recounts a horror story at one company where business analysts were allowed to make changes to the database. The company was never able to repair the damage and was forced to outsource its entire mainframe operation.

Inviting Mainframes to the Party

No contradiction exists between guarding mainframe database integrity and making mainframe power more available to the enterprise at large. In fact, most analysts say that the trend is to expose mainframe applications and data stores as a set of services in an SOA. The most intriguing scenario takes advantage of the mainframe's power as a huge transaction engine to generate events that affect other systems in the enterprise.

Bill Ruh, chief technology officer and senior vice president at Software AG, cites Delta Air Lines as an example of an organization with an event-driven, mainframe-intensive environment (known as the Delta Nervous System). "When a plane has a maintenance delay, there are a bunch of things that happen as a result," Ruh says. One event triggers others across the nervous system.

As mainframes push more events onto enterprise service buses, Ruh believes that one result will be a next-generation SOA that extends beyond the current messaging model. "It's going to be an event-driven model, where a transaction comes in or data comes out and some event triggers that," Ruh says. "The world is moving toward real time."

At its heart, the mainframe has always been a real-time transaction engine. Whether the network talks to big iron via terminal emulation, application integration middleware, or database connectors, the real legacy may be that mainframes have a greater effect on distributed computing than Web apps and SOAs will have on those durable, refrigerator-sized boxes.
