Year in review: Web services, virtualization, and edge computing

These three technologies made information more accessible and data more flexible in 2002

Web services/XML/SOAP

News perspective: In 2002, Web services advanced on two major fronts: development of tools to extend Web services functionality and standardization of the back-end plumbing required to make Web services work.

Yet deployment of Web services is viewed as being in its infancy.


Google and Macromedia released technologies intended to make Web services more Web-friendly. Google in April released Google Web APIs service, which uses SOAP and WSDL to enable developers to query more than 2 billion Web documents accessible from the Google search engine via their own computer programs, according to the company.
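The mechanics were simple by design: a program wraps its query in a SOAP envelope and posts it to Google's endpoint, with the method and parameters described by the service's WSDL. The sketch below, assuming the method name `doGoogleSearch`, the `urn:GoogleSearch` namespace, and a license-key parameter as described in Google's 2002 documentation, builds such an envelope with the Python standard library (the service itself has long since been retired, so the envelope is only constructed, not sent).

```python
# Sketch: serialize a doGoogleSearch call as a SOAP 1.1 envelope.
# Method name, namespace, and parameters are assumptions based on the
# 2002-era GoogleSearch.wsdl; the key value here is a placeholder.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def build_search_envelope(license_key: str, query: str, max_results: int = 10) -> str:
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    call = ET.SubElement(body, "{urn:GoogleSearch}doGoogleSearch")
    for name, value in [("key", license_key), ("q", query),
                        ("start", "0"), ("maxResults", str(max_results))]:
        param = ET.SubElement(call, name)
        param.text = value
    return ET.tostring(envelope, encoding="unicode")

envelope = build_search_envelope("DEMO-KEY", "web services")
print(envelope)
```

A client would POST this envelope over plain HTTP; the response came back as another SOAP envelope containing result elements, which is what made the service scriptable from any language with an XML parser.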

Macromedia's ColdFusion MX moved ColdFusion from a proprietary application server to one that works with J2EE application servers and makes it easier to develop Web applications and services.

The new version of the software allows scripters, rather than only seasoned developers, to create Web services and applications.

Also from Macromedia this year was Macromedia Flash Remoting MX, for building rich Internet applications and enabling developers to access Web application services such as EJBs, Microsoft .Net components, ColdFusion components, or SOAP-based Web services.

Web services made inroads with standards, but work remains to be done in areas such as choreography, the interaction between Web services for applications such as e-business, and in security. Competing proposals on choreography were offered in 2002 from IBM, Microsoft, and BEA Systems, which partnered on BPEL4WS (Business Process Execution Language for Web Services), as well as from Sun Microsystems, with its WSCI (Web Services Choreography Interface), which curiously also is supported by BEA.

For enterprise application vendors, 2002 was a year of warding off the threat posed by Web services, which let enterprises build composite applications by cherry-picking offerings from multiple vendors; the vendors responded by tightening integration within their own suites.

SAP launched a new generation of cross-functional applications (xApps) designed to run across multiple existing applications and information sources, driving end-to-end processes across heterogeneous systems. PeopleSoft fired back with AppConnect, a suite of pre-integrated portal, integration, and warehouse solutions leveraging Web services to reduce integration costs.

Meanwhile, EAI tools, business process management platforms, and messaging middleware took the stage, while newer technologies including Web services, JMS (Java Message Service), and proposed standards such as BPEL4WS got breakthrough roles as facilitators of simpler integration.

Best-of-breed companies also maneuvered to embrace Web services to make their wares easier to integrate with back-end systems. Siebel and Microsoft, for example, announced a new agreement to integrate Siebel 7 e-business applications with Microsoft's .Net enterprise server family.

With the economy stalled, IT executives sought more ways to leverage their existing systems by tying together disparate applications and automating business processes.

Microsoft began making inroads with BizTalk Server 2002, while IBM went on the offensive, acquiring a raft of companies with coveted integration and process automation technologies, including CrossWorlds and Holosofx. The result was pressure on traditional EAI specialists, such as Vitria, SeeBeyond, and Tibco, which saw sales stagnate.

Test Center perspective: In November the W3C posted a first working draft of a document titled "Web Services Architecture." It candidly admits what has become impossible to ignore: The Web services movement is cooking up a stone soup made of all sorts of ingredients. Distributed objects, remote procedure calls, electronic data interchange, message routing, and application integration are being tossed into a cauldron that was already bubbling with the protocols, formats, and addressing schemes of the World Wide Web.

The World Wide Web itself, as it turns out, does exhibit a consistent architecture and vision. By mid-2002, the benefits of Web-style resource description, service invocation, and document exchange were widely hailed. One key metric: the term "Web-friendly," which appeared nowhere in the SOAP 1.1 specification, shows up nine times in the SOAP 1.2 primer. All's well that ends well.

Putting the Web back into Web services wasn't the year's only accomplishment. There was also excellent progress on an issue that's been called the Achilles' heel of Web services: security. In July, at the Burton Group Catalyst Conference in San Francisco, a flock of vendors demonstrated the exchange of authentication and authorization data using SAML (Security Assertion Markup Language), a standard that the Organization for the Advancement of Structured Information Standards (OASIS) ratified in November. Early concerns that SAML might compete with the IBM/Microsoft WS-Security specification dissolved as it became clear that the two specs are, in fact, nicely complementary.
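At the heart of those Catalyst demonstrations was the SAML assertion: a small XML document in which an identity provider vouches that a subject authenticated. The sketch below, a minimal illustration rather than a conforming implementation, builds a SAML 1.x-style authentication assertion with the standard library; the namespace follows the SAML 1.0 assertion schema, the issuer and subject names are invented, and a real deployment would add an XML Signature and validity conditions.

```python
# Minimal, illustrative SAML 1.x-style authentication assertion.
# Issuer and subject are placeholder values; no signature is applied.
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:1.0:assertion"

def build_assertion(issuer: str, subject_name: str) -> str:
    assertion = ET.Element(f"{{{SAML}}}Assertion",
                           {"Issuer": issuer,
                            "MajorVersion": "1", "MinorVersion": "0"})
    stmt = ET.SubElement(assertion, f"{{{SAML}}}AuthenticationStatement")
    subject = ET.SubElement(stmt, f"{{{SAML}}}Subject")
    name_id = ET.SubElement(subject, f"{{{SAML}}}NameIdentifier")
    name_id.text = subject_name
    return ET.tostring(assertion, encoding="unicode")

assertion_xml = build_assertion("idp.example.com", "alice@example.com")
print(assertion_xml)
```

The complementarity with WS-Security follows from this shape: the assertion is just an XML token, so a WS-Security header can carry it inside a SOAP message the same way it carries other credential types.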

SAML and WS-Security are modules that interconnect with each other and with related modules, such as XML Signature and XML Encryption, in a loosely coupled manner that has come to define the emerging Web services architecture. Another nice example of modular reuse is XML Schema. It is used in WSDL to describe the types of data exchanged in SOAP transactions. In the forthcoming Office 11, XML Schema can define and control the data that users manage in ordinary business documents.
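What makes XML Schema so reusable is that a schema is itself an XML document describing the shape of other documents, so the same description can sit in a WSDL types section or govern a business form. A small sketch of the idea, with element and type names invented for the example (the Python standard library can parse a schema but not validate against one, so this only enumerates the declarations):

```python
# Illustration of XML Schema as a shared, modular data description.
# The purchaseOrder vocabulary here is invented for the example.
import xml.etree.ElementTree as ET

XSD = "http://www.w3.org/2001/XMLSchema"

PURCHASE_ORDER_SCHEMA = """
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="purchaseOrder">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" type="xs:string"/>
        <xs:element name="quantity" type="xs:positiveInteger"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""

schema = ET.fromstring(PURCHASE_ORDER_SCHEMA)
# List the element declarations the schema exposes, in document order.
declared = [el.get("name") for el in schema.iter(f"{{{XSD}}}element") if el.get("name")]
print(declared)
```

A SOAP toolkit reads such a schema to type-check message payloads; an XML-aware document editor can read the very same schema to constrain what a user may type into a form.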

Will SOAP transactions rattling among Web services endpoints and XML business documents handed around by users turn out to be the same? Yes and no. It's true that a document-oriented and Web-friendly style will help us build distributed business systems more easily, and expose them to people more effectively. But the original Web architecture doesn't meet all the requirements of a process-oriented and transacted business Web. Routing, reliable asynchronous messaging, process workflow, compensating transactions, and granular end-to-end security remain key challenges.

The Web's architecture did not spring fully-formed from the mind of Tim Berners-Lee in 1990. After a decade of evolution, though, Roy Fielding was finally able to describe it.

We should expect that Web services will follow the same arc. We don't know its final curvature yet, but we do know that value can be delivered at all points along the way.


Virtualization

News perspective: One of the most ballyhooed technologies of 2002 was virtualization. Although overhyped for the past five years, virtualization is set to become more important in 2003 by making discrete physical systems or components appear as one in a logical view.

Storage is the first segment to benefit from this technology; the storage virtualization advantage is having a single platform to manage, access, and provision storage, regardless of which vendor created the array.
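The core idea can be reduced to a data structure: pool the capacity of many vendors' arrays behind one provisioning interface, so an administrator allocates a logical volume from total capacity without caring which array supplies the blocks. A conceptual sketch, with all names and the first-fit allocation policy invented for illustration:

```python
# Conceptual sketch of storage virtualization: heterogeneous arrays
# presented as one pool. Array names and the allocation policy are
# invented for the example; real products are far more involved.
class VirtualStoragePool:
    def __init__(self):
        self.arrays = {}  # array name -> free gigabytes

    def add_array(self, name: str, capacity_gb: int) -> None:
        self.arrays[name] = capacity_gb

    def free_capacity(self) -> int:
        return sum(self.arrays.values())

    def provision(self, size_gb: int) -> dict:
        """Carve a logical volume out of whichever arrays have free space."""
        if size_gb > self.free_capacity():
            raise ValueError("insufficient capacity in pool")
        allocation, remaining = {}, size_gb
        for name in sorted(self.arrays):
            take = min(self.arrays[name], remaining)
            if take:
                self.arrays[name] -= take
                allocation[name] = take
                remaining -= take
            if remaining == 0:
                break
        return allocation

pool = VirtualStoragePool()
pool.add_array("emc-1", 500)   # arrays from different vendors...
pool.add_array("hds-1", 300)   # ...presented as a single pool
volume = pool.provision(600)   # a volume that spans vendors transparently
print(volume, pool.free_capacity())
```

The point of the sketch is the interface, not the policy: once provisioning goes through the pool, the vendor of the underlying array becomes an implementation detail.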


EMC, Hitachi Data Systems, Sun Microsystems, and IBM have been marketing virtualization technologies within their storage arrays for some time, but demonstrable steps in 2002 allowed virtualization across heterogeneous systems. Sun got deeper into the game in September when it acquired startup Pirus Networks and its storage switch, which enables various devices and servers to be managed via a single interface, including both block and file-based storage.

Along the same lines, FC (Fibre Channel) switch maker Brocade acquired Rhapsody Networks, which built a switch to support multiple protocols, including iSCSI. Storage management applications, including load balancing, storage pooling, and volume management, can be written to the switch and are integrated with the switching technology to help manage the network.

These approaches are a marked departure from traditional management done from a server, a cumbersome and tiring task.

Meanwhile, IBM introduced its virtualization vision with the pending Storage Tank software. Like EMC's forthcoming AutoIS software, HDS's TrueNorth initiative, and Sun's N1 project, Storage Tank is storage management software that virtualizes enterprise storage, regardless of vendor.

The concept of virtualization also appears poised to further penetrate the network, specifically in the datacenter. Startups InfiniCon Systems and Topspin Communications have brought virtualization to datacenters by way of shared systems that are packed full of computing resources for servers to leverage and share. The basic idea is to provide servers with connectivity to Ethernet and FC networks on a shared platform, rather than populating each server with expensive FC host bus adapter cards and Ethernet network interface cards.

Test Center perspective: How did storage virtualization progress in 2002? It was a very good year but could have been better.

We saw many vendors making an effort to clarify their strategy and open up their solutions.

But the very concept of storage virtualization is still confusing, and not just for customers. To date, there isn't a commonly accepted, comprehensive definition.

Ask vendors about virtualization, of course, and a potential customer will get a rosier picture, with emphasis on "cost containment," "effective resource utilization," and, inevitably, ROI. Nevertheless, a common mistake that many vendors make is painting storage virtualization as a panacea that will cure any storage illness, which is utter nonsense. Even the best storage virtualization system doesn't remove the need for sound storage administration policies.

The truth is that more storage means more administrative costs and capital investment, regardless of the architecture, although networked storage can be less expensive in the long run. Storage virtualization and management are tightly related and should be seen as two sides of the same coin.

Unfortunately (or maybe we should say "luckily"), storage coins come from a variety of mints, and the complexity of our storage systems is likely to increase, not decrease, in the immediate future.

Emerging technologies such as iSCSI, disk-based backups, and Serial ATA solutions will add new management challenges to those already complex storage layers.

Pushed by attractive economic rewards, the behemoths of storage hardware will gradually extend their target market from top-tier companies to the midlevel and entry-level range, which will exacerbate competition between their storage management/virtualization solutions and those supplied by software-only vendors.

Increased competition is good for the buyers.

Another positive development could come from vendors finally adopting the CIM (Common Information Model) standard, which will make storage devices speak the same language, regardless of their maker.

Over time, the concept of virtualized storage won't die, but it will be more precisely applied to specific components of the storage puzzle, such as virtual tape, virtual path, or virtual volume -- unambiguous phrases that everybody should understand.

Removing the virtual confusion from our lexicon could be one of the most important achievements for 2003.

Edge computing

News perspective: Edge computing made headway in the enterprise in 2002, reemerging as a serious corporate contender and a strategic complement to centralized applications and systems.

"Now people are recognizing that too far in either direction is not beneficial. What makes sense is to apply the right level of centralization or decentralization depending on the business activity," says Dana Gardner, research director at Aberdeen Group in Boston.

Illustrating this trend, Microsoft teamed with collaboration vendor Groove Networks to unite SharePoint Team Services and Groove's ad-hoc Workspace technology, giving users centralized control of file sharing, discussions, and document collaboration with a secure, decentralized model that pushes business and application logic to the network edge.

"Groove solves another set of problems. When people are disconnected from the network, [it can] extend application logic to the client," said Andrew Mahon, director of product marketing at Groove.

"By making a copy of a SharePoint space in a Groove Workspace, I'm extending [that application] to the edge of my organization and beyond," Mahon explains.

Meanwhile, identity management is intersecting with edge computing to define policies for controlling data and to facilitate intercompany information sharing, according to Gardner.

"Control of identity management and policy for each individual coming at same time as the ability to exert a hybrid model of centralized and decentralized [computing] is bringing new opportunities to control how information is distributed and managed, not just confined within the organization but to be more permeable across corporate boundaries," Gardner says.

Going forward, hybrid computing models will help drive productivity and bolster the value of networks, Gardner adds.

"You don't have to have a fortress edge anymore-- that makes an organization into an island. Managing and extending the edge that is [what] will drive good productivity boosts over the next couple of years," he explains.
