THE WEB PAGE has always been a creature of dual personality. To an author it looked like a document with an address that could be bookmarked, linked, or sent in e-mail. To a programmer it looked like a software application or an information service that could be scripted, pipelined, and used as a component. From both perspectives, the URL was the key that unlocked the magic. In 1997, author Andrew Schulman marveled: "Every UPS package has its own homepage on the Web!" Exactly. That early Web service formed a URL name space that was useful, in different ways, to humans and to software.
A critique of the Web services movement, one that emerged last summer and flared up after the release of Google's SOAP (Simple Object Access Protocol) API, reminds us that the Web's convenient duality was no accident, and shouldn't be taken for granted. At the center of this critique was Roy Fielding, who chairs the Apache Software Foundation and co-wrote a number of the Web's core specifications. Fielding's doctoral thesis, published in 2000, identified an architectural style called REST (Representational State Transfer) as the basis of the Web. Resources, named by URIs (uniform resource identifiers), are the currency of REST; their representations, typically documents, are exchanged by means of a handful of core methods such as HTTP GET and POST. Network effects don't just happen, the RESTians argue. The Web's success was a direct result of the simplicity and generality of this approach.
When Google offered its SOAP API, REST proponents argued that it had, in some sense, seceded from the Web. "[Google] deprived the Web of approximately a billion useful XML URIs," wrote REST advocate Paul Prescod. What ought Google to have done to satisfy the naysayers? One undocumented solution, since discontinued, was to support URLs such as http://google.com/xml?q=roy+fielding, so that a simple HTTP GET request would return a package of XML data. Does this kind of thing qualify as a Web service? Absolutely. To see why, consider how a similar kind of service, news syndication based on the RSS (Rich Site Summary) format, has fared.
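Before turning to RSS, it's worth seeing what that style of access looks like from a program's point of view. Here is a minimal sketch in Python, assuming the undocumented and since-discontinued google.com/xml endpoint mentioned above; the point is simply that a plain HTTP GET, with the query encoded in the URL, returns a package of XML.

    # A plain HTTP GET against a URL-addressable XML service. The endpoint is
    # the undocumented, since-discontinued one described in the text, shown
    # here for illustration only; it will not resolve today.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    url = "http://google.com/xml?" + urlencode({"q": "roy fielding"})
    # -> http://google.com/xml?q=roy+fielding

    with urlopen(url) as response:      # no SOAP toolkit, just a GET request
        xml_payload = response.read()   # the XML result, addressable by URL

    print(xml_payload[:200])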
RSS channels, including those launched at InfoWorld, are simply XML files fetched like ordinary Web pages. Some are generated by software, others are written by people. Subscribers to these channels are, likewise, both programs and people. Web sites running RSS aggregators collect channels and transform them into readable HTML. More recently, personal aggregators such as the one in Radio UserLand are putting channel selection into the hands of individuals. The fact that software "calls" the same URL that a person links to, or sends to a friend in e-mail, goes a long way toward explaining why RSS is one of the more widespread and popular applications of XML.
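To make the mechanics concrete, here is a minimal aggregator sketch in Python. The feed URL is a placeholder, and the element names assume the conventional RSS layout of title and link elements nested inside item elements; the essential point is that the software fetches exactly the URL a person would bookmark or mail to a friend.

    # A minimal RSS aggregator: fetch a channel's URL, pull out item titles
    # and links, and render them as readable HTML.
    import xml.etree.ElementTree as ET
    from urllib.request import urlopen

    FEED_URL = "http://example.com/rss.xml"     # placeholder channel URL

    with urlopen(FEED_URL) as response:         # the same GET a browser would issue
        feed = ET.parse(response)

    html_items = []
    for item in feed.iterfind(".//item"):       # conventional RSS item elements
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="#")
        html_items.append('<li><a href="%s">%s</a></li>' % (link, title))

    print("<ul>\n" + "\n".join(html_items) + "\n</ul>")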
The RESTful nature of RSS can have surprising consequences. For example, Macromedia recently launched an XML-formatted but non-RSS news feed on its developer site. The point of this exercise was to showcase innovative Flash renderings of XML content. Arguably, Macromedia should have provided an RSS rendering of the feed. But the omission was easy to rectify, thanks to another REST-savvy service offered by the W3C (World Wide Web Consortium). The W3C provides a URL-accessible XSLT (Extensible Stylesheet Language Transformations) transformation service. You can use it to transform URL-accessible XML files using URL-accessible XSLT files.
It was simple for me to combine these pieces to create a new service: an RSS-formatted version of Macromedia's feed. It involved two acts of programming. One was to write the XSLT file and post it in a public place. The other was to form the URL that invokes the W3C transformation service, passing it Macromedia's XML file and my XSLT file, and post that URL on my Weblog.
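For illustration, here is the second of those two acts sketched in Python. The service endpoint and parameter names shown are assumptions made for the sake of the example, not the W3C service's documented interface, and both input URLs are placeholders; what matters is that the finished service amounts to nothing more than a single, linkable URL.

    # Compose one URL that asks a URL-accessible transformation service to
    # apply a URL-accessible XSLT file to a URL-accessible XML file.
    from urllib.parse import urlencode

    # Assumed endpoint and parameter names; substitute the real ones from the
    # service's documentation.
    XSLT_SERVICE = "http://www.w3.org/2000/06/webdata/xslt"

    params = {
        "xslfile": "http://example.com/macromedia-to-rss.xsl",   # the stylesheet I posted
        "xmlfile": "http://example.com/macromedia-feed.xml",     # stand-in for Macromedia's feed
    }

    service_url = XSLT_SERVICE + "?" + urlencode(params)
    print(service_url)   # posting this one link on a Weblog publishes the new service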
The key benefit of this scenario is what the RESTians call "low coordination cost." Google's SOAP API was accessible only to SOAP-aware toolkits, not to people reading and writing e-mail and Web pages, and not to conventional Web scripting tools.
Although this was true in Google's case, it need not be true in every case. As it turns out, several SOAP toolkits (including Microsoft's .Net Framework and The Mind Electric's GLUE) can automatically make SOAP services available at URLs accessible via plain HTTP GET.
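The contrast is easy to see side by side. In the sketch below, the endpoint, namespace, and operation name are hypothetical; the point is that the first form requires a client able to build and POST a SOAP envelope, while the second is an ordinary URL that a person, a browser, or a one-line script can GET.

    from urllib.parse import urlencode

    # Style 1: the RPC view. A SOAP-aware client builds this envelope and
    # POSTs it to the service endpoint.
    soap_envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <doSearch xmlns="urn:example-search">
          <query>roy fielding</query>
        </doSearch>
      </soap:Body>
    </soap:Envelope>"""

    # Style 2: the same operation exposed by the toolkit through an HTTP GET binding.
    get_url = "http://example.com/search?" + urlencode(
        {"op": "doSearch", "query": "roy fielding"})

    print(get_url)   # linkable, bookmarkable, scriptable with ordinary Web tools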
After I made this observation on my Weblog, Tim Bray, who co-wrote the XML specification, passed it along to the W3C's Technical Architecture Group. "Someone at the W3C should write down a canonical 'right way' to do this," Bray wrote on the group's mailing list. There shortly followed the unofficial document "SOAP HTTP GET Binding," from BEA's David Orchard, and a renewed conversation about how to put the Web back into Web services.
It was a good week for all concerned. What had come to be called the "REST vs. SOAP/RPC" debate had been generating too much heat and too little light. Happily, the pragmatic engineers who build SOAP toolkits cut to the chase. Not every SOAP call can or should be represented as a URL. But many can be, with little effort. That's an opportunity that makes great sense to exploit, as some SOAP toolkits already do.
The REST/SOAP rapprochement seemingly at hand will not settle the argument. RESTians see through the lens of hypermedia, while SOAPistas wear client/server spectacles. There will be lively ongoing debate about how to extend the Web, which remains primarily a hypermedia application, into the distributed computing realm where Web services live. Hopefully, though, the debaters now stand on common ground. Out of a flurry of architectural hand-waving, one bedrock principle has clearly emerged: Links matter.