THE WEB PAGE has always been a creature of dual personality. To an author it looked like a document with an address that could be bookmarked, linked, or sent in e-mail. To a programmer it looked like a software application or an information service that could be scripted, pipelined, and used as a component. From both perspectives, the URL was the key that unlocked the magic. In 1997, author Andrew Schulman marveled: "Every UPS package has its own homepage on the Web!" Exactly. That early Web service formed a URL name space that was useful, in different ways, to humans and to software.
A critique of the Web services movement, which emerged last summer and flared up after the release of Google's SOAP (Simple Object Access Protocol) API, reminds us that the Web's convenient duality was no accident, and shouldn't be taken for granted. At the center of this critique was Roy Fielding, who chairs the Apache Software Foundation and co-wrote a number of the Web's core specifications. Fielding's doctoral thesis, published in 2000, identified an architectural style called REST (Representational State Transfer) as the basis of the Web. Resources are the currency of REST. URIs (uniform resource identifiers) name those resources, and a handful of core methods such as HTTP GET and POST exchange representations of them (documents). Network effects don't just happen, the RESTians argue. The Web's success was a direct result of the simplicity and generality of this approach.
When Google offered its SOAP API, REST proponents argued that it had, in some sense, seceded from the Web. "[Google] deprived the Web of approximately a billion useful XML URIs," wrote REST proponent Paul Prescod. What ought Google to have done to satisfy the naysayers? One undocumented solution, since discontinued, was to support URLs such as http://google.com/xml?q=roy+fielding, so that a simple HTTP GET request would return a package of XML data. Does this kind of thing qualify as a Web service? Absolutely. To see why, consider how a similar kind of service, news syndication based on the RSS (Rich Site Summary) format, has fared.
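The appeal of that discontinued URL style is that the entire query lives in the address, so the same string works as a bookmark, a hyperlink, or a programmatic call. A minimal Python sketch of composing such a URL (the /xml endpoint is just the article's example, not a live service):

```python
from urllib.parse import urlencode

def query_url(base, **params):
    """Return a bookmarkable, GET-able URL encoding the query parameters."""
    return base + "?" + urlencode(params)

# The same URL a person could paste into a browser is the one
# a program would fetch with a plain HTTP GET.
url = query_url("http://google.com/xml", q="roy fielding")
print(url)  # http://google.com/xml?q=roy+fielding
```

No SOAP envelope, no POSTed XML payload: the whole request is a name in the Web's URL space.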
RSS channels, including those launched at InfoWorld, are simply XML files fetched like ordinary Web pages. Some are generated by software, others are written by people. Subscribers to these channels are, likewise, both programs and people. Web sites running RSS aggregators collect channels and transform them into readable HTML. More recently, personal aggregators such as the one in Radio UserLand are putting channel selection into the hands of individuals. The fact that software "calls" the same URL that a person links to, or sends to a friend in e-mail, goes a long way toward explaining why RSS is one of the more widespread and popular applications of XML.
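The aggregator's job is correspondingly simple: GET the channel's XML, then render its items as HTML. Here is a stripped-down sketch of that transformation, with the feed inlined as a string (a real aggregator would fetch it from the channel's URL; the sample feed is invented for illustration):

```python
import xml.etree.ElementTree as ET

# An invented RSS channel standing in for one fetched over HTTP.
SAMPLE_RSS = """\
<rss version="0.92">
  <channel>
    <title>Example Channel</title>
    <item>
      <title>First story</title>
      <link>http://example.com/first</link>
    </item>
  </channel>
</rss>"""

def render_channel(rss_text):
    """Turn an RSS channel into a list of HTML links, one per item."""
    root = ET.fromstring(rss_text)
    links = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="#")
        links.append('<a href="%s">%s</a>' % (link, title))
    return links

print("\n".join(render_channel(SAMPLE_RSS)))
```

Because the channel is just a document at a URL, the same address feeds this script, a site-wide aggregator, and a reader's browser alike.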
The RESTful nature of RSS can have surprising consequences. For example, Macromedia recently launched an XML-formatted but non-RSS news feed on its developer site. The point of this exercise was to showcase innovative Flash renderings of XML content. Arguably, Macromedia should have provided an RSS rendering of the feed. But the omission was easy to rectify, thanks to another REST-savvy service offered by the W3C (World Wide Web Consortium). The W3C provides a URL-accessible XSLT (Extensible Stylesheet Language Transformations) transformation service. You can use it to transform URL-accessible XML files using URL-accessible XSLT files.
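Because both the XML source and the XSLT stylesheet are themselves URL-accessible, invoking such a service is just a matter of composing one URL out of others. A sketch of that composition (the endpoint and the xslfile/xmlfile parameter names are illustrative assumptions, not the W3C service's documented interface):

```python
from urllib.parse import urlencode

def transform_url(service, xsl_url, xml_url):
    """Build a GET-able URL asking the service to apply xsl_url to xml_url."""
    # Both inputs are URLs passed as query parameters, so the finished
    # request is itself a linkable, bookmarkable address.
    return service + "?" + urlencode({"xslfile": xsl_url, "xmlfile": xml_url})

print(transform_url(
    "http://example.org/xslt",                       # hypothetical service
    "http://example.com/rss.xsl",                    # hypothetical stylesheet
    "http://macromedia.com/devnet/resources.xml"))   # hypothetical feed
```

The result is REST composition in miniature: three independently addressable resources combine, through nothing more than a URL, into a fourth.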