Getting HTTP right

Applications that respect the Web protocol's rules will reap its rewards

Last month, when I discussed the proper use of the HTTP verbs POST and GET, the benefits and hazards seemed abstract. Recently, though, two compellingly concrete examples emerged. The first involved a collision between Google’s new Web Accelerator and an application called Backpack, which is built with Ruby on Rails, a Web application framework for the Ruby programming language. This was an unfortunate but timely demonstration of what can go wrong when HTTP-based software fails to distinguish between requests that alter resources and requests that do not.

The second example involved Coral, an open content-distribution network, and underscored what can go right when the rules of the road are respected. HTTP’s proxying and caching features do much more than we usually imagine.

Let’s consider the Google/Ruby dustup first. Rails-based applications can emit hyperlinks that, when clicked, issue GET-style requests that update rather than merely fetch. When Google’s Accelerator robotically follows hyperlinks in order to prefetch content and cache it locally, unexpected changes occur, data gets lost, and things get ugly.
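
To see the mechanism at work, here is a minimal sketch (my own illustration, not Google's code or Backpack's) of what a prefetching accelerator effectively does: it issues a plain GET to every hyperlink it finds. If one of those links performs an update, the "click" comes from a machine, and nobody ever asked for it. The URLs below are hypothetical.

```typescript
// Hypothetical sketch of a naive prefetcher: it GETs every link on a page to
// warm its cache. If any link performs an update on GET, prefetching performs
// the update too; no user ever clicked anything.
async function prefetch(links: string[]): Promise<void> {
  await Promise.all(
    links.map(async (href) => {
      const res = await fetch(href, { method: "GET" });
      // The accelerator only wants the bytes for its cache...
      await res.arrayBuffer();
      // ...but if href pointed at a destructive action, it has already run.
    })
  );
}

// Links scraped from a page of a hypothetical application:
prefetch([
  "http://example.com/items",          // harmless: a read
  "http://example.com/items/delete/5", // destructive: an update exposed via GET
]).catch(console.error);
```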

In my previous column on HTTP hygiene I argued that some client-side HTTP toolkits make programmers work harder to issue POST requests than to issue GETs. Although that’s true, several bloggers noted that most of the burden falls on server-side application frameworks. Modeling applications in ways that respect the difference between reading and updating resources cuts against the grain of most, if not all, of the popular Web frameworks. The good news: The Ruby on Rails folks have pledged to start addressing this in a forthcoming release, and I hope others will too.
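
Here is a hedged sketch of the fix, from the server's side. This is not Rails' actual change, just the shape of the rule it needs to enforce, shown with a bare Node-style HTTP handler: reads answer to GET, updates answer only to POST, and a GET aimed at an update URL gets a 405 instead of silently mutating state.

```typescript
import { createServer } from "node:http";

// Illustrative only (not Rails' code): route reads to GET, updates to POST,
// and refuse to mutate anything in response to a GET.
const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/items") {
    // Safe for proxies, prefetchers, and caches: this handler only reads.
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("list of items\n");
    return;
  }
  if (req.url === "/items/delete") {
    if (req.method === "POST") {
      // The destructive action is reachable only through an explicit POST.
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("item deleted\n");
    } else {
      // A robot following a hyperlink arrives here with GET and is turned away.
      res.writeHead(405, { Allow: "POST" });
      res.end();
    }
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(8080);
```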

But what if we could fearlessly rely on the full power of HTTP? I got a glimpse of what that might be like when I tapped into Coral, an experimental alternative to content-distribution networks such as Akamai. The impetus was a Greasemonkey script I’d written to annotate Web pages with links to citations in del.icio.us and Bloglines. It’s incredibly handy, but I had to throttle back my use of it. Hammering del.icio.us and Bloglines on every page I visit would have been abusive.

Then I found Coral, adapted my script to use it, and opened the throttle again. The technique is marvelously simple: You just append “.nyud.net:8090” to the hostname of any URL, so InfoWorld’s home page becomes infoworld.com.nyud.net:8090. Behind the scenes, Coral runs a decentralized network of DNS redirectors and caching HTTP proxies in order to protect and assist origin servers.
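
For the curious, the rewrite is trivial to automate. Here's a sketch of the coralizing step (not my actual Greasemonkey code, and written as a standalone function rather than a user script):

```typescript
// Rewrite a URL so the request flows through Coral's caching proxies
// instead of hitting the origin server directly.
function coralize(url: string): string {
  const u = new URL(url);
  u.host = `${u.hostname}.nyud.net:8090`; // append Coral's suffix to the hostname
  return u.toString();
}

// "http://infoworld.com/" becomes "http://infoworld.com.nyud.net:8090/"
console.log(coralize("http://infoworld.com/"));
```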

The current network, with only a few hundred nodes, tends to respond more slowly than origin servers do. (Because my script fetches data asynchronously, I can live with that.) But think how much speed -- and how many nines of reliability -- could result from a deployment of Coral approaching the scale of BitTorrent.

Of course, if intermediaries can’t distinguish between fetch requests and update requests, we’ll wind up with an Internet-scale ugly mess. But if applications play by the rules, they could leverage such a network to deliver content and even services with speed and reliability beyond their normal means. Peer-to-peer isn’t the hot topic it once was. But it’s still percolating, creating a world of opportunity for well-behaved HTTP applications.
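
What does playing by the rules buy you? Mostly it means an intermediary can tell, from the request method and the response headers alone, what it is allowed to store and reuse. The helper below is a drastic simplification of HTTP's real caching rules, and entirely my own illustration, but it captures the bargain: read-only GET responses marked as shareable are fair game for caches like Coral's; everything else stays out.

```typescript
// Hypothetical helper, seen from the intermediary's side of the conversation:
// a response is a candidate for a shared cache only if it answered a GET and
// its Cache-Control header doesn't forbid storing or sharing it.
function cacheable(method: string, headers: Record<string, string>): boolean {
  if (method !== "GET") return false; // never cache the result of an update
  const cc = (headers["cache-control"] ?? "").toLowerCase();
  return !cc.includes("no-store") && !cc.includes("private");
}

// A rule-abiding application marks its read-only pages accordingly:
console.log(cacheable("GET", { "cache-control": "public, max-age=300" })); // true
console.log(cacheable("POST", {}));                                        // false
```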

Copyright © 2005 IDG Communications, Inc.
