Paradigm shifts were easier before the bubble burst. Serious change costs serious money, and few IT organizations have gobs of green stuff to throw around anymore. So it’s no surprise that utility computing -- hailed as the biggest paradigm shift since the first disk drive spun up -- has stalled.
It doesn’t help that the marketing geniuses who came up with the concept still can’t agree on what it means. There are three basic definitions.
Utility as an on-demand computing resource: Also called “adaptive computing,” depending on which analyst or vendor you talk to, on-demand computing allows companies to outsource significant portions of their datacenters, and even ratchet resource requirements up and down quickly and easily depending on need. For those of us with gray whiskers in our beards, it’s easiest to think of it as very smart, flexible hosting.
Utility as the organic datacenter: This is the pinnacle of utility computing and refers to a new architecture that employs a variety of technologies to enable datacenters to respond immediately to business needs, market changes, or customer requirements. Such datacenters respond not only immediately but nearly effortlessly, requiring significantly fewer IT staff than traditional datacenter designs.
Utility as grid computing, virtualization, or smart clusters: Each of these is a specific technology designed to enable the first two definitions. Other technologies that will play here include utility storage, private high-speed WAN connections, local CPU interconnect technologies (such as InfiniBand), blade servers, and more.
These three descriptions are different enough to seem unrelated, but in fact they depend on each other for survival. Should utility computing ever live up to its name -- a resource you plug in to, as you would the electric power grid -- then that resource needs to be distributed, self-managing, and virtualized. Whether that grand vision will ever be realized is an open question, but at least some of the enabling technologies are already here or on the horizon.
The on-demand adaptive buzzword enterprise
The on-demand version of utility computing is the one closest to fruition. Vendors such as Dell, EMC, Hewlett-Packard, IBM, and Sun have been selling it for some time. This year, Sun has been the noisiest of the bunch, recently announcing that it wants to be the electric company of off-site computing cycles.
“Sun has decided to take utility to a whole new level,” says Aisling MacRunnels, Sun Microsystems’ vice president of marketing for utility computing. “We’re building the Sun Grid to be easy to use, scalable, and governed by metered pricing. We’re also incorporating a multitenant model that allows us to provide a different scale of economy by pushing spare CPU cycles to other customers.”
The Sun Grid comprises several regional computing centers (six throughout the United States, so far), each running an increasing number of computing clusters based on Sun’s N1 Grid technology. Sun allowed us to visit its Secaucus, N.J., Regional Center, which is supplying Grid resources to several Wall Street customers for complex financial modeling. Sun’s centers boast rack after rack of 32-node compute clusters based on its SunFire V20z Opteron-based servers, interconnected using InfiniBand (via Topspin), and managed from master consoles located at central corporate sites rather than at the Regional Centers themselves.
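To make the metered, multitenant idea concrete, here is a minimal sketch of how a shared cluster might meter CPU-hours per customer and hand idle nodes to another tenant. This is not Sun's actual N1 Grid software; the class names, the tenants, and the flat per-CPU-hour rate are all illustrative assumptions.

```python
# Hypothetical sketch of metered, multitenant grid accounting.
# Not Sun's N1 Grid code; names and the flat rate below are assumptions.

RATE_PER_CPU_HOUR = 1.00  # assumed flat metered rate, in dollars


class GridCluster:
    """A pool of compute nodes shared by several tenants."""

    def __init__(self, total_nodes):
        self.total_nodes = total_nodes
        self.usage = {}      # tenant -> CPU-hours consumed so far
        self.reserved = {}   # tenant -> nodes currently held

    def reserve(self, tenant, nodes):
        """Grant a tenant as many nodes as are currently free."""
        free = self.total_nodes - sum(self.reserved.values())
        granted = min(nodes, free)
        self.reserved[tenant] = self.reserved.get(tenant, 0) + granted
        return granted

    def release(self, tenant):
        """Return a tenant's nodes to the pool as spare cycles."""
        self.reserved.pop(tenant, None)

    def run(self, tenant, nodes, hours):
        """Run a job and meter only the CPU-hours actually consumed."""
        granted = self.reserve(tenant, nodes)
        self.usage[tenant] = self.usage.get(tenant, 0.0) + granted * hours
        self.release(tenant)
        return granted

    def invoice(self, tenant):
        """Bill for metered consumption, not for idle capacity."""
        return self.usage.get(tenant, 0.0) * RATE_PER_CPU_HOUR


if __name__ == "__main__":
    cluster = GridCluster(total_nodes=32)  # one 32-node cluster, as described above
    cluster.run("wall_street_firm", nodes=24, hours=3)  # financial modeling job
    cluster.run("other_customer", nodes=32, hours=1)    # spare cycles picked up later
    print(cluster.invoice("wall_street_firm"))          # 24 * 3 * 1.00 = 72.0
    print(cluster.invoice("other_customer"))            # 32 * 1 * 1.00 = 32.0
```

The point of the sketch is the billing model: customers pay for what the meter records, and whatever they are not using at a given moment is available to someone else, which is where the "different scale of economy" comes from.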