ATTENTION DATACENTER STAFFERS: Utility computing is coming, but don't start planning your retirement just yet. In the utility computing dream, compute resources flow like electricity on demand from virtual utilities around the globe -- dynamically provisioned and scaled, self-healing, secure, always on, efficiently metered and priced on a pay-as-you-go basis, highly available and easy to manage. Using the latest clusters, grids, blades, fabrics, and other cool-sounding technologies, enterprises will plug in, turn on, outsource, and save big bucks on IT equipment and staff. They won't care where their J2EE (Java 2 Platform, Enterprise Edition) or Microsoft .Net resources live anymore.
In an era of server proliferation and underutilization as well as rising complexity and management costs, it sounds perfect. Heavyweights such as IBM, Hewlett-Packard, Sun Microsystems, and Compaq have all lined up behind it. C-level execs love the concept. There's just one problem -- the technology hasn't caught up to the vision. Here's why.
Reality #1: The technology's not there yet for external utilities
Just as ASPs (application service providers) and SSPs (storage service providers) got off to a slow start, so will the external computing utility. Recent announcements of so-called "utility" outsourcing deals (such as IBM's $4 billion deal with American Express) are just old wine in new bottles: the pricing model is new, but the assets and on-site management services are the same old ones.
Instead, utility computing will start inside the firewall, enabling IT departments to offer utility-style services to business units, such as dynamic and scalable resource provisioning, allocation, monitoring, and per-unit billing. Intracompany utility services will start with individual server clusters and broaden to the datacenter and perhaps the campus as the point of control. Companies will use the dynamic provisioning software available today from vendors to speed deployment and cut management costs in the datacenter. It'll be like rolling out SANs (storage area networks) all over again, only this time providing a virtualization layer for servers.
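To make the per-unit billing idea concrete, here's a minimal sketch in Java of utility-style metering for an in-house compute service. Everything in it -- the UsageMeter class, the per-CPU-hour rate, the business-unit names -- is a hypothetical illustration, not the API of any vendor's provisioning product.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-unit chargeback for an in-house compute utility.
public class UsageMeter {
    private final Map<String, Double> cpuHoursByUnit = new HashMap<>();
    private final double ratePerCpuHour;

    public UsageMeter(double ratePerCpuHour) {
        this.ratePerCpuHour = ratePerCpuHour;
    }

    // Record CPU-hours consumed by one business unit's jobs.
    public void record(String businessUnit, double cpuHours) {
        cpuHoursByUnit.merge(businessUnit, cpuHours, Double::sum);
    }

    // Pay-as-you-go bill: metered usage times the agreed per-unit rate.
    public double bill(String businessUnit) {
        return cpuHoursByUnit.getOrDefault(businessUnit, 0.0) * ratePerCpuHour;
    }

    public static void main(String[] args) {
        UsageMeter meter = new UsageMeter(0.12); // illustrative rate: $0.12 per CPU-hour
        meter.record("marketing", 340.0);
        meter.record("finance", 1200.5);
        System.out.printf("marketing owes $%.2f%n", meter.bill("marketing"));
        System.out.printf("finance owes $%.2f%n", meter.bill("finance"));
    }
}
```

The point is the shape of the service, not the code: usage gets recorded per consumer and billed at an agreed rate -- exactly the pay-as-you-go model IT departments can offer business units today.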
Why not external utilities? Forget the fact that IT departments want to control their own infrastructure, or that ISVs don't want to sell pay-as-you-go licenses to outsourcers. The real issue is that the provisioning, scheduling, security, and policy-based management protocols needed to farm out compute jobs are barely on the drawing board, in the form of Web services and open-standards grid computing proposals such as the Globus Project's OGSA (Open Grid Services Architecture). It will be years before the infrastructure exists to make enterprises comfortable buying "utility" cycles for mission-critical apps.
Reality #2: Utility is hard to do in heterogeneous datacenters
In the utility dream, one virtualization layer handles all your compute, storage, and network resources, regardless of vendor and OS. There's one management system, with one console and a GUI where you can provision, track, and manage a new topology with just a few clicks. And there's a workflow engine to enforce dependencies between steps and to back out compute jobs or deallocate resources when something fails, as the sketch below illustrates.
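Here's one way that back-out behavior might work, sketched in Java under the assumption that each provisioning step pairs an action with a compensating undo. The Step and ProvisioningWorkflow names are hypothetical, not drawn from any shipping management system.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a workflow runner that rolls back completed
// provisioning steps, in reverse order, when a later step fails.
public class ProvisioningWorkflow {
    interface Step {
        String name();
        void apply() throws Exception;   // e.g. allocate a server, mount storage
        void backOut();                  // compensating undo: release the resource
    }

    public void run(List<Step> steps) {
        List<Step> completed = new ArrayList<>();
        for (Step step : steps) {
            try {
                step.apply();
                completed.add(step);
            } catch (Exception failure) {
                System.err.println(step.name() + " failed: " + failure.getMessage());
                // Unwind in reverse order so dependencies come apart cleanly.
                for (int i = completed.size() - 1; i >= 0; i--) {
                    completed.get(i).backOut();
                }
                return;
            }
        }
        System.out.println("Topology provisioned: " + steps.size() + " steps.");
    }

    public static void main(String[] args) {
        ProvisioningWorkflow wf = new ProvisioningWorkflow();
        wf.run(List.of(
            step("allocate-server", false),
            step("mount-storage", true)   // simulated failure triggers back-out
        ));
    }

    private static Step step(String name, boolean fail) {
        return new Step() {
            public String name() { return name; }
            public void apply() throws Exception {
                if (fail) throw new Exception("simulated outage");
                System.out.println(name + " applied");
            }
            public void backOut() { System.out.println(name + " backed out"); }
        };
    }
}
```

Reverse order matters here: a server can't be deallocated until the storage mounted on it has been released, which is exactly the kind of dependency the dream's workflow engine is supposed to enforce.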