It's a pleasant afternoon in New Jersey (I'm pretty sure that's happened before). You're a datacenter operator walking the floor of your facility just outside Newark -- but it's eerily quiet, considering that fleets of servers are running all your mission-critical applications.
Energy efficiency, one of the key concepts under the umbrella of green technology, has become increasingly important to datacenter operators who are struggling with soaring electric and cooling bills, not to mention limited power from local utilities and space in existing datacenter facilities. From the chip level up to the software layer, vendors are devising ways to reduce the amount of juice machines are given while wringing as much work out of each box as possible.
And based on what we're seeing, I expect the next big thing in green tech will be what I'll dub the dynamic server farm, a hybrid of power management, systems management, Web services, and virtualization technologies.
One piece of this puzzle: power management at the hardware level, which we're seeing crop up in systems management tools. Hewlett-Packard, for example, recently unveiled a new power-capping feature in its Systems Insight Manager hardware management platform. The capability lets admins control, on an individual machine level, how much power a given server consumes. As HP puts it, while a hardware vendor might tell you that your system needs 1,000 watts of juice, that's only if the machine is running at 100 percent capacity. But best practices in the datacenter dictate that you not run a machine at that level; if you do, you're just begging for trouble should demand spike further.
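The arithmetic behind the cap is simple enough to sketch. This isn't HP's actual implementation, of course -- the `power_cap` helper below is a made-up illustration, using the 1,000-watt nameplate figure and an 80 percent utilization ceiling as stand-ins:

```python
# Hypothetical sketch of per-server power capping. The nameplate figure
# is the vendor's rating at 100 percent load; capping below it leaves
# headroom so a demand spike doesn't push the box past its limits.

NAMEPLATE_WATTS = 1000      # vendor rating at full load
TARGET_UTILIZATION = 0.80   # best practice: don't run a machine flat out

def power_cap(nameplate_watts: int, target_utilization: float) -> int:
    """Return a per-server power cap enforcing the utilization ceiling."""
    return int(nameplate_watts * target_utilization)

cap = power_cap(NAMEPLATE_WATTS, TARGET_UTILIZATION)  # 800 watts
```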
Now, what happens if all your servers are running at a safe 80 percent utilization threshold and there's suddenly a spike in demand? You've got management software in place that will simply wake up other servers that had been cozily and inexpensively snoozing because they weren't being called upon to act. Earlier this month, Appistry introduced that very technology, called EnergySaver, to its Enterprise Application Fabric platform.
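To make the wake-on-spike idea concrete, here's a minimal sketch of the control loop. The `Server` class and its `wake` method are hypothetical stand-ins for whatever a real management platform would invoke (a Wake-on-LAN packet, say, or an out-of-band management command) -- the point is the decision logic, not the plumbing:

```python
# Sketch of "wake sleeping servers on a demand spike": when the awake
# pool's average utilization crosses the safe threshold, bring one
# sleeping node online rather than pushing the busy ones past it.

SAFE_UTILIZATION = 0.80  # the threshold from the scenario above

class Server:
    def __init__(self, name: str):
        self.name = name
        self.awake = False
        self.utilization = 0.0  # fraction of capacity in use

    def wake(self) -> None:
        # Stand-in for the real wake-up mechanism (WoL, IPMI, etc.)
        self.awake = True

def rebalance(servers: list) -> None:
    """Wake one sleeping server whenever the awake pool runs too hot."""
    awake = [s for s in servers if s.awake]
    asleep = [s for s in servers if not s.awake]
    if not awake:
        return
    avg = sum(s.utilization for s in awake) / len(awake)
    if avg > SAFE_UTILIZATION and asleep:
        asleep[0].wake()  # cheapest response: one more node online
```

The same loop, run in reverse when the spike subsides, would put servers back to sleep -- which is exactly the round trip the next piece of the puzzle handles.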
In a similar vein, VMware has spoken to me about plans to incorporate power management features into its server virtualization product line. The way it would work: during downtimes, underutilized VMs would be dynamically consolidated onto as few servers as possible, and the servers left with nothing to do would be powered down. (Virtualization is, of course, another key component in this dynamic server farm, as it's the poster child for energy efficiency.)
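The consolidation step is, at heart, a bin-packing problem. Here's a rough sketch using a first-fit-decreasing heuristic -- my illustration, not VMware's algorithm -- with VM loads and host capacity in made-up utilization units and the actual live migration assumed away:

```python
# Sketch of VM consolidation: pack VMs onto as few hosts as possible
# (first-fit decreasing), so that any host left empty becomes a
# candidate for power-down. Loads are in percentage points of capacity.

HOST_CAPACITY = 100  # each host can absorb 100 points of VM load

def consolidate(vm_loads: list, host_count: int) -> list:
    """Return per-host VM-load lists; empty lists mark hosts to power off."""
    hosts = [[] for _ in range(host_count)]
    for load in sorted(vm_loads, reverse=True):  # place biggest VMs first
        for host in hosts:
            if sum(host) + load <= HOST_CAPACITY:
                host.append(load)
                break
    return hosts

# Five lightly loaded hosts' worth of VMs fit comfortably on two.
placement = consolidate([20, 50, 30, 40, 10], host_count=5)
idle_hosts = sum(1 for h in placement if not h)  # power-down candidates
```

First-fit decreasing is deliberately simple; a production scheduler would also weigh memory, migration cost, and the headroom rule from the capping discussion above.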
So now we've got individual servers consuming only as much energy as they need to do their job. We've got servers waking up only when they're needed -- and going back to sleep when they're not. And we've got virtual machines in play, which in and of themselves yield more bang for your buck from hardware, only they're being dynamically moved among servers to ensure that as little power as possible is being used.
But what if those VMs could be dynamically moved beyond the confines of a given datacenter to, say, one in a different time zone -- based, at least in part, on energy supply and energy costs at a given time? Your local utility sends out an alert, as a Web service, that there's a demand spike in your California datacenter, rates are about to double, and a brownout is imminent. But your dynamic server farm management platform knows that it's already after-hours in New York, where energy prices are lower and supply is ample, so it dynamically pushes the server load to your facility in Albany.
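Stripped to its essentials, the routing decision is a constrained minimization: pick the cheapest site that can still hit your service-level targets. The site list, prices, and latency figures below are all invented for illustration:

```python
# Sketch of the dynamic server farm's routing decision: given a utility
# alert, shift load to the datacenter with the lowest current energy
# price among those still meeting the service-level target.

SLA_MAX_LATENCY_MS = 120  # illustrative service-level ceiling

sites = [
    {"name": "California", "price_per_kwh": 0.30, "latency_ms": 20},
    {"name": "Albany",     "price_per_kwh": 0.11, "latency_ms": 70},
    {"name": "Bangalore",  "price_per_kwh": 0.08, "latency_ms": 210},
]

def cheapest_compliant_site(sites: list, max_latency_ms: int):
    """Cheapest site that can still hit the service-level target."""
    eligible = [s for s in sites if s["latency_ms"] <= max_latency_ms]
    if not eligible:
        return None
    return min(eligible, key=lambda s: s["price_per_kwh"])

target = cheapest_compliant_site(sites, SLA_MAX_LATENCY_MS)  # -> Albany
```

Note that Bangalore is the cheapest site on paper but loses out on the service-level check -- which is why the platform would have to be measuring those levels continually, as the next paragraph argues.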
And if Albany, why not to your facility in Bangalore or Shanghai or wherever the energy prices are lowest -- as long as you can ensure sufficient service levels? (Your dynamic server farm platform would, of course, be measuring service levels continually.)
It's a pretty ambitious green technology vision, but if you think about it, many of the technology pieces are already out there, and just as important, so is the demand. Technologists often think of data as the lifeblood of IT, but data doesn't fuel your equipment -- energy does.