It's a pleasant afternoon in New Jersey (I'm pretty sure that's happened before). You're a datacenter operator walking the floor of your facility just outside Newark -- but it's eerily quiet, considering that fleets of servers are running all your mission-critical applications.
Once upon a time, the absence of humming might have meant something was catastrophically wrong. But a glance at your systems' management UI confirms that, indeed, your apps are being served. It's just that your servers in your India-based datacenter facility are doing most of the work -- and conveniently, it's off-peak hours there, which means you're paying a fraction of the energy costs. The fact that machines are self-adjusting their power consumption and virtual workloads -- even powering on and off as needed (or not) -- helps shrink the electric bill even more.
Energy efficiency, one of the key concepts under the umbrella of green technology, has become increasingly important to datacenter operators who are struggling with soaring electric and cooling bills, not to mention limited power from local utilities and space in existing datacenter facilities. From the chip level up to the software layer, vendors are devising ways to reduce the amount of juice fed to machines while wringing as much work out of each box as possible.
And based on what we're seeing, I expect the next big thing in green tech will be what I'll dub the dynamic server farm, a hybrid of power management, systems management, Web services, and virtualization technologies.
One piece of this puzzle: power management at the hardware level, which we're seeing crop up in systems management tools. Hewlett-Packard, for example, recently unveiled a new power-capping feature in its Systems Insight Manager hardware management platform. The capability lets admins control, machine by machine, how much power a given server is allowed to consume. As HP puts it, while a hardware vendor might tell you that your system needs 1,000 watts of juice, that's only if the machine is running at 100 percent capacity. But best practices in the datacenter dictate that you not run a machine that hard; if you do, you're just begging for trouble should demand increase further.
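To make the arithmetic concrete, here's a minimal sketch of what per-server power capping buys you at the rack level. This is purely illustrative -- the function names and the 80 percent figure are my assumptions, not HP's actual interface -- but it shows why capped budgets come in well under nameplate ratings.

```python
# Hypothetical illustration of per-server power capping.
# NAMEPLATE_WATTS and SAFE_UTILIZATION are assumed figures from the
# article's example, not values from any real management API.

SAFE_UTILIZATION = 0.80  # best practice: leave headroom below 100%

def power_cap(nameplate_watts: float,
              safe_utilization: float = SAFE_UTILIZATION) -> float:
    """Return the wattage ceiling to enforce on a single server."""
    return nameplate_watts * safe_utilization

def rack_budget(nameplate_ratings: list[float]) -> float:
    """Total capped power draw for a rack of servers."""
    return sum(power_cap(watts) for watts in nameplate_ratings)

# Five "1,000-watt" boxes capped at 80 percent need a 4,000 W budget,
# not the 5,000 W their nameplate ratings would suggest.
```

In other words, capping isn't just about protecting individual machines; it lets you provision a rack (or a whole facility) against realistic draw rather than worst-case vendor numbers.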
Now, what happens if all your servers are running at a safe 80 percent utilization threshold and there's suddenly a spike in demand? You've got management software in place that will simply wake up other servers that were cozily and inexpensively snoozing since they weren't being called upon to act. Earlier this month, Appistry introduced that very technology, called EnergySaver, to its Enterprise Application Fabric platform.
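The wake-up decision itself is simple arithmetic: figure out how much load the active machines are carrying, and wake just enough sleepers to bring average utilization back under the threshold. The sketch below is loosely in the spirit of what Appistry describes; the function and its logic are my own assumption about how such a controller might work, not EnergySaver's actual implementation.

```python
# Assumed sketch of a demand-triggered wake-up policy; not Appistry's
# actual EnergySaver logic.
import math

def servers_to_wake(active: int, sleeping: int, utilization: float,
                    threshold: float = 0.80) -> int:
    """Return how many sleeping servers to wake so that average
    utilization falls back to the threshold or below."""
    if utilization <= threshold:
        return 0  # demand is within the safe zone; let them snooze
    # Current load, measured in "fully busy server" units.
    load = active * utilization
    # Servers needed to carry that load at or below the threshold.
    needed = math.ceil(load / threshold)
    # Wake the shortfall, but never more than we actually have asleep.
    return min(needed - active, sleeping)
```

For example, ten servers running at 95 percent against an 80 percent threshold call for two more machines to come online; if demand stays at 80 percent or below, nothing wakes up.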
In a similar vein, VMware has spoken to me about plans to incorporate power management features into its server virtualization product line. The idea: during downtimes, the software would dynamically migrate underutilized VMs onto as few servers as possible, then power down the servers left empty. (Virtualization is, of course, another key component of this dynamic server farm, as it's the poster child for energy efficiency.)
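At its core, that consolidation step is a bin-packing problem: squeeze the VM loads onto the fewest hosts, and whatever hosts end up empty can be switched off. The toy first-fit-decreasing pass below is my own illustration of the shape of the idea -- VMware's eventual product would surely weigh far more than raw load -- so treat the names and the single-number "load" model as assumptions.

```python
# Toy VM-consolidation pass (first-fit decreasing bin packing).
# Each VM is reduced to a single load figure between 0 and the host
# capacity -- a deliberate simplification for illustration only.

def consolidate(vm_loads: list[float],
                host_capacity: float = 1.0) -> list[list[float]]:
    """Pack VM loads onto as few hosts as possible.
    Returns one list of loads per host that must stay powered on;
    any host not in the result can be powered off."""
    hosts: list[list[float]] = []
    for load in sorted(vm_loads, reverse=True):  # place big VMs first
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)  # fits on an already-running host
                break
        else:
            hosts.append([load])  # no room anywhere; keep one more host on
    return hosts

# Six lightly loaded VMs that might have been spread across six hosts
# fit comfortably on two:
# consolidate([0.3, 0.2, 0.4, 0.1, 0.35, 0.15])  -> 2 hosts
```

The greedy approach isn't optimal in general, but it captures the payoff: the four hosts freed up in that example stop drawing power entirely, which is exactly the behavior VMware describes.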