Free smart cooling? Play an ACE

Datacenters already have the tools in place to make cooling more intelligent and less expensive

In the perfect datacenter, each and every piece of IT hardware would receive the precise amount of cooling necessary to prevent overheating, thus saving you some cash on your utility bill, some wattage on your power budget, and some carbon emissions on your environmental reports. Then again, in the perfect datacenter, the network would never go down, storage capacity would be limitless, servers would self-virtualize, and the break room would always be stocked with hot pizza, cold beer, and assorted pints of premium ice cream. (OK, so maybe I'm thinking of my perfect datacenter.)

However, we live in the real world, where the pizza gets cold while datacenter admins put out (preferably figurative) fires, and where datacenter operators waste precious electricity and thousands of dollars -- if not tens or hundreds of thousands -- keeping their facilities as unnecessarily chilly as meat lockers. Sure, tools do exist for regulating temperature on a rack-by-rack basis, such as the sophisticated sensor-based offerings from HP and SynapSense. But not all datacenter operators have the budget or the level of need to justify investing in that sort of additional hardware.

[ Have you adjusted your datacenter's temperature to meet ASHRAE's latest recommendations? | Learn how Intel pushed the limits of server cooling to 90 degrees. ]

Fortunately, datacenter operators may have just about everything they need to automatically optimize cooling in real time using the IT and cooling equipment they already own. Such is the outcome of a recent project by Intel, IBM, HP, Emerson, and Lawrence Berkeley National Labs called Advanced Cooling Environment (ACE). Using existing sensor technology built into the servers, the organizations devised a way for servers to communicate their cooling needs on a granular basis to existing CRAHs (computer-room air handlers) to automatically adjust their output.

The test environment at one of Intel's datacenters employed a hot aisle/cold aisle configuration, widely viewed as an efficient way to set up server racks. In this arrangement, racks are lined up front to front and back to back, so that cold aisles and hot aisles alternate. Cold air is blown only into the cold aisles, where servers draw it in through their front inlets; hot air is exhausted out the back into the hot aisles, from which it returns to the air handlers. This ensures that machines don't draw in the hot air expelled by their neighbors across the aisle.

[  Learn four more ways to reduce datacenter cooling costs. ]

For the ACE project, the team linked data from the servers' front-panel temperature sensors (already standard on most servers and a requirement for Energy Star servers) to the control systems of the air handlers via standard datacenter management communication protocols. Currently, servers and cooling equipment speak completely different protocols: servers employ, for example, IPMI (Intelligent Platform Management Interface) or WS-MAN (Web Services Management), while cooling gear runs protocols such as Modbus.
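To make the server-side half of that concrete, here is a minimal sketch -- Python, assuming ipmitool is installed and the server's BMC exposes a temperature sensor whose name starts with "Inlet," which varies by vendor -- of polling a server's inlet temperature over IPMI. The host names and credentials are placeholders, not anything from the ACE test.

```python
import subprocess

# Hypothetical host list -- in practice this would come from your asset inventory
HOSTS = ["rack-a-node01.example.com", "rack-a-node02.example.com"]

def read_inlet_temp(host, user="admin", password="changeme"):
    """Read a server's inlet temperature (degrees C) over IPMI via ipmitool.

    Sensor names vary by vendor; "Inlet Temp" is common but not universal.
    """
    output = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "sdr", "type", "temperature"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in output.splitlines():
        fields = [f.strip() for f in line.split("|")]
        # A typical line: "Inlet Temp | 04h | ok | 7.1 | 23 degrees C"
        if len(fields) >= 5 and fields[0].startswith("Inlet"):
            return float(fields[4].split()[0])
    return None

if __name__ == "__main__":
    for host in HOSTS:
        print(host, read_inlet_temp(host))
```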

[ The Energy Star requirement for servers is good -- for a first step. ]

With a translator in place between the two systems, explained David Jenkins, technology marketing manager in Intel's server group, "the CRAHs could dynamically adjust the speed of the fans and the temperature of the air to the requirements of the servers. The results: Servers received air at the appropriate temperature, power costs for cooling went down, and the energy efficiency of the datacenter went up."
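As a rough illustration of what such a translator might look like, here is a minimal Python sketch using the pymodbus library (3.x import path). The register address, the scaling, and the crude proportional rule are all hypothetical; a real CRAH exposes a vendor-specific Modbus register map, and the actual ACE control logic is certainly more sophisticated than this.

```python
from pymodbus.client import ModbusTcpClient  # pymodbus 3.x import path

# Hypothetical register map and scaling -- real CRAH register maps are vendor-specific
FAN_SPEED_REGISTER = 100          # holding register for fan speed, in percent
TARGET_INLET_C = 25.0             # desired server inlet temperature
MIN_FAN_PCT, MAX_FAN_PCT = 30, 100

def fan_speed_for(inlet_temps_c):
    """Crude proportional rule: push the fans harder as the hottest inlet
    drifts above target. A stand-in for the real ACE control logic."""
    error = max(inlet_temps_c) - TARGET_INLET_C
    pct = MIN_FAN_PCT + 10.0 * max(error, 0.0)
    return int(min(max(pct, MIN_FAN_PCT), MAX_FAN_PCT))

def adjust_crah(crah_host, inlet_temps_c):
    """Write a new fan-speed setpoint to the CRAH over Modbus/TCP."""
    client = ModbusTcpClient(crah_host, port=502)
    if client.connect():
        client.write_register(FAN_SPEED_REGISTER, fan_speed_for(inlet_temps_c))
        client.close()

# Example: inlet temperatures gathered from the servers' IPMI sensors
adjust_crah("crah-01.example.com", [22.5, 24.1, 26.8])
```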

The test environment was of limited size, but the project team determined that potential fan energy savings ran as high as 90 percent for particular CRAHs in the test. That's not bad, considering that datacenters are known to spend as much as $1 on cooling for every dollar they spend running IT gear.
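If a 90 percent figure sounds implausible, remember the fan affinity laws: a fan's power draw scales roughly with the cube of its speed, so even modest reductions in fan speed compound into large energy savings. A quick back-of-the-envelope illustration (these numbers are not from the ACE test):

```python
# Fan affinity law: a fan's power draw scales roughly with the cube of its speed
def fan_power_fraction(speed_fraction):
    return speed_fraction ** 3

for pct in (100, 80, 60, 50):
    frac = fan_power_fraction(pct / 100)
    print(f"Fans at {pct}% speed draw roughly {frac:.0%} of full power")
# Half speed works out to roughly 12.5 percent of full power -- which is how
# variable-speed CRAH fans can post such dramatic savings
```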

Moreover, the project promises future energy-saving opportunities. Monitoring and measuring are integral to running an efficient datacenter operation, and it's all the better if you can accomplish those tasks with as few tools as possible. By giving server management tools direct access to a wealth of server performance and thermal health data, as well as a communication link to cooling equipment, you gain the ability to better understand and control what's happening in the datacenter. You can see which servers are underperforming and which machines aren't getting enough cold air; you can even develop a thermal map of your datacenter to pinpoint hot spots. Better still, all this becomes possible with many of the tools you already have in place -- which makes for a much sweeter ROI.
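As a small, hypothetical example of that kind of visibility, the sketch below groups per-server inlet readings (such as those gathered over IPMI in the earlier sketch) by rack into a crude text thermal map and flags hot spots; the rack names and the threshold are invented for illustration.

```python
from collections import defaultdict

# Hypothetical inlet readings keyed by (rack, slot) -- in practice these would
# come from the servers' IPMI sensors, as in the earlier sketch
readings = {
    ("rack-A", 1): 22.0, ("rack-A", 2): 23.5,
    ("rack-B", 1): 27.8, ("rack-B", 2): 29.1,
}
HOT_SPOT_C = 27.0  # illustrative threshold

by_rack = defaultdict(list)
for (rack, _slot), temp in readings.items():
    by_rack[rack].append(temp)

for rack, temps in sorted(by_rack.items()):
    avg = sum(temps) / len(temps)
    flag = "  <-- hot spot" if max(temps) > HOT_SPOT_C else ""
    print(f"{rack}: avg {avg:.1f} C, max {max(temps):.1f} C{flag}")
```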

Learn more about the ACE project [PDF].

Copyright © 2009 IDG Communications, Inc.
