In the perfect datacenter, each and every piece of IT hardware would receive the precise amount of cooling necessary to prevent overheating, thus saving you some cash on your utility bill, some wattage on your power budget, and some carbon emissions on your environmental reports. Then again, in the perfect datacenter, the network would never go down, storage capacity would be limitless, servers would self-virtualize, and the break room would always be stocked with hot pizza, cold beer, and assorted pints of premium ice cream. (OK, so maybe I'm thinking of my perfect datacenter.)
However, we live in the real world, where the pizza gets cold while datacenter admins put out (preferably figurative) fires, and where datacenter operators waste precious electricity and thousands of dollars -- if not tens or hundreds of thousands of dollars -- creating unnecessarily chilly, meat-locker-like conditions in their datacenters. Sure, tools do exist for better regulating temperature on a rack-by-rack basis, such as the sophisticated sensor-based offerings from companies like HP and SynapSense. However, not all datacenter operators have the budget, or a level of need, that justifies investing in that sort of additional hardware.
Fortunately, datacenter operators may have just about everything they need to automatically optimize cooling in real time using the IT and cooling equipment they already own. Such is the outcome of a recent project by Intel, IBM, HP, Emerson, and Lawrence Berkeley National Laboratory called Advanced Cooling Environment (ACE). Using sensor technology already built into the servers, the organizations devised a way for servers to communicate their cooling needs on a granular basis to existing CRAHs (computer-room air handlers), which automatically adjust their output to match.
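To make the idea concrete, here's a minimal sketch of that kind of closed loop: servers report their inlet temperatures, and the air handler's output tracks the hottest server's actual needs instead of a fixed worst-case setpoint. All names, thresholds, and the simple proportional-control logic are my own illustrative assumptions, not the ACE project's actual interfaces.

```python
# Illustrative sketch only -- the setpoint, limits, and gain are assumptions,
# not values from the ACE project.
TARGET_INLET_C = 27.0              # assumed safe upper bound for server inlet air
MIN_FAN_PCT, MAX_FAN_PCT = 30.0, 100.0

def crah_fan_setting(inlet_temps_c, current_fan_pct, gain=5.0):
    """Proportional adjustment: raise fan output when the hottest reported
    inlet exceeds the target, and back off when there's headroom."""
    hottest = max(inlet_temps_c)
    error = hottest - TARGET_INLET_C          # positive means too hot
    new_pct = current_fan_pct + gain * error  # one P-controller step
    return max(MIN_FAN_PCT, min(MAX_FAN_PCT, new_pct))
```

With readings of 24.5, 26.0, and 29.0 degrees C and the fan at 50 percent, the hottest inlet is 2 degrees over target, so the fan steps up to 60 percent; if every server reports cool inlets, the controller ramps down toward the minimum instead of blasting cold air around the clock. That ramp-down is where the energy savings come from.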
The test environment at one of Intel's datacenters employed a hot aisle/cold aisle configuration, widely viewed as an efficient way to set up server racks. In this configuration, server racks are lined up front to front and back to back. Cold air is blown only toward the fronts of the servers, where it is drawn in via the inlets; hot air is released out the back, to be returned to the air handlers for cooling. This ensures that machines don't draw in the hot air expelled by their neighbors across the aisle.