68-degree datacenters becoming a thing of the past, APC says

Some CTOs are letting datacenters run at closer to 90 degrees as a cost-cutting measure

Cooling a datacenter to 68 degrees may be going out of style, APC power and cooling expert Jim Simonelli says.

Servers, storage, and networking gear are often certified to run in temperatures exceeding 100 degrees, and with that in mind, many IT pros are becoming less stringent in setting temperature limits.


Servers and other equipment "can run much hotter than people allow," Simonelli, the CTO at the Schneider Electric-owned APC, said in a recent interview. "Many big datacenter operators are experienced with running datacenters at close to 90 degrees [and with more humidity than is typically allowed]. That's a big difference from 68."

Simonelli's point isn't exactly new. Google, which runs some of the country's largest datacenters, published research two years ago that found temperatures exceeding 100 degrees may not harm disk drives.

But new economic pressures are helping datacenter professionals realize the benefits of turning up the thermostat, Simonelli says. People are starting to realize they could save up to 50 percent of their energy budget just by changing the set point from 68 to 80 degrees, he says.
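The 50 percent figure is easier to picture with a rough back-of-envelope calculation. The sketch below is illustrative only: it assumes a commonly cited rule of thumb of roughly 4 percent cooling-energy savings per degree Fahrenheit of setpoint increase, a constant that varies by facility and is not an APC figure.

```python
# Back-of-envelope estimate of cooling energy saved by raising the setpoint.
# ASSUMPTION (not from the article): roughly 4% cooling-energy savings per
# degree Fahrenheit of setpoint increase; actual savings vary by facility.

SAVINGS_PER_DEGREE_F = 0.04  # assumed rule of thumb

def cooling_savings(old_setpoint_f: float, new_setpoint_f: float) -> float:
    """Return the estimated fraction of cooling energy saved."""
    delta = new_setpoint_f - old_setpoint_f
    return min(delta * SAVINGS_PER_DEGREE_F, 1.0)

if __name__ == "__main__":
    saved = cooling_savings(68, 80)
    print(f"Raising the setpoint from 68F to 80F saves roughly {saved:.0%} "
          "of cooling energy under this rule of thumb.")
    # Prints roughly 48%, in the same ballpark as the "up to 50 percent" claim.
```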

Going forward, "I think the words 'precision cooling' are going to take on a different meaning," Simonelli says. "You're going to see hotter datacenters than you've ever seen before. You're going to see more humid datacenters than you've ever seen before."

With technologies like virtualization increasingly placing redundancy in the software layer, hardware resiliency is becoming less critical, which makes the risks of running equipment hotter easier to tolerate.

Server virtualization also imposes new power and cooling challenges, however, because hypervisors drive each server to much higher CPU utilization. Consolidating onto fewer servers means the remaining machines do more work, so more cold air has to be delivered to a smaller physical area.
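As a rough illustration of that density effect, with assumed numbers that are not from the article, consolidating many lightly loaded servers onto a few heavily loaded hosts can cut total power while sharply raising watts per rack:

```python
# Illustrative-only arithmetic (assumed numbers, not from the article) showing
# why consolidation concentrates heat even as total power falls.

servers_before, watts_each_before, racks_before = 100, 200, 10  # lightly loaded
servers_after,  watts_each_after,  racks_after  = 20,  450, 2   # busy hosts

total_before = servers_before * watts_each_before  # 20,000 W over 10 racks
total_after  = servers_after  * watts_each_after   #  9,000 W in 2 racks

print(f"Before: {total_before/1000:.1f} kW total, "
      f"{total_before/racks_before/1000:.1f} kW per rack")
print(f"After:  {total_after/1000:.1f} kW total, "
      f"{total_after/racks_after/1000:.1f} kW per rack")
# Total power drops, but per-rack density more than doubles, so cooling must
# be delivered to a smaller, hotter area.
```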

If consolidation shuts off lots of servers, the datacenter has to be reconfigured so that cooling isn't directed at empty space, Simonelli notes.

"The need to consider power and cooling alongside virtualization is becoming more and more important," he says. "If you just virtualize, but don't alter your infrastructure, you tend to be less efficient than you could be."

Enterprises need monitoring tools to understand how power needs change as virtual servers move from one physical host to another. Before virtualization, a critical application might sit on a certain server in a certain rack, with two dedicated power feeds, Simonelli notes. With live migration tools, a VM could move from a server with fully redundant power and cooling supplies to a server with something less than that, so visibility into power and cooling is more important than ever. The ability to move virtual machines at will means "that technology is becoming disconnected from where you have appropriate power and cooling capacity," Simonelli says.
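A minimal sketch of the kind of check Simonelli describes, using a hypothetical inventory data model rather than any APC or hypervisor API: flag critical VMs that have landed on hosts without fully redundant power feeds.

```python
# Hypothetical inventory data; field names and values are assumptions for
# illustration, not an APC or hypervisor API.

hosts = {
    "esx-01": {"rack": "A1", "power_feeds": 2},  # fully redundant power
    "esx-02": {"rack": "B3", "power_feeds": 1},  # single feed only
}

vm_placement = {
    "billing-db": "esx-01",
    "billing-db-replica": "esx-02",  # moved here by live migration
}

CRITICAL_VMS = {"billing-db", "billing-db-replica"}

for vm, host in vm_placement.items():
    if vm in CRITICAL_VMS and hosts[host]["power_feeds"] < 2:
        print(f"WARNING: {vm} is on {host} (rack {hosts[host]['rack']}), "
              "which has only one power feed.")
```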

To support the high densities introduced by virtualization and other technologies such as blade servers, cooling must be brought close to the rack and server, Simonelli says. As it stands, cooling is already the biggest energy hog in the datacenter, with power wasted because of over-sized AC systems and temperatures set too low, he says.

While every datacenter has different needs, Simonelli says enterprises can learn something from the giant SuperNAP co-location datacenter in Las Vegas, a 407,000 square-foot building that relies heavily on APC equipment, such as NetShelter SX racks, thousands of rack-mounted power distribution units, and UPS units.

While the site can support 60 megawatts, it's being built out in 20-megawatt chunks. "That means they can maximize the energy consumption, the efficiency of the datacenter as they scale. They're not powering the 60-megawatt site right away," Simonelli says.

One of the biggest mistakes is to over-size power capacity, in anticipation of future growth that may never come. Companies have to plan for where they think they will be a few years from now, but build out in smaller increments, he says.

"You have to have the floor space, and you have to have the capability of getting power from the utility," Simonelli says. "But if you're going to build out a one- or a five-megawatt datacenter, and you know that your first year of deployment is only going to be 100 kilowatts, get the space and make sure you have power from the utility for five megawatts but just build it out in 250 or 500 kilowatt chunks."

This story, "68-degree datacenters becoming a thing of the past, APC says," was originally published by NetworkWorld.
