Cooling a datacenter to 68 degrees may be going out of style, APC power and cooling expert Jim Simonelli says.
Servers, storage, and networking gear are often certified to run in temperatures exceeding 100 degrees, and with that in mind, many IT pros are becoming less stringent in setting temperature limits.
Servers and other equipment "can run much hotter than people allow," Simonelli, the CTO at the Schneider Electric-owned APC, said in a recent interview. "Many big datacenter operators are experienced with running datacenters at close to 90 degrees [and with more humidity than is typically allowed]. That's a big difference from 68."
Simonelli's point isn't exactly new. Google, which runs some of the country's largest datacenters, published research two years ago that found temperatures exceeding 100 degrees may not harm disk drives.
But new economic pressures are helping datacenter professionals realize the benefits of turning up the thermostat, Simonelli says. People are starting to realize they could save up to 50 percent of their energy budget just by changing the set point from 68 to 80 degrees, he says.
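As a rough illustration of where such savings come from, a commonly cited rule of thumb holds that each degree Fahrenheit of raised set point trims a few percent of cooling energy. The sketch below uses that rule with illustrative numbers; the 4-percent-per-degree figure and the compounding model are assumptions for demonstration, not Simonelli's or APC's data.

```python
# Back-of-envelope estimate of cooling-energy savings from a higher set point.
# The 4%-per-degree figure is an assumed rule of thumb, not a measured value.

def cooling_savings(old_setpoint_f, new_setpoint_f, savings_per_degree=0.04):
    """Estimate the fraction of cooling energy saved, assuming each degree
    Fahrenheit of raised set point cuts roughly 4% of the remaining
    cooling load (compounded per degree)."""
    degrees_raised = new_setpoint_f - old_setpoint_f
    remaining = (1 - savings_per_degree) ** degrees_raised
    return 1 - remaining

saved = cooling_savings(68, 80)
print(f"Raising the set point from 68F to 80F saves ~{saved:.0%} of cooling energy")
```

Under these assumed numbers the 12-degree change yields savings in the ballpark Simonelli describes; real results depend heavily on the facility's cooling design and climate.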
Going forward, "I think the words 'precision cooling' are going to take on a different meaning," Simonelli says. "You're going to see hotter datacenters than you've ever seen before. You're going to see more humid datacenters than you've ever seen before."
With technologies like virtualization increasingly placing redundancy into the software layer, hardware resiliency is becoming less critical, which reduces the consequences of an individual machine overheating.
Server virtualization also imposes new power and cooling challenges, however, because hypervisors let each server run at much higher CPU utilization. Virtualization lets IT shops consolidate onto fewer servers, but the remaining machines end up doing more work and need more cold air delivered to a smaller physical area.
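The consolidation tradeoff can be sketched with a toy power model. All figures here (server idle and peak wattage, utilization levels, server counts) are hypothetical, chosen only to show how total power can fall while per-machine heat rises.

```python
# Illustrative sketch with hypothetical numbers: consolidating lightly
# loaded servers onto fewer virtualized hosts cuts total power draw but
# concentrates the heat load onto fewer machines.

def power_draw(idle_watts, peak_watts, utilization):
    """Simple linear power model: draw scales between idle and peak."""
    return idle_watts + (peak_watts - idle_watts) * utilization

# Before: 100 physical servers idling at 10% utilization.
before_total = 100 * power_draw(150, 300, 0.10)
# After: 20 virtualized hosts carrying the same work at 50% utilization.
after_total = 20 * power_draw(150, 300, 0.50)

print(f"Total power: {before_total:.0f} W -> {after_total:.0f} W")
print(f"Heat per server: {before_total / 100:.0f} W -> {after_total / 20:.0f} W")
```

With these assumed figures, total draw drops sharply while each remaining server dissipates more heat, which is why cooling must be redirected toward the consolidated racks.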
If you're shutting off lots of servers, the datacenter has to be reconfigured so that cooling isn't directed at empty space, Simonelli notes.
"The need to consider power and cooling alongside virtualization is becoming more and more important," he says. "If you just virtualize, but don't alter your infrastructure, you tend to be less efficient than you could be."