Like the unfortunate person who continually diets but only seems to gain more weight, power-hungry data centers -- despite adopting virtualization and power management techniques -- only seem to be consuming more energy than ever, to judge from some of the talks at the Uptime Symposium 2010, held last week in New York.
"There is a freight train coming that most people do not see, and it is that you are going to run out of power and you will not be able to keep your data center cool enough," Rob Bernard, the chief environmental strategist for Microsoft, told attendees at the conference.
Power usage is not a new issue, of course. In 2006, the U.S. Department of Energy predicted that data center energy consumption would double by 2011 to more than 120 billion kilowatt-hours (kWh). This prediction seems to be playing out: An ongoing survey from the Uptime Institute found that, from 2005 to 2008, the electricity usage of its members' data centers grew at an average of about 11 percent a year.
But despite all the talk about green computing, data centers don't seem to be getting more power-efficient. In fact, they seem to be getting worse.
"We haven't fundamentally changed the way we do things. We've done a lot of great stuff at the infrastructure level, but we haven't changed our behavior," Bernard said.
Speakers at the conference pointed to a number of different power-sucking culprits, including energy-indifferent application programming, siloed organizational structures, and, ironically, better hardware.
One part of the problem is the way applications are developed. "Applications are architected in the old paradigm," Bernard said. Developers routinely build programs that allocate too much memory and hold on to the processor for too long. A single program that isn't written to go into sleep mode when not in use will drive up power consumption for the entire server.
"The application isn't energy-aware, it doesn't matter that every other application on the client is," he said. That one application will prevent the computer from going into a power-saving sleep mode.
The relentless pace of processor improvement is another culprit, at least if the data center manager doesn't handle it correctly. Thanks to Moore's Law, in which the number of transistors on new chips doubles roughly every two years, each new generation of processors can double the performance of its predecessor.