When David Young told his colocation provider late last year that his online applications startup, Joyent, planned to add 10 servers to its 150-system datacenter, he received a rude awakening. The local power utility in Southern California wouldn’t be able to provide the additional electricity needed. Joyent’s upgrade would have to wait.
“We had to find creative ways to get through this period,” says Young, whose urgent need for more computing capacity forced him to contract with a second colocation provider.
Tales such as Young’s have become increasingly common during the past few years. The cost and availability of electricity are emerging as key concerns for IT managers when building datacenters, in many cases trumping such traditional considerations as seismic stability, real estate prices, and quality of life for employees.
Google, for example, has watched its energy consumption almost double over the past three generations of upgrades to its sprawling computing infrastructure. It recently unveiled a major new datacenter site in a remote part of Oregon, where power costs are a fraction of those at Google’s home base in Silicon Valley. But cheap power may not be enough. Last year, Google engineer Luiz André Barroso predicted that energy costs would overtake equipment costs, “possibly by a large margin,” if power-hungry datacenters didn’t mend their ways. Barroso went on to warn that datacenters’ growing appetite for power “could have serious consequences for the overall affordability of computing, not to mention the overall health of the planet.”
Keeping cool in a crisis
IDC analyst Michelle Bailey says U.S. companies spent approximately $5.8 billion powering servers in 2005 and another $3.5 billion or more keeping them cool. That compares with approximately $20.5 billion spent purchasing the equipment.
“It’s a big problem,” Bailey says of the skyrocketing energy bills. “Over the lifecycle of the system, actually powering and cooling the system starts to become almost equal to the price.”
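Bailey’s lifecycle claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is a rough illustration, not IDC data: the server wattage, cooling overhead, electricity rate, service life, and purchase price are all assumed figures.

```python
# Rough lifecycle-cost sketch. Every figure below is an illustrative
# assumption, not IDC data.
SERVER_WATTS = 400        # assumed average draw of one server
COOLING_OVERHEAD = 1.0    # assume cooling burns ~1 watt per watt of IT load
PRICE_PER_KWH = 0.08      # assumed commercial electricity rate, $/kWh
LIFESPAN_YEARS = 4        # assumed service life
PURCHASE_PRICE = 3000     # assumed hardware cost, $

HOURS_PER_YEAR = 24 * 365
total_kw = SERVER_WATTS * (1 + COOLING_OVERHEAD) / 1000.0
energy_cost = total_kw * HOURS_PER_YEAR * LIFESPAN_YEARS * PRICE_PER_KWH

print(f"Lifetime power + cooling: ${energy_cost:,.0f}")
print(f"Purchase price:           ${PURCHASE_PRICE:,}")
print(f"Energy as share of price: {energy_cost / PURCHASE_PRICE:.0%}")
```

Under these assumptions, power and cooling come to roughly three-quarters of the hardware price over four years; with hotter chips or pricier electricity, the two figures converge, just as Bailey describes.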
The current IT power crisis stems not from any single, readily fixed cause but from a combination of subtle trends. At its core is what Jerald Murphy, COO and director of research operations at Robert Frances Group, calls the “dark underbelly” of Moore’s Law: As processor performance has doubled every couple of years or so, so too have power consumption and its side effect, heat.
That wasn’t a problem decades ago, when the latest and greatest chip consumed 8 watts instead of its predecessor’s 4. But as those doublings compounded, the industry reached a tipping point in the past two or three years. Today’s chips draw anywhere from 90 to 110 watts, twice as much as the chips of just a couple of years ago. They also run hotter, which drives up the cost of datacenter cooling. And as if that weren’t enough, the growing use of blade servers, once viewed as a panacea for power and space limitations, is only making things worse.
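The arithmetic behind that tipping point is simple compounding. The snippet below is a hypothetical illustration of Murphy’s point, assuming power draw doubles with each roughly two-year chip generation, starting from the article’s 4-watt example.

```python
# Illustrative sketch: chip power doubling each generation.
# The 4 W starting point and two-year cadence are assumptions
# taken from the article's framing, not measured data.
watts = 4.0
YEARS_PER_GENERATION = 2

for generation in range(6):
    print(f"Generation {generation}: ~{watts:.0f} W "
          f"(year {generation * YEARS_PER_GENERATION})")
    watts *= 2
```

Five doublings take a chip from 4 watts to 128, right in the neighborhood of today’s 90-to-110-watt parts, and since nearly every watt a chip consumes ends up as heat, the cooling load compounds in lockstep.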