Six ways Google makes its datacenters greener

From measuring religiously to using highly efficient servers, the search giant knows how to make its datacenters more energy-efficient

Google doesn't just know search; the company also appears to have a firm grasp on sustainability in its datacenters.

Take, for example, the average PUE (Power Usage Effectiveness) score for the search giant's datacenters. PUE compares the overall amount of energy used in a datacenter for all functions -- including computing, cooling, and power distribution -- to the amount that goes into computing alone. (For PUE, lower is better, and 1 is as good as you can get.) Google reports a weighted average PUE of 1.21 across all six of its datacenters.

That score means that for every watt put to actual work by storage, networking, and server gear, only 0.21 watt goes toward overhead such as cooling and electrical losses -- roughly 17 percent of the facility's total power draw. By the EPA's account, 1.2 represents a score for a state-of-the-art datacenter. Google is clearly doing a thing or two right.
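To make the arithmetic concrete, here's a minimal sketch of how a PUE of 1.21 breaks down; the wattage figure is invented for illustration, not a Google number:

```python
# Illustrative PUE breakdown -- the IT load figure is made up for the example.
it_power_kw = 1000.0                    # power reaching servers, storage, and networking
total_facility_kw = it_power_kw * 1.21  # everything, including cooling and power distribution

pue = total_facility_kw / it_power_kw   # 1.21
overhead_share = (total_facility_kw - it_power_kw) / total_facility_kw

print(f"PUE: {pue:.2f}")
print(f"Overhead share of total power: {overhead_share:.1%}")       # ~17.4%
print(f"Power doing useful IT work:    {1 - overhead_share:.1%}")   # ~82.6%
```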

But how, you might wonder, does Google do it? Fortunately, the company is willing to reveal at least some of its datacenter best practices in a recently opened section of its corporate site called "Commitment to Sustainable Computing." Here are some techniques offered by Google for wringing efficiency out of its datacenters:

1. Measure, measure, measure. If you wonder how your own datacenter stacks up against Google in terms of efficiency, you should be calculating your own facility's PUE score on a regular basis. After all, how else can you know how well you're doing? Gathering the data can require a little footwork as you walk among the machines taking measurements every so often (unless, like Microsoft, you have a sophisticated monitoring system in place) -- but it helps illustrate which practices are working and which aren't.

[ There are caveats when it comes to measuring and interpreting PUE. Learn more by reading "Don't believe the PUE hype." ]
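Here's one way the bookkeeping might look -- a rough sketch assuming you log periodic meter readings of facility and IT energy (the reading values below are invented). Weighting by energy, rather than averaging the individual ratios, matches the "weighted average" figure Google reports:

```python
# Hypothetical periodic readings: (total facility kWh, IT kWh) per measurement interval.
readings = [
    (1210.0, 1000.0),
    (1185.0,  990.0),
    (1240.0, 1010.0),
]

# Energy-weighted PUE over the whole period: total facility energy / total IT energy.
weighted_pue = sum(total for total, _ in readings) / sum(it for _, it in readings)

# Per-interval PUE is still worth tracking to see which practices move the needle.
for i, (total, it) in enumerate(readings, start=1):
    print(f"interval {i}: PUE {total / it:.2f}")
print(f"energy-weighted PUE: {weighted_pue:.2f}")
```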

2. Use efficient machines. Google says it rates "the efficiency of our servers by measuring the power used by each of the actual computing elements (such as processors and memory) against the power used by all other things (like fans and power conversion)."

Sounds rather like a PUE score for hardware, doesn't it? And it makes abundant sense: How else could you know how efficient your hardware really is if you don't measure?
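A rough sketch of that server-level bookkeeping might look like the following; the component names and wattages are assumptions for illustration, not Google's measurements:

```python
# Hypothetical per-component power draw for a single server, in watts.
compute_elements = {"cpus": 150.0, "memory": 60.0, "disks": 30.0}   # parts doing real work
everything_else  = {"fans": 15.0, "power_conversion_loss": 45.0}    # overhead inside the box

useful = sum(compute_elements.values())
overhead = sum(everything_else.values())

server_efficiency = useful / (useful + overhead)
print(f"Share of server power doing useful work: {server_efficiency:.1%}")
```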

Google seeks machines with highly efficient components such as power supplies. "Our servers only lose a little over 15 percent of the electricity they pull from the wall during these power conversion steps, less than half of what is lost in a 'typical' server. Similarly, our motherboards use very efficient voltage regulator modules, maximizing the amount of electricity delivered to the components that do work."

The company says the efficient power conversion saves it an estimated 500 kWh per server per year compared to a typical system.
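That estimate is easy to sanity-check with back-of-the-envelope math. The figures below -- a roughly 380-watt wall draw, a typical server losing about 30 percent to power conversion versus a little over 15 percent for Google's -- are illustrative assumptions, not numbers Google has published:

```python
# Back-of-the-envelope check on the "500 kWh per server per year" claim.
# All inputs are illustrative assumptions.
wall_draw_w = 380.0      # assumed average power drawn from the wall
typical_loss = 0.30      # "typical" server: ~30% lost in power conversion
efficient_loss = 0.15    # efficient server: a little over 15% lost
hours_per_year = 24 * 365

saved_kwh = wall_draw_w * (typical_loss - efficient_loss) * hours_per_year / 1000.0
print(f"Estimated savings: {saved_kwh:.0f} kWh per server per year")   # ~500 kWh
```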

Google also flexes its power as a mass buyer of hardware. "We encourage all of our suppliers to produce components that operate efficiently whether they are idle, operating at full capacity, or at lower usage levels," the company reports. "Our published studies indicate that more energy proportional systems could cut in half the total energy used by large datacenter operations."
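"Energy proportional" means a server's power draw tracks its utilization instead of idling at a large fraction of peak. A simple linear power model -- with idle-power fractions and a utilization level assumed for illustration -- shows why that matters for a large fleet that spends most of its time well below full load:

```python
def power_draw(utilization, peak_w, idle_fraction):
    """Linear power model: idle power plus a load-dependent component."""
    idle_w = peak_w * idle_fraction
    return idle_w + (peak_w - idle_w) * utilization

peak_w = 300.0
utilization = 0.3   # assumed typical utilization across a large fleet

conventional = power_draw(utilization, peak_w, idle_fraction=0.6)   # idles at ~60% of peak
proportional = power_draw(utilization, peak_w, idle_fraction=0.1)   # nearly energy proportional

print(f"conventional server at 30% load: {conventional:.0f} W")   # ~216 W
print(f"proportional server at 30% load: {proportional:.0f} W")   # ~111 W
```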

3. Strip out the superfluous server parts. Google reports that it omits unnecessary server and rack parts from its systems, such as graphics cards -- and even excess fans. Those tweaks save on wattage. "Moreover the fans are controlled to spin only as fast as necessary to keep the server temperature below a threshold," Google reports.
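The fan behavior Google describes amounts to a simple control loop: run the fans at a floor speed until the server nears a temperature threshold, then ramp up only as much as needed. Here's a minimal sketch of that idea; the threshold, temperature range, and speed range are hypothetical values, not Google's:

```python
def fan_speed_percent(temp_c, threshold_c=35.0, max_temp_c=45.0,
                      min_speed=20.0, max_speed=100.0):
    """Spin fans only as fast as necessary to keep temperature below a threshold."""
    if temp_c <= threshold_c:
        return min_speed    # plenty of headroom: stay slow and cheap
    if temp_c >= max_temp_c:
        return max_speed    # out of headroom: full speed
    # Ramp linearly between the threshold and the maximum allowed temperature.
    fraction = (temp_c - threshold_c) / (max_temp_c - threshold_c)
    return min_speed + fraction * (max_speed - min_speed)

for temp in (30.0, 36.0, 40.0, 46.0):
    print(f"{temp:.0f} C -> fan at {fan_speed_percent(temp):.0f}%")
```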

4. Use free cooling. Google equips its datacenters with cooling towers to inexpensively keep them at an optimal temperature. The approach essentially uses water evaporation to cool the facilities, which means the company doesn't need to turn on its energy-draining chillers as often. That's a big power saver, which translates to a big money saver.

[ Learn about Intel's successful experiment with so-called free cooling by reading "Intel pushes the limits of free cooling to 90 degrees." ]
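The decision behind free cooling is straightforward: when the outdoor air is cool enough -- specifically its wet-bulb temperature, which governs how cold an evaporative cooling tower can get the water -- the chillers can stay off. This sketch uses an assumed supply setpoint and an assumed tower "approach" temperature, not Google's actual figures:

```python
def chillers_needed(wet_bulb_c, supply_water_setpoint_c=21.0, tower_approach_c=4.0):
    """A cooling tower can deliver water a few degrees above the outdoor wet-bulb
    temperature. If that's cold enough to meet the supply setpoint, skip the chillers."""
    tower_supply_c = wet_bulb_c + tower_approach_c
    return tower_supply_c > supply_water_setpoint_c

for wb in (10.0, 16.0, 20.0):
    mode = "chillers on" if chillers_needed(wb) else "free cooling"
    print(f"wet-bulb {wb:.0f} C -> {mode}")
```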

5. Manage airflow. Among the other nuggets of advice Google offers for running a more efficient datacenter, the company suggests this: "Good airflow management is fundamental to efficient datacenter operation. Start with minimizing hot and cold air mixing and eliminating hot spots."

[ Learn more about creating hot and cold aisles and other tips to make your datacenter more energy efficient by reading "Savor the fruit of others' green IT success." ]
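One practical way to spot mixing is to compare each rack's inlet temperature with the cold-aisle supply temperature: a big gap usually means hot exhaust air is recirculating to the intakes. The sensor readings and the 3-degree tolerance below are illustrative assumptions:

```python
# Hypothetical rack inlet temperatures (degrees C) versus the cold-aisle supply temperature.
supply_temp_c = 22.0
rack_inlets = {"rack-01": 22.5, "rack-02": 23.0, "rack-03": 27.5, "rack-04": 22.8}
tolerance_c = 3.0   # an inlet this far above supply suggests hot/cold air mixing

for rack, inlet in rack_inlets.items():
    if inlet - supply_temp_c > tolerance_c:
        delta = inlet - supply_temp_c
        print(f"{rack}: inlet {inlet:.1f} C is {delta:.1f} C above supply -- likely hot spot")
```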

6. Don't be afraid of a little heat. "Raising the cold aisle temperature will minimize chiller energy use. Don't try to run at 70 [degrees Fahrenheit] in the cold aisle, try to run at 80F; virtually all equipment manufacturers allow this," Google recommends.

[ Learn more techniques for keeping datacenters cool by reading "Beat the datacenter heat, cheap." ]
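As a rough illustration of why the setpoint matters, here's a back-of-the-envelope estimate. The "roughly 4 percent chiller savings per degree Fahrenheit" factor is a commonly cited rule of thumb, not a Google figure, and the annual chiller energy is invented; real savings depend heavily on climate and equipment:

```python
# Back-of-the-envelope: raising the cold aisle from 70F to 80F.
current_setpoint_f = 70.0
proposed_setpoint_f = 80.0
chiller_kwh_per_year = 2_000_000.0   # assumed annual chiller energy for the example
savings_per_degree_f = 0.04          # assumed rule of thumb, not a measured value

degrees_raised = proposed_setpoint_f - current_setpoint_f
estimated_savings = chiller_kwh_per_year * min(savings_per_degree_f * degrees_raised, 1.0)
print(f"Rough chiller savings: {estimated_savings:,.0f} kWh per year")
```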

You can learn more about Google's sustainability practices at its corporate Web site.

Copyright © 2008 IDG Communications, Inc.
