By embracing some fairly basic practices, the average data center operator can reduce his or her facility's PUE (power usage effectiveness) to 1.5. That observation came from Google Green Energy Czar Bill Weihl as he spoke about his company's strategies for cutting energy consumption at today's GreenNet 2010 conference in San Francisco.
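To make the metric concrete: PUE is total facility power divided by the power that actually reaches the IT equipment, so a perfect score is 1.0. Here's a minimal sketch of the arithmetic in Python, using made-up wattages rather than figures from any real facility:

    def pue(total_facility_watts, it_equipment_watts):
        # Power usage effectiveness: everything the building draws,
        # divided by what actually reaches the IT gear.
        return total_facility_watts / it_equipment_watts

    # A PUE of 1.5 means half a watt of cooling, power-conversion, and
    # lighting overhead for every watt delivered to servers.
    print(pue(1500000, 1000000))  # 1.5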
The most significant gains, Weihl said, can be achieved by reducing the overhead costs associated with running a data center, including cooling, power infrastructure, and lighting.
The most interesting tip Weihl shared pertained to power infrastructure. Whereas most companies rely on large centralized UPSes (uninterruptible power supplies) to provide backup power for their data center hardware, Google instead equips each server with its own 12-volt battery. Google claims this approach is far more efficient: a large UPS is around 92 to 95 percent efficient, whereas Google says its on-board batteries achieve better than 99.9 percent efficiency.
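Those percentages sound close, but the gap compounds over a year of round-the-clock operation. A rough back-of-the-envelope comparison, assuming a hypothetical 1-megawatt IT load (not a figure Weihl cited):

    IT_LOAD_KW = 1000        # hypothetical 1 MW critical load, not a Google figure
    HOURS_PER_YEAR = 8760

    def annual_loss_kwh(efficiency):
        # Power drawn from the grid = power delivered / efficiency;
        # the difference is dissipated as heat in the conversion stages.
        delivered_kwh = IT_LOAD_KW * HOURS_PER_YEAR
        return delivered_kwh * (1 / efficiency - 1)

    print(round(annual_loss_kwh(0.92)))   # central UPS at 92%: ~761,739 kWh lost per year
    print(round(annual_loss_kwh(0.999)))  # per-server battery at 99.9%: ~8,769 kWh lost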
That's not Google's only server-related secret. The company has also revealed that it uses machines that are stripped of superfluous components, such as graphics cards and excess fans.
In terms of cooling, Weihl noted some of the more common techniques that have gained popularity. First, he said that data center operators should create hot and cold aisles to prevent cool air from mixing with hot air. Paying to chill air only to have it warmed up before it can do its job is, after all, a waste. Setting up aisles is the first step, but Weihl pointed out that air can still escape over the tops of racks and around the corners of aisles. Google uses a fairly low-tech approach to prevent that mixing in at least one of its data centers: placing metal end caps at the ends of rows and hanging plastic curtains, the kind you might find in a meat locker, which keep cold air in while allowing easy access.
Additionally, Weihl urged data center admins to check the manufacturer-recommended inlet temperatures of their IT equipment and adjust the thermostat accordingly. (Data center admins at Google don shorts instead of the warm clothing typically worn by IT staff in the average overchilled data center, Weihl said.)
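Here's a hedged sketch of what that check might look like, comparing measured rack inlet temperatures against a vendor ceiling. The 27-degrees-Celsius limit mirrors the ASHRAE-recommended upper bound of the era; the rack names and readings are invented:

    # Substitute the inlet limit from your own hardware's spec sheets.
    MAX_INLET_C = 27.0

    inlet_readings_c = {"rack-a1": 24.5, "rack-a2": 26.1, "rack-b1": 28.3}  # sample data

    for rack, temp in inlet_readings_c.items():
        headroom = MAX_INLET_C - temp
        if headroom < 0:
            print(f"{rack}: {temp} C exceeds the recommended inlet limit")
        else:
            print(f"{rack}: {headroom:.1f} C of headroom -- the setpoint could likely rise")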
On top of that, he suggested that companies, whenever possible, embrace free cooling -- such as cooling towers and outside air -- to supplement costly CRAC (computer room air conditioner) operations.
Beyond using energy-efficient technologies, Weihl talked up the importance of measuring and monitoring. For example, he said companies should keep track of the utilization levels of servers -- notoriously low in some data centers. "Provision what you need to, not ten times more than you need to," he said.
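What that tracking might look like at its simplest, with invented hosts and an arbitrary 10 percent threshold:

    # A minimal sketch of the utilization tracking Weihl described; the
    # sample data and threshold are assumptions for illustration only.
    avg_cpu_utilization = {"web-01": 0.62, "web-02": 0.08, "batch-01": 0.04}

    UNDERUSED_THRESHOLD = 0.10

    candidates = [host for host, util in avg_cpu_utilization.items()
                  if util < UNDERUSED_THRESHOLD]
    print("Consolidation candidates:", candidates)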
Power management capabilities, which have long been available on desktop computers, are also becoming common on servers -- yet companies often switch those capabilities off. Don't do that, Weihl urged, likening it to ripping the electric motor out of a hybrid vehicle and running on the gasoline engine alone.
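On Linux servers, one quick way to see whether power management is actually engaged is to read the kernel's CPU frequency-scaling governors. A small sketch (Linux-specific; the paths exist only where the cpufreq subsystem is enabled):

    import glob

    # "performance" pins each core at full clock speed; "ondemand" or
    # "powersave" lets idle cores slow down -- the behavior Weihl says
    # admins too often disable.
    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor")):
        with open(path) as f:
            print(path, "->", f.read().strip())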
Additionally, he noted that monitoring performance of both infrastructure and IT gear can help an organization track and address inefficiencies.
On a tangential note, Weihl reiterated that Google is not planning to enter the energy market, despite the fact that the company has filed an application with FERC (the Federal Energy Regulatory Commission) for the right to buy and sell energy on the wholesale market. The company made the move to save money on energy costs down the road, as well as to give itself more flexibility in purchasing clean energy, Weihl said. "We don't want to be the next Enron," he said. "We want to be able to sign contracts for renewables without throwing away lots of money."
This story, "GreenNet 2010: Google shares its green data center secrets," was originally published at InfoWorld.com. Follow the latest developments in green IT and read more of Ted Samson's Sustainable IT blog at InfoWorld.com.