Various initiatives are underway exploring, ahem, running directly from utility power. "We've done a lot of demos worldwide about running data centers at 380vDC (volts of direct current) instead of 208vAC," says Symanski.
Moving to a direct current infrastructure, says Symanski, "gets rid of three conversion steps in the electrical system, and also reduces the load on the air conditioning by reducing the amount of heat being created."
What does that mean in terms of dollar savings? "We've found in most of our demonstrations that we get about a 15 percent reduction in the power used to run IT equipment. Plus the savings from needing less air conditioning, which are probably comparable, but harder to measure."
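To get a feel for what a 15 percent IT-power reduction plus "probably comparable" cooling savings means for the whole facility, here is a back-of-the-envelope sketch. The IT load and PUE (Power Usage Effectiveness) figures are assumptions for illustration, not EPRI data:

```python
# Illustrative sketch: how a 15% cut in IT power, plus a comparable cut
# in cooling/overhead power, affects total facility draw.
# All input figures below are assumed for illustration.

def facility_power(it_kw, pue):
    """Total facility power given IT load and Power Usage Effectiveness."""
    return it_kw * pue

it_kw = 1000.0   # assumed IT load
pue = 1.8        # assumed PUE: 0.8 kW of overhead per 1 kW of IT

overhead_kw = it_kw * (pue - 1.0)

new_it_kw = it_kw * (1 - 0.15)               # 15% IT-power reduction (per Symanski)
new_overhead_kw = overhead_kw * (1 - 0.15)   # "probably comparable" cooling savings

before = it_kw + overhead_kw
after = new_it_kw + new_overhead_kw
print(f"before: {before:.0f} kW, after: {after:.0f} kW "
      f"({100 * (1 - after / before):.0f}% total reduction)")
```

Since both the IT load and the cooling overhead drop by roughly the same fraction, the whole facility's draw falls by about that same 15 percent.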
Since a DC infrastructure means DC UPSs, DC circuit breakers, DC interconnect cables, etc., data centers are unlikely to convert existing AC set-ups, other than as testbeds, says Symanski. "This is for when you are expanding in your data center, like adding a new row of racks, or building a new data center."
Switch to power-saving components
There are many opportunities to reduce power consumption simply by replacing some of the components in existing power and cooling systems.
i/o Data Centers, for example, "uses variable-frequency chillers, pumps, cooling towers and air handlers to reduce energy consumption. By using only the power necessary to keep equipment running at optimal levels, i/o is able to operate energy-efficient data centers."
"You don't change the fan or the motor, you put a VSD on the motor. What used to be a single speed fan you can now slow down," notes EPRI's Symanski. "And by reducing the speed of a fan by 50 percent, with a variable-speed drive (VSD), you use only one-eighth of the power," However, Symanski cautions, "You have to make sure you don't get condensation and that the refrigerant doesn't freeze by slowing down too much."
There's even one easy component upgrade that can be done with some existing IT gear, Symanski points out: Replacing older power supplies with one of the new energy-efficient ones with certifications like 80PLUS and Energy Star.
"New power supplies may come in different versions -- Bronze, Silver, Gold and Platinum -- with correspondingly better efficiencies," Symanski notes. "Replacing an older power supply with a Platinum-level one can yield ten to fifteen percent energy savings, -- and the power supply is an inexpensive part."
Crazy like a fox, or just crazy?
So far, everything you've read is available and being done, or at least being explored in test conditions. But why stop there when there's still room for further improvement? Here are a few blue-sky ideas...
I take full credit and/or blame for this idea. Why not put servers inside turbine wheels, and drop them -- tethered by fiber -- into the water? The water motion on the turbine supplies power; the water movement keeps the server cool. For maximum heat exchange (and to avoid buoyancy problems), use liquid-immersion cooling on the servers, like that from Hardcore Computer. For extra credit -- being careful to put wire mesh screens around the servers -- farm salmon, clams and/or tilapia, since the water may be warmer than otherwise.
Speaking of location, with air-based power generation being developed, how about airborne data center modules, generating power and getting air-cooled without consuming ground footprint? (Granted, an easy target for air pirates armed with six-foot bolt cutters.) Or even larger ones in lighter-than-air dirigible housings?