With datacenter operators grappling with limited space and power, yet ever-increasing computing demands, hardware vendors are feeling the heat. They need to find ways to deliver hardware that can, in a nutshell, do more with less. One approach is to groom individual servers to be more energy efficient by, say, reducing the number of fans, installing a more energy-efficient power supply, streamlining the overall design, and so forth. Rackable Systems has taken a different approach: Rather than focusing on energy efficiency at the server level, the company is tackling the problem at the rack level.
The company recently introduced the CloudRack C2, a unified server cabinet built for cluster-computing applications, with some innovative tricks for maximizing both power and cooling efficiency. Case in point: Rackable says the CloudRack C2 is thermally optimized to let datacenters operate at ambient temperatures as high as 104 degrees Fahrenheit. Running a datacenter that warm would mean finding a new place to safely store slabs of frozen meat, but it would also mean a significant drop in operating costs, thanks to reduced CRAC (computer room air conditioner) usage -- provided, that is, the other gear in the datacenter can handle the more extreme temperature as well.
More with less
The power optimization of the CloudRack C2 starts at the tray level. Each tray is, effectively, an ultra-dense 1U server (1.75 inches high, 19 inches wide, and 31 inches deep), complete with standard components such as processor, board, and storage drives, along with features such as full IPMI 2.0 remote management and direct serial access. Datacenter operators have flexibility here: The system supports a wide range of Intel and AMD processors on board form factors from Pico-ITX to EATX -- and it's designed to support the forthcoming Intel Nehalem Xeon as well, making it "future-ready" for next-gen upgrades.
What's particularly interesting, though, is what you won't find in each tray: no fans, no power supply, and no cover. The latter simply isn't necessary, according to Rackable. The extra sheet metal does little more than add weight to the system (which means higher costs and greater fuel consumption for shipping), and the open top offers easier access to server components. "Basically, we've removed the covers because we wanted to have an ecological design and eliminate wasteful pieces of sheet metal," said Saeed Atashie, director of server products at Rackable Systems.
As for the tray fans and power supplies, well, they're not necessary. The CloudRack C2 cabinet takes care of cooling and power distribution for the trays. That translates to fewer moving parts, which in turn means greater energy efficiency, fewer rotational vibrations (which can cause reliability problems), and fewer points of failure. In fact, it means zero moving parts at the server level if you opt for solid-state drives (SSDs).
Opening the cabinet
The cabinet, which comes in either a 23U or a 46U configuration, boasts what Rackable calls its Power XE power-distribution technology, which the company claims virtually eliminates the problem of stranded power in datacenters. "Stranded power" refers to power capacity that is paid for but ultimately unused by IT loads due to design or system configuration. For example, say 700 watts of power is allocated to a server that consumes only around 350 watts. Those extra 350 watts of stranded power represent paid-for capacity that could be put to use elsewhere. The CloudRack C2 accomplishes this feat through greater-than-95 percent phase balancing.
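The arithmetic behind that stranded-power example is simple but worth making concrete; here's a back-of-envelope sketch in Python using the article's illustrative figures (the 40-tray cabinet count comes from Rackable's own power-supply comparison):

```python
# Back-of-envelope stranded-power calculation, using the
# illustrative figures from the article (not measured data).
allocated_watts = 700.0   # capacity provisioned for one server
consumed_watts = 350.0    # what the server actually draws

stranded_watts = allocated_watts - consumed_watts
stranded_fraction = stranded_watts / allocated_watts

print(f"Stranded capacity: {stranded_watts:.0f} W "
      f"({stranded_fraction:.0%} of allocation)")

# Across a 40-tray cabinet, that stranded capacity adds up quickly:
print(f"Cabinet-wide: {stranded_watts * 40 / 1000:.1f} kW stranded")
```

Half of every provisioned watt going unused, multiplied across a full cabinet, is exactly the kind of waste that phase balancing and right-sized power distribution aim to claw back.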
Moreover, the CloudRack's Power XE technology improves power-delivery efficiency by converting incoming AC power to 12-volt DC at a claimed 99 percent efficiency via hot-pluggable, N+1 redundant rectifiers, thus eliminating the need for the aforementioned server-level power supplies. "By replacing 40 power supplies in 40 trays with six rectifiers, we have reduced points of failure," said Atashie.
In the previous version of the CloudRack, AC power was distributed directly to the trays. This new approach of using rectifiers, according to Atashie, helps address the problem of phase imbalance and stranded power.
As for cooling, the cabinet has redundant, hot-swappable fan arrays down the back. The fans are self-regulating: Rotation speeds vary automatically with ambient temperature. According to Rackable, cooling represents 8 percent of the CloudRack's overall power consumption, compared with the 25 percent found in competing blade-server systems. "We're consuming up to 800 watts of power per cabinet for cooling. That's remarkable compared to what the competition is doing. A 42U cabinet sold by people like Dell consumes 5,000 to 5,300 watts of power for cooling. North of 25 percent of the entire power consumption goes to cooling fans -- not to mention the noise," said Atashie.
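Taking the quoted figures at face value, you can sanity-check the comparison yourself; the totals below are implied by the percentages, not stated outright in the article:

```python
# Rough sanity check of the cooling-overhead figures quoted above;
# total cabinet power is inferred from the percentages, not measured.
cloudrack_cooling_watts = 800.0      # "up to 800 watts ... for cooling"
cloudrack_cooling_fraction = 0.08    # "8 percent of overall power"

competitor_cooling_watts = 5000.0    # low end of "5,000 to 5,300 watts"
competitor_cooling_fraction = 0.25   # "north of 25 percent"

# Implied total power draw per cabinet, assuming the percentages hold.
cloudrack_total = cloudrack_cooling_watts / cloudrack_cooling_fraction
competitor_total = competitor_cooling_watts / competitor_cooling_fraction

print(f"Implied CloudRack total:  {cloudrack_total / 1000:.0f} kW")
print(f"Implied competitor total: {competitor_total / 1000:.0f} kW")
```

The numbers hang together: a roughly 10kW CloudRack cabinet spending 800 watts on fans versus a competitor cabinet spending 5kW-plus of a larger budget on cooling alone.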
To reiterate Rackable's intriguing claim: The system is capable of running safely in an elevated temperature environment, as high as 104 degrees Fahrenheit, which sets a high bar for other server-hardware vendors. (For a point of comparison, ASHRAE [American Society of Heating, Refrigerating and Air-conditioning Engineers] recently said it was OK to run datacenters at as high as 80.6 degrees Fahrenheit.)
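For readers who think in Celsius, the two thresholds above convert with the standard formula:

```python
def f_to_c(f):
    """Standard Fahrenheit-to-Celsius conversion: C = (F - 32) * 5/9."""
    return (f - 32.0) * 5.0 / 9.0

print(f"CloudRack C2 rated max: {f_to_c(104.0):.1f} C")  # 40.0 C
print(f"ASHRAE guideline max:   {f_to_c(80.6):.1f} C")   # 27.0 C
```

That's a 13-degree-Celsius gap between the ASHRAE guideline and what Rackable claims the CloudRack C2 can tolerate.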
As with the trays, the cabinet is built using less metal than rival chassis: Rather than sliding trays in and out on rails, the cabinet uses simple tabs on either side. Again, this eco-friendlier design means lower building and shipping costs.
One final point to raise: Rackable claims the CloudRack C2 is capable of delivering up to 32 cores per tray (1,280 cores per cabinet), or a remarkably high storage capacity of up to 8TB per tray. That kind of density is mighty appealing to a space-starved datacenter operator.
All in all, I like what Rackable Systems has done with the CloudRack C2: squeezing out greater energy efficiency while reducing cooling requirements, all of which translates to less wasted power. That means lower utility bills and fewer carbon emissions. On top of that, from an Earth-friendly perspective, I like how the company has reduced the amount of metal and other components in the system -- which also means less fuel consumed in shipping (lower costs, fewer GHGs), not to mention fewer points of failure from an operational perspective. Better for the datacenter, better for the bottom line, better for the planet. It's another fine example of a step forward for green IT.