Whatever you may think of HP as a company, it's hard to disagree with the vision it laid out last week with Project Moonshot, a program intended to "pave the way to the future of low-energy computing for emerging Web, cloud, and massive scale environments."
Moonshot's first volley is the Redstone Server: a tiny-footprint, energy-sipping wonder based on the ARM architecture that, if successful, would take data center efficiency to a new level. For the Redstone Server Development Platform, HP tapped Calxeda, a startup that used ARM's Cortex-A9 core to create the EnergyCore "server on a chip." Calxeda claims each server consumes a mere 5 watts of power on average -- 1.25 watts for each of its four cores.
Obviously, that metric doesn't include memory or connected devices. Even so, HP says Redstone servers would consume "89 percent less energy and 94 percent less space" than traditional servers.
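To put those figures in perspective, here's a back-of-envelope sketch using only the numbers quoted above (Calxeda's claimed 5 watts per server and HP's 89 percent energy reduction); the cluster size and electricity price are hypothetical assumptions, not figures from HP or Calxeda.

```python
# Back-of-envelope annual energy comparison based on the quoted figures.
# Assumptions (hypothetical): a 1,600-node cluster, electricity at $0.10/kWh.

NODES = 1600
WATTS_PER_NODE = 5.0           # Calxeda's claimed average per server-on-a-chip
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10

redstone_kwh = NODES * WATTS_PER_NODE * HOURS_PER_YEAR / 1000
# HP's claim: 89 percent less energy than traditional servers, so a
# traditional setup doing the same work would draw redstone_kwh / (1 - 0.89).
traditional_kwh = redstone_kwh / (1 - 0.89)

print(f"Redstone:    {redstone_kwh:,.0f} kWh/yr  (${redstone_kwh * PRICE_PER_KWH:,.0f})")
print(f"Traditional: {traditional_kwh:,.0f} kWh/yr  (${traditional_kwh * PRICE_PER_KWH:,.0f})")
```

Even on these rough assumptions, the gap is dramatic: tens of thousands of kilowatt-hours per year versus hundreds of thousands for the same nominal workload.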
This is exactly the direction in which the data center must go, regardless of which vendors supply the means. Power is a huge expense that outstrips the cost of the hardware itself (not to mention that reducing greenhouse gases just got more urgent than ever).
HP chose Canonical's Ubuntu Server as the operating system for Redstone. According to Canonical, Ubuntu Server was selected because it has a proven track record of running "at scale" across thousands of instances for public cloud services -- and it's the first operating system utilized for HP's public cloud service, now in beta. The public cloud, after all, is the de facto laboratory for the future data center.
But Redstone marks a major departure from the typical data center in another way -- one that might make it impractical for mainstream applications. Today, virtualization is the standard means to extract the last ounce of utilization from server infrastructure. Redstone takes an entirely different approach: Rather than slicing up servers to run multiple virtual machines, it eschews virtualization altogether, running one instance of Ubuntu Server on each EnergyCore server on a chip.
That makes sense, since the EnergyCore doesn't have a whole lot of processing power to slice up. But in mainstream data centers today, virtualization management provides the means to scale applications, move workloads around, and even cost out the compute resources consumed by those workloads. Administrators have grown accustomed to managing virtual machines; outside of academia, few have the software or skills to manage workloads across thousands of little physical machines.
But Redstone is at an early phase. Who knows what software may emerge? The Ubuntu ARM Server project began only last month, with the objective, according to the project's Web page, of answering this question: "Do Linux X86 software loads work the same on ARM CPUs, or are there differences?" That may take a while to explore. And HP has not even given a rough timeframe for when Redstone servers may ship.
Whether or not Moonshot gets lost in space, I admire its ambition. The data center architecture of the future is all about many little machines -- physical or virtual -- running many workloads with the lowest possible power consumption and wasted capacity. Even the specs just released by the Open Compute Project, which essentially detail Facebook's state-of-the-art data center, outline relatively incremental improvements in power efficiency.
HP is talking about reducing power consumption by an order of magnitude. Redstone may or may not be the vehicle that gets us there, but the destination is right on target.
This article, "Little machines and the future of the data center," originally appeared at InfoWorld.com. Read more of Eric Knorr's Modernizing IT blog, and for the latest business technology news, follow InfoWorld on Twitter.