Facebook's Open Compute and the future of IT

The Open Compute 'open source hardware' initiative is all about specs for the stripped-down, power-efficient equipment underlying the next phase of computing

The phrase "open source hardware" sounds silly. Open source software tends to come in a free version, but hardware, not so much. Yet the folks at the Open Compute Project, led by Facebook, insist on the phrase.

What they really mean is that the hardware designs, including full specs and CAD files, are open source. So why would the Open Compute Project create intellectual property of significant value and give it away?


For one thing, because Facebook wants hardware manufacturers to build servers, storage systems, racks, and other equipment using those plans. In running "tens of thousands" of servers (the company won't give a more specific number) in its data centers, Facebook has discovered that standard commercial equipment isn't as cost- or power-efficient as it could be.

At first, I somewhat cynically assumed Open Compute was really about Facebook exercising its huge power as a customer. You want our business? Then beyond volume discounts, we want to cut deeper into your already slim margins and have you compete to create even cheaper, stripped-down white boxes.

But there's more to it. When I visited Facebook's Menlo Park offices recently, Frank Frankovsky, director of hardware design and supply chain for Facebook, convinced me that huge data centers running "at scale" truly have different design requirements for hardware. Not surprisingly, power management is the dominant concern.

By eliminating what Frankovsky calls the "vanity features" of servers and improving their power supplies and power management, Facebook is making dramatic reductions in the cost, footprint, and power consumption of its data centers.

Frankovsky claimed he would be willing to pay more for a design that suited his needs, in part because he'd get that investment back in power savings. On one side, Open Compute wants to get big customers to agree on requirements for at-scale equipment; on the other, vendors of motherboards, enclosures, power supplies, and so on often have arbitrary differences in design that could easily be reconciled. When you run a single application across many, many servers, it makes sense to have a homogeneous infrastructure.
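Frankovsky's payback argument is easy to check with back-of-the-envelope arithmetic. Here's a minimal Python sketch of that calculation; all of the figures (the price premium, watts saved, electricity rate, and PUE) are my own hypothetical assumptions, not numbers from Facebook or Open Compute:

    # Back-of-the-envelope sketch (all figures hypothetical): when does a
    # pricier but more power-efficient server pay for itself?

    def payback_years(price_premium, watts_saved, cents_per_kwh=10, pue=1.5):
        """Years until power savings recoup the extra purchase price.

        price_premium -- extra cost per server, in dollars (assumed)
        watts_saved   -- average power saved per server, in watts (assumed)
        cents_per_kwh -- electricity price (assumed)
        pue           -- power usage effectiveness: every watt saved at the
                         server also saves cooling/distribution overhead
        """
        kwh_saved_per_year = watts_saved * pue * 24 * 365 / 1000
        dollars_saved_per_year = kwh_saved_per_year * cents_per_kwh / 100
        return price_premium / dollars_saved_per_year

    # E.g., $100 extra for a server that draws 50 watts less:
    print(round(payback_years(100, 50), 1))  # ~1.5 years at these assumptions

At those made-up numbers, a $100-per-server premium that saves 50 watts pays for itself in about a year and a half; multiplied across tens of thousands of servers, that math adds up quickly.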

I can't help but think of it as building a mainframe out of Legos.

One chart in Frankovsky's presentation popped out: a stacked bar chart that showed the current proportion of data center infrastructure split about 50/50 between enterprise IT and "at scale" providers like Facebook. For some indeterminate year in the future, the chart showed enterprise IT slipping slightly and at-scale deployments doubling.

By Frankovsky's own admission, this was a made-up chart. But I get the general idea and I agree with it. The shift to deployments at scale is all about the shift to the public cloud, not just for social networks like Facebook, but for enterprise and consumer applications as well. It's a theme that, coming from a completely different place, was echoed by Oracle's Mark Hurd in my interview with him last week right after the announcement of the Oracle Public Cloud.

Recently I was chatting with InfoWorld's Paul Venezia about the march to the public cloud. He believes some gargantuan failure or other disastrous event will stop the trend in its tracks. Perhaps so. But at this point, although the pace of change in enterprise computing is slow, and there are plenty of exceptions and obstacles along the way, short of such a disaster I can't see anything stopping the migration skyward.

This article, "Facebook's Open Compute and the future of IT," originally appeared at InfoWorld.com. Read more of Eric Knorr's Modernizing IT blog, and for the latest business technology news, follow InfoWorld on Twitter.
