The phrase "open source hardware" sounds silly. Open source software can be copied and given away for free; hardware, being made of atoms rather than bits, not so much. Yet the folks at the Open Compute Project, led by Facebook, insist on the phrase.
What they really mean is that the hardware designs, including full specs and CAD files, are open source. So why would the Open Compute Project create intellectual property of significant value and give it away?
For one thing, because Facebook wants hardware manufacturers to build servers, storage systems, racks, and other equipment using those plans. In running "tens of thousands" of servers (the company won't give a more specific number) in its data centers, Facebook has discovered that standard commercial equipment isn't as cost- or power-efficient as it could be.
At first, I somewhat cynically assumed Open Compute was really about Facebook exercising its huge power as a customer. You want our business? Then beyond volume discounts, we want to cut deeper into your already slim margins and have you compete to create even cheaper, stripped-down white boxes.
But there's more to it. When I visited Facebook's Menlo Park offices recently, Frank Frankovsky, director of hardware design and supply chain for Facebook, convinced me that huge data centers running "at scale" truly have different design requirements for hardware. Not surprisingly, power management is the dominant concern.
By eliminating what Frankovsky calls the "vanity features" of servers and improving their power supplies and power management, Facebook is making dramatic reductions in the cost, footprint, and power consumption of its data centers.
Frankovsky claimed he would be willing to pay more for a design that suited his needs, in part because he'd get that investment back in power savings. Open Compute's pitch works in both directions: on one side, it wants big customers to agree on common requirements for at-scale equipment; on the other, it wants vendors of motherboards, enclosures, power supplies, and so on to eliminate the arbitrary design differences that could easily be resolved. When you run a single application across many, many servers, it makes sense to have a homogeneous infrastructure.
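To see why that trade-off can pencil out, here's a back-of-the-envelope sketch in Python. Every figure in it is a made-up assumption for illustration, not a number from Facebook or Open Compute:

```python
# Illustrative only: all figures below are hypothetical assumptions,
# not numbers from Facebook or the Open Compute Project.
WATTS_SAVED_PER_SERVER = 60        # assumed power reduction per server
PRICE_PREMIUM_PER_SERVER = 120.0   # assumed extra up-front cost, USD
COST_PER_KWH = 0.10                # assumed electricity price, USD per kWh
HOURS_PER_YEAR = 24 * 365

# Annual electricity savings per server from the lower power draw
annual_savings = WATTS_SAVED_PER_SERVER / 1000 * HOURS_PER_YEAR * COST_PER_KWH

# Years until the up-front premium pays for itself
payback_years = PRICE_PREMIUM_PER_SERVER / annual_savings

print(f"Annual savings: ${annual_savings:.2f} per server")
print(f"Payback period: {payback_years:.1f} years")
```

With these invented numbers the premium pays for itself in a bit over two years per server; multiply that by tens of thousands of servers (and add cooling costs, which scale with power draw) and the logic of paying more up front becomes clear.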
I can't help but think of it as building a mainframe out of Legos.