The Open Compute Project Foundation recently announced results from Facebook's attempts to build an efficient data center at the lowest possible cost. The foundation claims to have reduced the cost of building a data center by 24 percent and improved ongoing efficiency by 38 percent versus state-of-the-art data centers.
Those numbers are impressive, but don't expect to be able to deploy the foundation's technology in your data center just yet.
Bringing deep data center engineering skills to the masses
By releasing Facebook's cost savings and, more important, the underlying hardware specifications for the motherboards, power supply, and chassis, the foundation hopes to bring efficiency and lower-cost data centers to companies that don't have the engineering depth of Facebook, Google, or Amazon.com.
Facebook deserves kudos for its work on the project. Getting together a board of directors that includes Andy Bechtolsheim from Arista Networks, Don Duet from Goldman Sachs, Mark Roenigk from Rackspace, and Jason Waxman from Intel couldn't have been easy. But cost reduction and efficiency figures upward of 20 percent must have attracted attention from prospective board members and the long list of hardware, software, and institutional partners, including Dell, Intel, Huawei, Red Hat, Netflix, and North Carolina State University.
The Open Compute Project released its design specifications for servers and data center technology earlier this week. The servers fit into a 1.5U form factor, slightly taller than a standard 1U server chassis, and can use either Intel or AMD motherboards. The version 2.0 Intel specification provides double the compute density of version 1.0 by using two Intel "Sandy Bridge" processors per board. The version 2.0 AMD specification also doubles compute density with support for two AMD Opteron "Magny-Cours" or "Interlagos" processors per board.
Open Compute servers are housed in three adjoining 42U racks, dubbed "triplets." Each rack column contains 30 Open Compute Project servers, for a total of 90 servers per triplet. Additionally, each rack column has two top-of-rack switches. A battery-pack rack cabinet sits between a pair of triplets to provide DC power in the event of a loss of AC power.
A specification without a certification
At the Open Compute Project Summit this week, Andy Bechtolsheim said, "Open Compute Foundation is not a marketing org. There's nothing to sell here." But there is -- not by the foundation itself but by hardware vendors. And therein lies the problem.
It's critical for hardware vendors to quickly release Open Compute Project-certified hardware to capitalize on the interest generated by the Facebook data center effort. But there isn't a certification process for the hardware as yet. It's quite possible that the foundation's technology will get sold in products with varying degrees of adherence to its specifications.
As GigaOM reports, "when the effort launched in April, Dell and Hewlett-Packard both showed off servers that incorporated some of the elements of Open Compute." The phrase "some of the elements" should be worrisome both to the Open Compute Project and to potential buyers. Without a certification process, "Open Compute Project-based" hardware will proliferate with no standard basis for comparison across offerings, as vendors rush to take advantage of the project's buzz by putting existing inventory under a new marketing banner.
The first products are already here: Silicon Mechanics, a rack-mount server manufacturer and member of the Open Compute Project, announced an Open Compute triplet based on the specifications. The price of a 90-compute-node triplet with entry-level processors, RAM, and disk but no operating system or software starts at $287,755 and can grow to more than $2 million.
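To put that quoted starting price in perspective, a quick back-of-the-envelope calculation (my own arithmetic, not a figure from Silicon Mechanics or the spec) divides the entry-level triplet price across its 90 compute nodes:

```python
# Illustrative arithmetic only: per-node starting cost of the
# Silicon Mechanics entry-level triplet quoted above.
triplet_price = 287_755   # USD, entry-level 90-node configuration
nodes_per_rack = 30       # servers per rack column (per the spec)
racks_per_triplet = 3     # three adjoining 42U racks

nodes = nodes_per_rack * racks_per_triplet  # 90 nodes per triplet
cost_per_node = triplet_price / nodes
print(f"${cost_per_node:,.2f} per compute node")  # roughly $3,197 per node
```

That per-node figure covers entry-level processors, RAM, and disk only; any operating system, software, and support costs come on top of it.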
Until there's certification, buyer beware
I encourage you to read through the design specification and compare it to your current or future data center plans. However, until the Open Compute Project comes out with a certification process, I urge you to ask vendors which parts of their products align with the specifications and which parts do not. For now at least, it's "buyer beware" when it comes to products claiming to be Open Compute Project-based.
I should state: "The postings on this site are my own and don't necessarily represent IBM's positions, strategies, or opinions."
This article, "From Facebook's data center to yours: 'Open source' hardware," was originally published at InfoWorld.com. Read more of Savio Rodrigues's Open Sources blog and follow the latest developments in open source at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.