If Mark Twain were alive today, he might have popularized yet another saying for those of us in the computer industry: "Lies, damned lies, and benchmarks."
Benchmark results are often met with skepticism and controversy because of concerns that the results can be manipulated to make one vendor or platform appear better than another.
Yet when organizations are implementing or evaluating virtualization platforms for their environments, they need a way to compare the performance and scalability of the different platforms, make the right hardware choices, and then measure platform performance against a baseline on an ongoing basis. To do all of that, they need some sort of benchmark solution.
Unfortunately, the benchmark solutions that many IT shops use today in the physical world don't translate well into a virtual machine environment. These physical server benchmarks are built on the old assumption of a single workload per server, so they can't properly account for server consolidation or multiple workloads running simultaneously on a single machine.
VMware was the first to try its hand at addressing that challenge with the launch of its proprietary VMmark benchmark back in July 2007. Although extremely useful, it wasn't the easiest tool to work with, earning a reputation for long run times and onerous test-bed hardware requirements.
In addition, it attracted controversy over the publication, and subsequent challenging, of vendor and competitor benchmark results. VMware and Citrix, in particular, have had numerous public debates over published scores, disputing their validity on issues such as the configuration or platform version used in the testing.
In October 2006, the nonprofit group Standard Performance Evaluation Corp. (SPEC) formed a working group to develop a new vendor-neutral standard benchmark for measuring virtualization performance. SPEC set out to measure the performance of data center servers used for virtualized server consolidation and to include options for measuring power consumption and power/performance relationships. Nearly four years in the making, SPEC's vendor-neutral virtualization benchmark, dubbed SPECvirt_sc2010, finally shipped in July 2010 -- three years after VMware's VMmark hit the scene.
SPEC's virtualization benchmark was developed by the SPEC virtualization subcommittee, whose members and contributors included AMD, Dell, Fujitsu, HP, IBM, Intel, Oracle, Red Hat, Unisys, and VMware. Conspicuously missing from that list were the hypervisor platform vendors Citrix, Microsoft, and Parallels. While these three vendors are all SPEC members, according to SPEC's press announcement they do not appear to have participated as subcommittee members or contributors to the benchmark. If these companies did participate, why weren't they mentioned? And are they throwing their support behind this new SPEC virtualization benchmark?
And if they didn't participate in the process, why not? If these three major hypervisor vendors had no say in the SPEC benchmark, can it really be trusted as vendor-neutral when comparing the Hyper-V, XenServer, and Parallels hypervisors?