Vendor benchmarks: Buyers beware

Recent SPEC benchmark results show EMC's new storage array trouncing the competition, but it's tough to gauge what pushed its wares to the front of the pack


Third-party benchmarks from organizations such as SPEC are a great tool for companies shopping around for new IT gear -- in theory. They could provide an unbiased, apples-to-apples comparison of results from identical tests to determine which server or storage array is the best for particular workloads. Not surprisingly, though, the vendors typically don't use them that way.

For example, a casual glance at recently published SPEC benchmark results would suggest that EMC's new VNX storage array is so darn fast that competitors such as NetApp and HP may as well pack up their file servers and go home. But closer scrutiny reveals that although EMC may indeed be pushing the envelope on file-server speeds, companies in the market for new storage gear still have comparison shopping to do.

What's behind the numbers?

According to recent SPECsfs2008_cifs results comparing the performance of storage systems running a CIFS workload, EMC's VNX storage array achieved throughput of 661,951 ops (operations per second) with an ORT (overall response time) of 0.81 ms. The next-best results on the list from a non-EMC competitor came from NetApp: Its FAS3140 system, running FCAL disks, managed a throughput of 55,476 ops and an ORT of 1.84 ms. Given only those figures, one might assume that EMC came up with some extraordinarily potent secret storage sauce that will ease Big Data pains worldwide.

But closer inspection suggests there's more than uber EMC technology at play: For starters, EMC ran its test on a VNX VG8 Gateway/EMC VNX5700 with five X-Blades (including one standby). The configuration comprises 581 disks, 240GB of memory, and a 10GbE network.

By contrast, no non-EMC array on the list was tested on a 10GbE network; rather, most ran on a variant of 1GbE. Additionally, the aforementioned NetApp system was configured with 224 disks (fewer than half the number in the EMC setup) and 9GB of memory.

Thus, the only clear takeaway from these results is that a system with lots of disks and memory running on a network with obese pipes is faster than a system with a modest number of disks and a modest amount of memory running on a more traditional network.
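One rough way to see how much of that gap is simply hardware is to normalize each published score by the resources in its test bed. The figures below come straight from the CIFS results above; dividing ops by disk count or by gigabytes of memory is purely an illustrative back-of-the-envelope ratio, not an official SPEC metric:

```python
# Published SPECsfs2008_cifs figures quoted in the article.
systems = {
    "EMC VNX VG8/VNX5700": {"ops": 661_951, "disks": 581, "mem_gb": 240},
    "NetApp FAS3140":      {"ops": 55_476,  "disks": 224, "mem_gb": 9},
}

# Crude per-resource normalization: ops per disk and ops per GB of memory.
for name, s in systems.items():
    per_disk = s["ops"] / s["disks"]
    per_gb = s["ops"] / s["mem_gb"]
    print(f"{name}: {per_disk:,.0f} ops/disk, {per_gb:,.0f} ops/GB memory")
```

Even on this crude basis EMC still leads on ops per disk (roughly 1,139 versus about 248), so spindle count alone doesn't explain the gap, though the unmatched 10GbE network still muddies any direct comparison.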

Appleish-to-appleish comparisons

The SPECsfs2008_nfs.v3 results, which compare performance running NFSv3 workloads, provide slightly more comparable test beds. Still, it's not quite an apples-to-apples comparison.

EMC, HP, and NetApp all listed systems tested on a 10GbE network. EMC's aforementioned VG8 Gateway/EMC VNX5700 combo, with 457 disks and 240GB of memory, achieved a throughput of 497,623 ops with an ORT of 0.96 ms.

By comparison, HP's best score came from its BL860c i2 4-node HA-NFS cluster, loaded with 1,480 disks and 800GB of memory. That arrangement yielded a throughput of 333,574 ops and an ORT of 1.68 ms.

NetApp, meanwhile, fared best with its FAS6240, equipped with 288 disks and 1,128GB of memory. That system managed a throughput of 190,675 ops at an ORT of 1.17 ms.

In this contest, it's tough to attribute EMC's results to an abundance of extra disks, memory, and fat pipes. HP brought far more of each to the party and still fell short compared to EMC. NetApp's entry had fewer disks -- but far more memory -- and its overall scores were considerably lower than EMC's.
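The same back-of-the-envelope normalization applied to the NFSv3 figures quoted above makes that point concrete. Again, ops per disk and ops per gigabyte are illustrative ratios of this article's own numbers, not SPEC metrics:

```python
# Published SPECsfs2008_nfs.v3 figures quoted in the article:
# (throughput in ops, disk count, memory in GB).
nfs = {
    "EMC VG8/VNX5700":        (497_623, 457, 240),
    "HP BL860c i2 4-node":    (333_574, 1_480, 800),
    "NetApp FAS6240":         (190_675, 288, 1_128),
}

for name, (ops, disks, mem_gb) in nfs.items():
    print(f"{name}: {ops / disks:,.0f} ops/disk, {ops / mem_gb:,.0f} ops/GB memory")
```

By these ratios EMC gets roughly 1,089 ops out of each disk, versus about 225 for HP's far larger spindle farm and about 662 for NetApp, which is why the raw-hardware explanation doesn't hold up here.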

Even with somewhat closer system configurations, the picture isn't fully clear. For example, one would certainly want to know how much money each of these setups would cost to purchase and operate, how they might run on a 1GBE network, what features they respectively bring to the table, and so on. Still, these benchmark results might help an IT shop come up with a short -- or shorter -- list of potential candidates for consideration.

Vendors can and will game benchmarks and cherry-pick results. There's no way around it. Still, benchmarks such as SPEC do provide some level of transparency -- particularly compared to secretive "in-house tests" or benchmarks performed by suspicious third parties -- not to mention side-by-side (though not necessarily apples-to-apples) comparisons among competing products. IT buyers just need to remain vigilant when digesting benchmark results.

This story, "Vendor benchmarks: Buyers beware," was originally published on the InfoWorld Tech Watch blog.