Do benchmarks really measure a browser's speed?

Browser makers love to tout benchmark scores, but the tests don't always reflect real-world performance

InfoWorld's Peter Wayner has published his head-to-head Web browser battle roundup, comparing Google Chrome 10, Mozilla Firefox 4, Microsoft Internet Explorer 9, Apple Safari 5, and Opera 11.1. Given that browser speed is a crucial part of the battle -- browser makers like to tout that their latest release is the fastest on the market -- Wayner's speed tests are of particular interest.

But browser speed tests are also problematic, and they often bear little relation to what an average user actually does in a browser.

SunSpider and V8 are two widely used benchmarks. Both measure JavaScript performance, but they do so in different ways, which leads to divergent results. Wayner found that the SunSpider test, for example, shows all the browsers clustered closely together, with Internet Explorer 9 taking the speed crown by a narrow margin. On the V8 test, however, Chrome wins a decisive victory, performing twice as well as the second-place finisher, Firefox. But V8 is Google's own test, so it should surprise no one that Google's browser excels at a test that Google designed.

The differing results raise methodology questions, but the deeper problem with benchmarks is that they don't always reflect typical real-world browser usage. In fact, a recent research paper examined the testing processes of SunSpider and V8 (PDF) and found that "benchmarks are not representative of many real websites and that conclusions reached from measuring the benchmarks may be misleading."

The biggest flaw with benchmarking, according to the researchers, is that "JavaScript benchmarks are small, and behave in ways that are significantly different" from commonly used Web applications like Facebook and Gmail. What's more, the research found that benchmarks often don't account for the kinds of JavaScript functions that are common, including those behind search engines like Google and Bing.
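To make the researchers' point concrete, here is a minimal, hypothetical sketch of the kind of tight, compute-bound kernel that synthetic JavaScript suites tend to reward. The function names, iteration counts, and timing harness are illustrative assumptions; they are not taken from SunSpider or V8.

```typescript
// Illustrative only: a tiny compute-bound kernel in the spirit of SunSpider/V8-style
// micro-benchmarks (not an actual test from either suite).

// Hot numeric loop: the kind of work synthetic benchmarks reward.
function numericKernel(iterations: number): number {
  let sum = 0;
  for (let i = 1; i <= iterations; i++) {
    sum += Math.sqrt(i) * Math.sin(i);
  }
  return sum;
}

// Crude timing harness, similar in spirit to how such suites score a run.
function timeRun(label: string, fn: () => unknown): void {
  const start = Date.now();
  const result = fn();
  const elapsed = Date.now() - start;
  console.log(`${label}: ${elapsed} ms (result: ${String(result).slice(0, 12)})`);
}

timeRun("numeric kernel", () => numericKernel(5_000_000));

// Real web apps such as Gmail or Facebook spend far more of their JavaScript time on
// DOM manipulation, event handling, and object/string churn than on loops like the
// one above -- which is exactly the mismatch the researchers point to.
```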

In other words, great benchmark scores make for great PR copy, but they don't necessarily mean anything for users right now. Rather, JavaScript benchmarking is akin to HTML5 compatibility: It's about where the Web is going, not where it is today.

Presently, apps that require the kind of JavaScript processing horsepower that benchmarks like SunSpider and V8 measure simply don't exist. But as cloud computing takes off and browsers start to take on heavier-duty computing tasks, JavaScript performance may become a meaningful differentiator. Regardless of who wins, the race suggests that tomorrow's Web may be radically different from today's.

This story, "Do benchmarks really measure a browser's speed?," was originally published at InfoWorld.com.
