Both Mozilla and Google are raising their rewards for critical vulnerabilities submitted in their respective browsers. Mozilla now pays $3,000 for Firefox bugs, and the Google Chromium team pays $3,133.70 ("elite" in hacker leet-speak) for bugs in Chrome, up from the initial $1,337 reward six months ago. Ignoring Google's cheesy figure, it's a good time to ask again whether paying for bugs makes the Internet any safer. I like the idea of paying bug finders for their work, but I'm doubtful it will protect users significantly in the long run. As a matter of fact, I'm pretty sure it won't.
Google's program itself is obviously successful, enriching bug reporters and helping Google better secure its browser. Google has reported 60 vulnerabilities so far this year: 25 from June 9 through July 6 in Chrome 5.x and 35 from January through May in Chrome 4.x. That's far more than the other two major browsers: Microsoft's Internet Explorer 8 has 27 reported vulnerabilities this year, and Mozilla Firefox 3.6 has 46.
But does paying for bugs actually make end-users safer in the long run? Bug counts alone don't mean anything, even if Chrome bug severity is classified as a complete system compromise more often than the competition's. Chrome doesn't have nearly the market share of Internet Explorer, so malicious hackers aren't targeting it. Does it really make users more secure to close holes the bad guys weren't looking for in the first place? Is it money well spent? I'm a security guy, so I'll always agree with fixing any security bug.
Is the browser more secure because more bugs are found and fixed? Yes, absolutely. The better question: Are more bugs being found and fixed more quickly than if Google didn't pay a bounty? I tend to think yes here, as well. A bug reward plugs into the whole concept of supply and demand. Certainly, more people are looking for bugs and reporting them to Google to reap the "elite" payouts.
But it isn't all that simple -- Apple has reported and fixed 66 Safari bugs this year: 2 so far in Safari 5.x and 64 in Safari 4.x. Apple does not pay for bug reports, though the company is also frequently cited as the slowest among major vendors to patch bugs.
Finding a large number of risks early on is probably inherent in any pay-per-bug (or secure development lifecycle) program, simply because more eyes are looking for vulnerabilities than before. Or could finding so many bugs so quickly indicate overall shoddy code and point to systemic problems in the development process? Either way, you should normally see the number of bugs diminish over time -- unless secure development lessons go unlearned, or feature creep and complexity keep rising.