Vulnerability counts do matter

The notion that vulnerability metrics are completely useless is a myth -- and here's why

It happened again! I got into yet another argument…er…heated discussion over the security of Microsoft Windows versus some other operating system. Usually it starts with some reader's knee-jerk emotional reaction -- saying "Windows sucks!" or something like that.


When faced with a knee-jerker (aka "jerk"), I often point out that it's easy for any OS to be insecure if the admin doesn't follow best practices, but it's just as simple for me to secure almost any OS by following basic security practices.

Normally, my critics counter that remark by saying that because Windows has so many more vulnerabilities than the average competing OS, it's easier to secure those other OSes. Unless you're running OpenBSD, this statement is usually untrue. All of those other OSes end up with a fair amount of published vulnerabilities that need to be patched.

These days it's easy for me to point to vulnerability counts as evidence that Microsoft is doing a better job at security. (My favorite site for vulnerability statistics is Secunia.com, and its Software Inspector scanner is a good source.) For instance, IIS 5 had 14 announced holes. IIS 6, released almost four years ago in March 2003, has had three known holes, none popularly exploited. Apache, IIS's nearest competitor, has had more than 33 vulnerabilities in the same period.

How about ASP.Net versus PHP? Not even close: ASP.Net has had seven exploits, none popularly used, whereas PHP has had dozens of bugs that led to worm and spam bot takeovers on hundreds of thousands of Web servers. If you're tired of spam in your inbox, tell your friendly PHP coder to learn more about security.

But Internet Explorer is Microsoft's real weak link, right? Well, yes, it is. IE 6 had 16 exploits announced in 2006. Firefox 1.x was supposed to prove that the open source community could make a secure browser -- it had 13 announced vulnerabilities in 2006.

Thirteen vulnerabilities with just 5 to 10 percent of the market share? Is that the product that's supposed to show how secure open source coding gets done?

Well, then the core Windows OS is ultra-insecure, right? Let's look at the numbers: Windows XP Pro had 45 announced holes in 2006, while Mac OS X had only 24. That would seem to make OS X nearly twice as secure as Windows.

Well, not exactly. Many of the Mac announcements close dozens of security holes at once: one OS X announcement at Secunia closes 31 Mac holes, and another closes 15. If I count each announced vulnerability separately, OS X ends up with more than 100 holes in 2006, far surpassing the individual hole count I could find for Windows XP Pro. And that doesn't include the exploit-a-day announcements Apple faced last month.

If you look at vulnerability counts alone, Microsoft is improving in nearly every category over past years, and doing startlingly well in many highly exposed and frequently attacked applications.

But my arguing friend had his own counter to these numbers: "Vulnerability counts don't mean anything!"

The arguments against counting bug reports alone are many. First, many announced vulnerability releases contain more than one bug. But as I noted earlier, when I do a direct comparison counting each individual exploit hole, Microsoft compares even more favorably than it does on announcement numbers alone.
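The announcement-versus-individual-hole distinction can be sketched in a few lines of Python. The product names and advisory figures below are hypothetical, chosen only to mirror the "one advisory, many holes" bundling pattern described above; they are not real Secunia data.

```python
# Hypothetical advisory data: for each product, a list where each entry is
# one published advisory and its value is the number of holes it bundles.
# These numbers are illustrative only, not real vulnerability statistics.
advisories = {
    "ProductA": [1, 1, 1, 1],   # four advisories, one hole apiece
    "ProductB": [31, 15, 2],    # three advisories bundling many holes
}

def announcement_count(holes_per_advisory):
    """The headline metric: how many advisories were published."""
    return len(holes_per_advisory)

def individual_hole_count(holes_per_advisory):
    """The stricter metric: total individual holes across all advisories."""
    return sum(holes_per_advisory)

for product, holes in advisories.items():
    print(f"{product}: {announcement_count(holes)} announcements, "
          f"{individual_hole_count(holes)} individual holes")
```

By the announcement metric, ProductA looks worse (4 vs. 3); by the per-hole metric, ProductB looks far worse (48 vs. 4) -- which is exactly the reversal the OS X comparison illustrates.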

A second valid criticism is criticality. Who cares if your product has fewer exploits than a competitor's if your exploits are more dangerous? Certainly, remote exploits allowing complete control without any client-side action are more of a concern than local privilege escalation attacks; I can't dispute that. But check any of the products I've mentioned above and you'll find all of them rife with remote "complete control" exploits.

Another valid concern is how many of the announced exploits are patched versus unpatched. Vulnerability lists, as much as I like them, don't do a great job of verifying when various exploits have been patched. I find this out all the time, because I use exploits to break into supposedly "unpatched" software that turns out to be already sealed up.

A fourth criticism is that announced vulnerabilities don't take into account all the exploit holes that get silently patched by the vendor without ever notifying a vulnerability list, or all the potential zero-day exploits that could be out there. I don't feel this criticism is nearly as valid as the others. It's like saying, "Yes, Acme Airlines had 10 crashes this year, but you don't know how unsafe the other airlines -- the ones that didn't have any crashes this year -- really are."

The ultimate truth is that unless you or someone else with solid experience in security code review examines all the involved source code, you really don't know how secure something is or isn't.

But I don't think you should outright discount a viable metric, such as vulnerability counts, simply because the numbers don't support your side of the argument at the time.

By the way, you may be wondering about my sparring partner for this argument. It was a Microsoft code reviewer. Even though the vulnerability numbers are looking better and better for Microsoft every year, he doesn't believe in numeric counts. He feels that the best test of security for an OS or application is how well it performs in actual usage.

He said, "Suppose your application only has one vulnerability, but it is that bug that leads to an exploit that causes massive damage in your company. You don't care about numbers or metrics. You only know the vendor let you down. It's that fact that I go to work with each day."

Man, I can't win!
