As the new InfoWorld security columnist, I’ve not backed away from controversy. I have intentionally picked hot topics in order to generate reader interest and feedback. And nothing generates more debate than the topic of full disclosure.
Full disclosure is the idea that all security bugs found, whether by the vendor or a third party, should be disclosed in their entirety in a public forum as soon as possible, whether or not the vendor is notified, and whether or not a reasonable defense is possible. The thinking is that full disclosure forces the vendor to address the problem faster than it otherwise would and helps administrators prepare defenses.
Years ago I was a strong advocate of full disclosure. Anyone who didn’t believe in it was an enemy of my utopian world, helping to perpetuate bad coding. But lately I’ve been rethinking my position.
What changed? Well, my cumulative experience over the last 19 years. Full disclosure advocates claim that all defects should be publicly shared for the common good: If an exploit is known and not shared, the vendor may be slower to fix the hole. That is true in most cases: Nothing focuses a vendor’s attention more than the whole world reading about an exploit while hackers look to take advantage of it.
Practically, if a hole has been discovered by someone, it has probably been “discovered” by lots of other people who aren’t as vocal. Some of those people are bound to be black hat hackers, who will use the holes to exploit systems.
If the vendor does not publicly reveal the hole, the people who know about the hole are free to exploit it while the consumer remains clueless. Fortunes can be stolen, private information accessed, and secrets revealed. But if the hole is publicly disclosed, administrators have an opportunity to react and put up defenses to counter the exploit, even before the vendor has had a chance to patch the hole.
I still believe most of that line of thinking, but the practical reality of history has challenged my original beliefs. Here’s why:
First, most fortunes are stolen using disclosed vulnerabilities. Forget the nebulous theory that black hat hackers rely on undisclosed vulnerabilities to steal data and money. They can, and they do, but the overwhelming majority of black hats use publicly disclosed vulnerabilities, misconfigurations, and other low-hanging fruit. Why invent something new when you can use publicly available tools against publicly known exploits?
Second: user response. Research paper after research paper shows that a large percentage of computers remain unpatched more than a year after a patch is released. The admins who are going to patch their systems do so relatively quickly, within the first month of a patch's release; this group is less than 50 percent of the admins out there. The rest don’t patch until much later, often not until after a successful exploit causes damage.
Some computers are never patched. Sniff the Internet and you’ll see Code Red exploits coming from vulnerable IIS 4 servers (the patch was first released in June 2001), scans for blank SQL passwords, and scans for Apache Web server exploits from five years ago. Rarely does a large financial theft result from a zero-day exploit. Almost all result from aged exploits with published patches available.
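Those ancient Code Red probes are easy to spot precisely because the exploit was fully disclosed: the worm sends an oversized GET request for /default.ida padded with long runs of "N" (or "X" in a later variant). As a minimal, hypothetical illustration, a few lines of Python could flag that well-known signature in web-server log lines (the sample log entries below are invented for the example):

```python
import re

# Code Red's probe: a GET for /default.ida padded with long runs of
# "N" (or "X" in a later variant) to trigger the IIS buffer overflow.
CODE_RED = re.compile(r"GET /default\.ida\?[NX]{20,}", re.IGNORECASE)

def looks_like_code_red(log_line: str) -> bool:
    """Return True if a web-server log line matches the Code Red probe."""
    return bool(CODE_RED.search(log_line))

# Hypothetical sample log lines, for illustration only.
probe = "GET /default.ida?" + "N" * 224 + "%u9090 HTTP/1.0"
normal = "GET /index.html HTTP/1.1"
```

Here `looks_like_code_red(probe)` returns True while `looks_like_code_red(normal)` returns False. The point isn't the code; it's that a fully published exploit from 2001 is still trivially recognizable on the wire, and still finds unpatched victims.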