A subtle trend has been emerging over the last few years and it doesn’t appear to be abating: The number of insecure computer security products is growing. The very products designed to protect us are often the ones introducing the vulnerabilities.
These are products with weak configuration protection, easily triggered denial-of-service conditions, and trivial security bypasses. I’m talking about firewalls with old Web server code and exploitable management interfaces, anti-virus products with buffer overflows, gateway products susceptible to DNS cache poisoning, and in-line filtering software vulnerable to script injection.
Although most of the products I review are commercial products, open source products are just as vulnerable. For example, Ethereal, my favorite sniffer, seems to be caught in a beyond-ridiculous cycle of exploitable packet dissectors. The security software product that doesn’t end up needing a patch once or twice a year is becoming the rare exception.
Even security appliances are taking a beating. Many are running an outdated Linux or BSD kernel, with no easy way to update. Some vendors will tell you updating the kernel will void the warranty. Almost all come with one or more undocumented listening ports that an intruder can probe.
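Finding those undocumented listening ports doesn’t take special tooling. A minimal TCP connect scan is enough to compare what an appliance actually exposes against what its documentation admits to; here is a sketch in Python, with the appliance address and documented port list as hypothetical placeholders.

```python
import socket

def find_listening_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Hypothetical usage against an appliance's management address:
#   exposed = set(find_listening_ports("192.0.2.10", range(1, 1025)))
#   undocumented = exposed - {22, 443}   # subtract the documented ports
```

Anything left over after subtracting the documented ports is exactly the kind of undocumented listener an intruder can probe.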
Many security appliances don’t require a secure tunnel between the management client and the server. And I’ve seen many a vendor’s tunnel that supposedly relied on HTTPS, SSL, or SSH fail to encrypt the traffic at all -- even though it appeared to be doing so.
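Verifying that a management channel really negotiates encryption is easy to script rather than take on faith. This sketch, using only Python’s standard library, attempts a TLS handshake against a given host and port and reports the negotiated cipher instead of trusting what the management GUI claims.

```python
import socket
import ssl

def tls_actually_in_use(host, port, timeout=3.0):
    """Attempt a TLS handshake; return the negotiated cipher tuple, or None."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # we only care whether encryption
    ctx.verify_mode = ssl.CERT_NONE     # happens at all, not cert validity
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                return tls.cipher()     # e.g. (name, protocol version, bits)
    except (ssl.SSLError, OSError):
        return None                     # plaintext service, closed port, or timeout
```

If this returns None for a port the vendor swears is SSL-protected, the tunnel isn’t doing what the documentation says it is.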
This is a travesty. Isn’t anyone testing these features before they ship? Shouldn’t security vendors ship their products with the latest patched components? Shouldn’t the products contain some sort of auto-update routine? Maybe the product shouldn’t apply patches automatically, but there should at least be a mechanism that searches for and downloads the latest patches, then notifies the administrator to apply them.
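The search-then-notify mechanism described above isn’t hard to build. Here is a minimal sketch in Python; the feed URL and the JSON shape of the version feed are hypothetical stand-ins for whatever a vendor would actually publish.

```python
import json
import urllib.request

def parse_version(v):
    """Turn '4.2.1' into (4, 2, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def newer_available(installed, latest):
    """True if `latest` is a strictly newer version string than `installed`."""
    return parse_version(latest) > parse_version(installed)

def check_and_notify(installed, feed_url):
    """Fetch the vendor's version feed; return update info if one exists."""
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        feed = json.load(resp)  # assumed shape: {"version": "4.3.0", "url": "..."}
    if newer_available(installed, feed["version"]):
        return feed             # caller notifies the administrator; never auto-applies
    return None

# Hypothetical usage:
#   update = check_and_notify("4.2.1", "https://updates.example.com/latest.json")
#   if update:
#       print("Patch %s available at %s" % (update["version"], update["url"]))
```

Note the design choice: the routine only downloads and reports; applying the patch stays in the administrator’s hands, exactly as argued above.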
Customers want to believe that security product vendors do a better job of secure coding than nonsecurity vendors. It isn’t an unreasonable expectation. The truth is that security vendors’ programmers are just as overworked as programmers at any other company. Oftentimes, like their counterparts, they haven’t received specific training in writing secure code or using secure coding practices.
While I was teaching advanced computer security classes at a very well-known security vendor, one of its products was found to have a remotely exploitable buffer overflow affecting thousands of customers.
The vendor first learned about the exploit from a public zero-day mailing list. The guy in charge of researching the product and fixing the flaw was in my class. He had waited years to take this particular class and refused to leave. What should have been treated as a crisis and solved ASAP instead took more than a week to resolve, and I’m fairly confident that thorough regression testing was not involved.
Customers might have assumed that a moderate-size team of programmers would be assigned to solve this problem and it would be priority No. 1. In reality, it was one guy, part-time, trying desperately to recreate and solve the problem using VMware on his laptop, all the while not missing a single class slide.
This isn’t a pretty story, but it’s probably not unique in the computer security field.