I’m always surprised by how many professionals actually fight the computer security improvement process. The very people who are supposed to be security advocates often put up interesting theoretical roadblocks to improving defenses. Here are some of the excuses I commonly hear:
“Once they get physical access, it’s game over anyway.”
I’ve often heard this when a new locally exploitable or client-side vulnerability is found. The idea is that if the attacker has physical access to the computer or can convince the user to run an untrusted executable, there is no valid defense that will stop all malicious attempts. And this is true.
But if you believe that statement, why bother putting a pesky password-protected logon screen on your computer? Why put a lock on the front door of your home if the intruder can bypass it by breaking a window, hacking the garage-door opener, or a variety of other methods? Am I to believe that any defense is a poor defense simply because it cannot stop all attacks?
Defense in depth is the offsetting answer to this particular roadblock. No defense by itself can stop everything, but every additional, incremental defense builds a stronger wall.
“We knew about this security vulnerability, but securing it would negatively impact customers.”
Nearly all security processes have some sort of end-user inconvenience trade-off, so this is a valid concern. But computer security is rarely an on-or-off binary decision. Show me any security issue and its end-user concern and I can find middle ground.
In one recent example, the vendor had an opportunity to close many significant security holes that had existed in the product for years. But doing so broke many existing third-party add-on products. It was rightly feared that if the update broke existing applications, the end-users would blame the product update and not the third-party vendor's buggy code. Most end-users wouldn’t be delighted by the improved security features, I was told. They would be yelling about the update errors and possibly buying a competitor’s product, so the new protections were not implemented in the current product.
Unfortunately, a critical, malicious vulnerability -- which would have been closed in the new code version -- was publicly disclosed a few months later and led to the embarrassment of the vendor. After this incident, the code was updated and the hole closed. Why did it take more pain than necessary to get where we were going to end up in the first place?
If you’re a developer facing potential third-party product incompatibility issues, you can almost always build in a checking routine that looks for installed products with known incompatibility issues and warns the user during install. Or allow the end-user to choose whether they want the new controversial feature enabled by default. Prompt them, warn them. Turn off the new security protections if one of the incompatible products is detected, or run a second instance. But don’t let the default decision allow a known, critical vulnerability to go unaddressed and affect all users.
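The compatibility-check-and-prompt approach above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual installer logic: the product names, version cutoffs, and function names are all invented for the example. The point is simply that the installer ships the protection on by default and only prompts to disable it when a product from a known-incompatible list is actually detected.

```python
# Hypothetical installer pre-flight check. The product names and
# version numbers below are invented for illustration.

KNOWN_INCOMPATIBLE = {
    # product name -> highest version known to break with the new protection
    "LegacyAddOn": "2.1",
    "OldReportTool": "5.0",
}

def parse_version(v):
    """Turn a dotted version string like '2.1' into (2, 1) for comparison."""
    return tuple(int(part) for part in v.split("."))

def find_conflicts(installed):
    """Return names of installed products known to be incompatible.

    `installed` maps product name -> version string, as the installer
    might gather it from the system's software inventory.
    """
    conflicts = []
    for name, version in installed.items():
        bad_up_to = KNOWN_INCOMPATIBLE.get(name)
        if bad_up_to and parse_version(version) <= parse_version(bad_up_to):
            conflicts.append(name)
    return conflicts

def decide_protection(installed, prompt=input):
    """Enable the new protection by default; warn and let the user
    opt out only when a known-incompatible product is detected."""
    conflicts = find_conflicts(installed)
    if not conflicts:
        return True  # no known conflicts: ship secure by default
    answer = prompt(
        "Known-incompatible products detected: "
        + ", ".join(conflicts)
        + ". Enable the new protection anyway? [y/N] "
    )
    return answer.strip().lower() == "y"
```

Note the default posture: a clean system gets the protection with no prompt at all, and even a conflicted system requires an explicit choice rather than silently leaving the hole open for everyone.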