Stop fighting better security

These four excuses aren't good enough to exempt you from bolstering your defenses

I’m always surprised by how many professionals actually fight the computer security improvement process. The very people who are supposed to be security advocates often put up interesting theoretical roadblocks to improving defenses. Here are some of the excuses I commonly hear:

“Once they get physical access, it’s game over anyway.”
I’ve often heard this when a new locally exploitable or client-side vulnerability is found. The idea is that if the attacker has physical access to the computer, or can convince the user to run an untrusted executable, no defense will stop all malicious attempts. And this is true.


But if you believe that statement, why bother putting a pesky password-protected logon screen on your computer? Why put a lock on the front door of your home if the intruder can bypass it by breaking a window, hacking the garage-door opener, or a variety of other methods? Am I to believe that any defense is a poor defense simply because it cannot stop all attacks?

Defense in depth is the answer to this particular roadblock. No defense by itself can stop everything, but each additional, incremental defense builds a stronger wall.

“We knew about this security vulnerability, but securing it would negatively impact customers.”
Nearly all security measures involve some end-user inconvenience trade-off, so this is a valid concern. But computer security is rarely an on-or-off, binary decision. Show me any security issue and its end-user concern, and I can find a middle ground.

In one recent example, the vendor had an opportunity to close many significant security holes that had existed in the product for years. But doing so broke many existing third-party add-on products. It was rightly feared that if the update broke existing applications, end-users would blame the product update rather than the third-party vendor's buggy code. Most end-users wouldn’t be delighted by the improved security, I was told; they would be yelling about the update errors and possibly buying a competitor’s product. So the new protections were not implemented in the current product.

Unfortunately, a critical vulnerability -- one the new code version would have closed -- was publicly disclosed a few months later, to the vendor's embarrassment. After that incident, the code was updated and the hole closed. Why did it take more pain than necessary to get where we were going to end up anyway?

If you’re a developer facing potential third-party incompatibility issues, you can almost always build in a checking routine that looks for installed products with known incompatibilities and warns the user during install, as sketched below. Or let the end-user choose whether the new, controversial feature is enabled by default. Prompt them, warn them. Turn off the new security protections if one of the incompatible products is detected, or run a second instance. But don’t let the default decision leave a known, critical vulnerability unaddressed for all users.
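To make that concrete, here is a minimal sketch in Python of the kind of install-time check I’m describing. The product names, the table of known conflicts, and the prompt wording are all hypothetical; the point is simply that an installer can detect a conflict and let the user decide, instead of silently shipping the weaker default.

    # Minimal sketch of an install-time compatibility check (hypothetical names).
    # A real installer would query the registry, package manager, or plug-in
    # directory instead of the hard-coded sample used here.

    KNOWN_INCOMPATIBLE = {
        "ExampleAddOn 2.x": "crashes when the new protection is enabled",
        "LegacyPlugin 1.0": "blocks the hardened network filter",
    }

    def detect_installed_products():
        # Placeholder for the real inventory lookup.
        return ["ExampleAddOn 2.x"]

    def choose_protection_setting():
        conflicts = [p for p in detect_installed_products() if p in KNOWN_INCOMPATIBLE]
        if not conflicts:
            return True  # No known conflicts: enable the new protection by default.
        for product in conflicts:
            print(f"Warning: {product} {KNOWN_INCOMPATIBLE[product]}")
        answer = input("Enable the new security protection anyway? [y/N] ")
        return answer.strip().lower() == "y"

    if __name__ == "__main__":
        if choose_protection_setting():
            print("New protection enabled.")
        else:
            print("New protection disabled for compatibility; see release notes.")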

“We knew there was a vulnerability, but we didn’t think it was that bad.”
Every security vulnerability should be ranked by criticality: a remote buffer overflow is more critical than a local DoS (denial of service) problem. But here I’m talking about the people who simply ignore the problem. Case in point: A few months ago, I discovered a remotely exploitable directory traversal vulnerability that gave me root access on an Internet-accessible device installed in more than a million consumers' homes. The device served as the consumer’s primary access to the Internet and provided subscription-based and on-demand digital media content. It ran BSD as the underlying OS and a very old Web server.
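For readers who haven’t seen this bug class, a directory traversal flaw lets a request use "../" sequences to escape the Web server’s document root. Here is a minimal sketch in Python of the check that blocks it; the document root and paths are hypothetical and have nothing to do with the vendor’s actual code.

    # Minimal sketch of a traversal check, not the vendor's code.
    # Resolve the requested path, then verify it still lives under the
    # document root before serving it.
    import os

    WEB_ROOT = "/var/www/htdocs"  # hypothetical document root

    def safe_resolve(requested_path: str) -> str:
        # Join the request to the root, then collapse any ../ components.
        candidate = os.path.realpath(os.path.join(WEB_ROOT, requested_path.lstrip("/")))
        # Reject anything that resolved outside the document root.
        if not candidate.startswith(WEB_ROOT + os.sep):
            raise PermissionError(f"traversal attempt blocked: {requested_path}")
        return candidate

    # safe_resolve("media/show.html")   -> served normally
    # safe_resolve("../../etc/passwd")  -> PermissionError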

When I reported the problem to the programmer, he said the team had known about it for a long time but couldn’t think of how it could be exploited. I was dumbfounded. This was remote admin access -- a pretty straightforward hack.

I told him that consumers' credit card information could be stolen, that customers' service could be interrupted, that company services could be stolen, or that porn could end up in innocent customers' homes. Further, the simple exploit I was using could easily be wormed, turning those million devices into a bot army for attacking other targets.

The bug was added to the resolution database the next day. If you’re not a trained security person or if you don’t practice reasonable threat modeling, don’t attempt to guess for yourself how bad the bug is.

“We don’t need strong encryption.”
This is normally said when a developer needs to obscure plaintext data for confidentiality reasons. Instead of implementing widely used, industry-accepted cipher algorithms that have been trusted for decades, they make up their own hashing or encryption routines. Some are painfully obvious: Base64 encoding, simple character substitution, or the computer’s IP address or machine name used as the private key.

The problem is that although strong encryption may not be needed now, it may be needed later. Once coded, the routine may never be changed. Legacy code often ends up in newer programs and applications, and what was once a nice-to-have becomes a central, mission-critical need. If you must protect the confidentiality of data, use trusted cipher algorithms and routines, as in the example below, and forget the halfway attempts.
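To show how little extra effort a real cipher costs, here is a minimal sketch in Python contrasting Base64 "obfuscation" with the Fernet recipe from the widely used cryptography package (authenticated AES). The data and the key handling are illustrative only; real key management is a separate problem.

    # Minimal sketch: homegrown obfuscation versus a vetted cipher recipe.
    # Assumes the third-party "cryptography" package (pip install cryptography).
    import base64
    from cryptography.fernet import Fernet

    secret = b"subscriber-id=12345;plan=premium"  # hypothetical sensitive record

    # The halfway attempt: Base64 is encoding, not encryption.
    obfuscated = base64.b64encode(secret)
    print(base64.b64decode(obfuscated))   # anyone can reverse it in one line

    # The trusted route: Fernet (AES in CBC mode with an HMAC, per its spec).
    key = Fernet.generate_key()            # keep this in a real key store, not in the code
    token = Fernet(key).encrypt(secret)
    print(Fernet(key).decrypt(token))      # only the key holder recovers the plaintext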

The most startling point in all of these statements is that they were said to me by career computer security professionals, not by unknowing outsiders. Sometimes it makes rational sense to skip real security or to lower the security bar, but most of the time it is just incorrect rationalization and laziness. Don't fall into the trap.
