I live in Key Largo, Fla., near one of the world's most beautiful reefs, and I scuba dive almost every day I can. I've been diving for 20 years now, with hundreds of dives under my weight belt, and I'm one of the most safety-conscious guys I know.
Yet there I was 100 feet down on the ocean floor without any air to breathe and nobody to help.
I thought I was the cautious type, but on this particular dive I had accumulated multiple risks, each of which I had faced before individually. I made the mistake of letting them layer on, and I let my memory of successfully facing those risks one at a time lull me into thinking the unexpected could not happen.
I couldn’t breathe due to a regulator failure -- at the deepest part of my dive, while diving by myself to recover a stuck boat anchor, without my pony bottle. I didn’t panic, but I certainly wasn't happy to be at zero pounds of air 100 feet down. I did what my training taught me: a free ascent, slowly blowing out air so that my lungs wouldn’t explode as I rose.
I survived without injury. But I was left wondering why I had broken so many common dive rules in my rush to retrieve a stupid boat anchor.
Turning a blind eye to risk
A regulator had never failed on me in nearly two decades of diving, so I had become complacent. Likewise, I had knowingly accepted additional risks -- no dive buddy, no pony bottle, no regulator backup -- because I had faced each of those risks separately, where one failure could easily be overcome.
I see the same mistakes every day in the computer world. Knowledgeable, experienced people who should know better accept incremental risk after incremental risk over time. Then -- boom! -- something really bad happens. If you look at any company that has suffered a major breach in the last two decades, you can point to a growing cascade of risks that were accepted and became business as usual.
As I noted last week, you can’t simply collect vulnerabilities and ignore them. Normally, the workers on the frontline aren't the ones who ignore the risks -- they typically raise the alarm. But I often see respected managers walk into project meetings, listen to the risks, and blow them off as small, unlikely, or inconsequential.
I'm involved in a project now where every single critical risk I raise is blown off as "not a big deal." It's true that if everything works out perfectly, the project and deployment will go swimmingly. But if any of the small risks blow up, the whole project will be killed or at least significantly delayed.
Most successful hacks occur due to multiple vulnerabilities. A series of mistakes opens up the holes to the point where the hacker has an exploit superhighway once the first hurdle is cleared. Every company living with a high percentage of unpatched software falls in this category. That’s no small liability. It's a big one, and when coupled with a few little risks, it's an open invitation to hackers.
Other common "little" risks I see accepted all the time -- many of which aren't little -- include the following:
- Identical admin credentials across multiple assets and domains
- Too many (permanent) members of privileged groups
- Too many groups whose purpose no one can remember
- Hard-coded passwords
- Little or no user training about social engineering risks
- Inconsistent security policies across managed domains
- Overly broad permissions and privileges
- Unnecessary software and services no one uses
- Unverified build images
- Poor decommissioning of user accounts, service accounts, groups, or applications
- Poor security auditing
- Overreliance on intrusion detection -- or intrusion detection that fails to detect common attacks
- Poor operation and management of existing security solutions
- Continuing use of weak and vulnerable protocols
- Poor security domain separation
- Poor software coding practices
- Lack of disk encryption
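Several of the "little" risks above are cheap to surface with automated checks. As one hedged illustration, here is a minimal Python sketch that flags likely hard-coded passwords in source text. The patterns are hypothetical and deliberately simple; real secret scanners use far richer rule sets, but even a crude check like this can turn an accepted risk into a concrete finding you can put in front of management.

```python
import re

# Hypothetical, illustrative patterns for common hard-coded credential styles.
# Production scanners use much larger, tuned rule sets.
CREDENTIAL_PATTERNS = [
    re.compile(r'(?i)\b(password|passwd|pwd)\s*[:=]\s*["\'][^"\']+["\']'),
    re.compile(r'(?i)\b(api[_-]?key|secret)\s*[:=]\s*["\'][^"\']+["\']'),
]

def find_hardcoded_credentials(source: str):
    """Return (line_number, line) pairs that look like hard-coded secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in CREDENTIAL_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    sample = 'db_user = "app"\npassword = "hunter2"\napi_key = "abc123"\n'
    for lineno, line in find_hardcoded_credentials(sample):
        print(f"line {lineno}: {line}")
```

A list of specific file names and line numbers is far harder to blow off as "not a big deal" than an abstract bullet point.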
How to shake up the status quo
If you work in a culture where people blow off incremental risk, speak up! Don't become part of that culture. Push back. Be the voice of reason.
You can help your argument by listing accepted risks -- then linking them to attack scenarios detailing how a bad guy would work from the initial hack to your Holy Grail assets. For extra punch, create a video where you exploit one of those chains of vulnerabilities. Make it short, between 30 seconds and 2 minutes long; senior management has more than its share of ADHD types. End the video with the capture of a high-value asset.
This three-step approach packs a wallop:
- List the accepted risks
- Offer scenarios showing how the risks could result in the compromise of high-value assets
- Shoot a short video demonstrating the most compelling of those scenarios
I've yet to see a senior management team that didn't listen and change course.
One certainty: If you don’t take action, those risks will accumulate, and you’ll end up with the security equivalent of no air to breathe on the ocean floor.