I've been in the computer security field for nearly three decades. During that time, I've watched it go from bad to worse to ugly.
Today, the average computer security defense is so bad, we had to invent a new paradigm a few years ago called "assume breach." This phrase admits that our security controls are so inadequate that we concede defeat in preventing hackers from gaining access to our environments. Instead, we concentrate on limiting the damage attackers do once they're inside our "hard outer shell."
This is actually the way we need to think about computer security today. If you have anything worth stealing, you've been breached. Every computer defense strategy must assume breaches have occurred and will occur, yet remain dedicated to preventing them.
The problem I have with the "prevent breach" imperative is that in most cases, the defenders aren't really trying. They say they are. They may think they are. But they aren't.
For example, in most environments, two attack vectors account for 99 percent of all successful attacks: unpatched software and social engineering. But instead of defending our environments in a risk-aligned way, we concentrate our efforts on almost everything else.
Scenes from the security war
Imagine two armies, one good and one bad, engaged in a long-term fight on a field of battle.
The bad army has successfully managed to compromise the good army's defenses again and again by throwing most of its troops against the good army's left flank. Surprisingly, instead of applying reinforcements to its left flank, the good army keeps its troops evenly spread. Worse, it decides to pull troops from its left flank to man anti-aircraft weapons in response to rumors that the enemy may attack from the air. Then the defending army wonders in vain why it's losing the battle.
This scenario describes how most companies defend the security of their computer systems. In general, they simply don't align their resources -- money, labor, and time -- against the threats that pose the greatest risk.
This misalignment is due to several factors, including that enterprise defenders often fail to:
- Identify in a clear and timely way all the localized threat scenarios they face
- Focus on how initial compromises happen versus what happens afterward
- Understand the relative risks of various threats
- Broadly communicate threats ranked by risk to all stakeholders, including senior management
- Efficiently agree upon and coordinate responses to risk
- Measure the success of deployed defenses against the threats they were intended to mitigate
All these implementation weaknesses lead to a wholesale misalignment of computer security defenses against the highest-risk threats.
How did it get this way?
Well, for a lot of reasons, but the key to understanding our rampant security misalignment is human nature.
In general, most of us fear the wrong things too much. For example, most people fear dying in a plane crash or being bitten by a shark far more than they fear the car ride to the airport or the beach, even though the car ride is thousands of times more likely to result in serious injury or death.
This natural but irrational ranking of fear holds security in thrall. Even the best IT security defenders have a hard time ignoring the onslaught of new security threats covered in the mainstream media every day. Malware and exploits now get names and logos. You try to focus your attention on just the important things, but it's "squirrel, squirrel!"
The fix: A data-driven security plan
But you, dear reader, don't need to languish in fear and misinformation. You can join the upper echelon of computer security defenders simply by being aware of the threats most likely to impact your environment -- and focusing your efforts on defending against them.
To do that, you need data. In a nutshell, a data-driven computer security defense plan includes the following steps:
- Collect better and more localized threat intelligence
- Rank risks appropriately
- Create a communications plan that efficiently conveys the highest-risk threats to everyone in the organization
- Define and collect metrics
- Define and select defenses ranked by risk
- Review and improve your defense plan as needed
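The ranking step above can be sketched in code. This is a minimal, hypothetical illustration -- the threat names, incident frequencies, and per-incident costs are invented for the example, and a real program would draw them from your own localized threat intelligence:

```python
# Hypothetical sketch: rank localized threats by expected annual loss.
# Threat names, likelihoods, and impact figures are illustrative only.

def risk_score(incidents_per_year: float, cost_per_incident: float) -> float:
    """Expected annual loss: how often it happens times what it costs."""
    return incidents_per_year * cost_per_incident

threats = [
    # (threat, estimated incidents/year, estimated cost per incident)
    ("Unpatched software exploited", 12.0, 60_000),
    ("Social engineering / phishing", 20.0, 30_000),
    ("Insider data theft", 0.5, 200_000),
    ("Physical theft of hardware", 0.2, 10_000),
]

# Highest expected loss first -- this ordering, not headline scariness,
# should drive where money, labor, and time go.
ranked = sorted(threats, key=lambda t: risk_score(t[1], t[2]), reverse=True)
for name, freq, cost in ranked:
    print(f"{name}: expected annual loss ${risk_score(freq, cost):,.0f}")
```

Even a crude expected-loss calculation like this makes the misalignment visible: the two vectors that cause most compromises float to the top, and the rare-but-scary scenarios sink.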
Capturing relevant threat intelligence
Better threat intelligence data is the key to success. Most companies' threat intelligence programs consist of reading a few industry security newsletters and, if they're lucky, getting some basic metrics about what types of attacks are being thrown against their enterprise.
This superficial approach falls woefully short and surfaces the wrong metrics. For example, the typical threat or vulnerability matrix report will tell you how many malware programs your antimalware program detected and cleaned and how many unpatched vulnerabilities a vulnerability scanner found. This is mostly useless information.
Vulnerability scanners find thousands of exploitable vulnerabilities, most of which are ranked as the highest priority and flagged to be fixed right away. Readers of such reports quickly tune them out -- with good reason. It's far more important to understand which vulnerabilities are being actively used, or will most likely be used, against your organization.
You might have 5,000 Heartbleed vulnerabilities all over your environment, but if your firewall is blocking the requisite ports, you can probably relax a bit. Numbers and criticality alone mean little. It's more important to know what's being used to compromise your company.
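That prioritization can be expressed as a simple sort key. The sketch below is hypothetical -- the CVE IDs (apart from Heartbleed's real ID, CVE-2014-0160), scores, host counts, and the exploited-in-the-wild set are invented, and in practice that set would come from a threat-intelligence feed rather than a hard-coded list:

```python
# Hypothetical sketch: prioritize scanner findings by whether the
# vulnerability is known to be exploited in the wild, not by raw severity.

findings = [
    {"cve": "CVE-2014-0160", "cvss": 7.5, "hosts": 5000},  # Heartbleed
    {"cve": "CVE-0000-0001", "cvss": 9.8, "hosts": 3},     # illustrative
    {"cve": "CVE-0000-0002", "cvss": 5.4, "hosts": 1200},  # illustrative
]

# In a real program, populate this from a threat-intelligence feed.
actively_exploited = {"CVE-0000-0002"}

def priority(finding: dict) -> tuple:
    # Actively exploited findings sort first, then severity, then exposure.
    return (finding["cve"] in actively_exploited,
            finding["cvss"],
            finding["hosts"])

queue = sorted(findings, key=priority, reverse=True)
```

Note how a mid-severity bug that attackers are actually using outranks both the 9.8-rated finding and the 5,000 instances of Heartbleed sitting behind a blocking firewall.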
By the same token, discovering how many malware programs your AV scanner detected and cleaned is like counting how many invalid packets your firewall dropped. The metric is so useless that no one even bothers to read the report. If that number went up or down, what would it tell you? Not much.
Where you should focus instead
A far better metric is how many malware programs your antimalware software failed to detect and for how long. Now that's useful.
Most antimalware programs are horrible at detecting malware in the early hours of a zero-day, but become increasingly accurate as the days wear on. Eventually, nearly every antimalware program detects the malware accurately -- what you need to know is the elapsed time before malware was detected in your environment.
How can you find this out? Here's one way. Install an application control (aka whitelisting) program in audit-only mode. When your antimalware program detects malware, correlate the detection time with the whitelisting program's detection of when the unapproved program first installed or executed. You won't get everything, but you'll detect most things.
If you do this for each detection of malware, you can create a living organizational metric that tells you how well your antimalware software is doing over time. If it gets less accurate, you can lobby the antimalware vendor for improvements -- or switch products.
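The correlation described above amounts to a timestamp join between two logs. Here's a minimal sketch, assuming both products can export a file hash and a timestamp -- the log formats, hashes, and dates are invented for illustration:

```python
from datetime import datetime
from statistics import mean

# Hypothetical sketch: compute malware "dwell time" by correlating the
# antimalware detection time with the application-control (whitelisting)
# log's first-execution time for the same file hash. Data is illustrative.

first_seen = {  # from the whitelisting program running in audit-only mode
    "hash-a": datetime(2024, 3, 1, 9, 0),
    "hash-b": datetime(2024, 3, 2, 14, 30),
}

detections = [  # from the antimalware product
    ("hash-a", datetime(2024, 3, 3, 9, 0)),
    ("hash-b", datetime(2024, 3, 2, 20, 30)),
]

# Hours between first execution and eventual detection, per sample.
dwell_hours = [
    (detected - first_seen[h]).total_seconds() / 3600
    for h, detected in detections
    if h in first_seen  # skip samples the whitelisting log never saw
]

print(f"mean dwell time: {mean(dwell_hours):.1f} hours")
```

Tracked over time, that mean (or better, the distribution) becomes the living organizational metric the article describes: if it trends upward, your antimalware product is taking longer to catch what's already running.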
You'll also know how long the malware program had free rein on the computer where it was finally detected. Did the user access critical data or use elevated privileges while the machine was compromised? If so, you might need to take additional steps beyond declaring that everything is OK because the scanner cleaned it.
More important, how did the malware get on the user's workstation in the first place? Was it unpatched software, social engineering, a misconfiguration, or a missing defense? Asking and answering these questions is the way to begin creating a better, data-driven defense.
The final analysis
The key is to change your thinking -- from simple detection and eradication to focusing on how something got past your defenses in the first place. Worrying about what an attacker did after obtaining domain admin credentials is like worrying about your brakes after your car has been stolen.
Only by better protecting the left flank and preventing the car's theft in the first place can you begin to make a better defense. Everything else is accepting defeat.
It took me nearly 30 years to figure out why so many defenders were getting it so wrong. I plan to spend the rest of my career spreading the lessons I've learned.