And second, the reason you can't just implement industry-standard practice is that in some cases doing so would mean implementing an unacceptable countermeasure.
Now we get to the good part: what happens after a security incident. The answer, I think, should be an analysis of how the incident happened and whether any acceptable countermeasure would have prevented it. Depending on its severity, you might decide to redefine what's acceptable. Or you might not, figuring there are times when cleaning up a mess afterward costs less than preventing it would have.
Which gets to one of the many complications that prevent me from giving you a complete answer here: I've limited this discussion to countermeasures, when in fact you also need to define potential responses to each threat in the threat inventory. You do what you can to prevent fires from starting, but you still need a fire department to handle the ones that break out anyway.
Now (at last!) we're ready to talk KPIs. If your goal is to implement acceptable countermeasures for every threat in the threat inventory, here are the logical KPIs (sketched in code after the list):
- Percent of actual attacks that are not listed in the threat inventory (any that aren't on the list constitute a planning failure).
- Percent of actual attacks that (1) were successful and (2) would have been thwarted by an acceptable countermeasure you didn't implement.
- Percent of successful attacks that could not have been thwarted by an acceptable countermeasure and for which you had no planned response.
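Computing these takes nothing more than simple arithmetic over your incident records. Here's a minimal sketch in Python; the Attack fields and the security_kpis function are hypothetical illustrations, not a standard schema, assuming you record for each actual attack whether it was in the inventory, whether it succeeded, what would have thwarted it, and whether a response was planned:

```python
from dataclasses import dataclass

@dataclass
class Attack:
    """One actual attack. All fields are hypothetical, not a standard schema."""
    in_inventory: bool              # was this threat in the threat inventory?
    succeeded: bool                 # did the attack succeed?
    thwartable_unimplemented: bool  # would an acceptable countermeasure you
                                    # chose not to implement have thwarted it?
    thwartable_any: bool            # would any acceptable countermeasure
                                    # have thwarted it?
    had_planned_response: bool      # did a planned response exist for it?

def pct(numerator: int, denominator: int) -> float:
    """Percentage, guarding against an empty denominator."""
    return 100.0 * numerator / denominator if denominator else 0.0

def security_kpis(attacks: list[Attack]) -> dict[str, float]:
    successes = [a for a in attacks if a.succeeded]
    return {
        # KPI 1: attacks the threat inventory failed to anticipate.
        "planning_failures": pct(
            sum(not a.in_inventory for a in attacks), len(attacks)),
        # KPI 2: successful attacks that an acceptable countermeasure
        # you didn't implement would have thwarted.
        "preventable_successes": pct(
            sum(a.succeeded and a.thwartable_unimplemented for a in attacks),
            len(attacks)),
        # KPI 3: successful attacks no acceptable countermeasure could
        # have stopped, and for which no response was planned.
        "unpreventable_and_unplanned": pct(
            sum(not a.thwartable_any and not a.had_planned_response
                for a in successes),
            len(successes)),
    }
```

Zero is the target for all three: a rising first number means your threat inventory is stale, a rising second means your definition of "acceptable" deserves revisiting, and a rising third means you're short on planned responses.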
[If you find this approach to metrics useful, you'll find much more on the subject in Bob's new book, "Keep the Joint Running: A Manifesto for 21st Century Information Technology."]
Politically, of course, none of this matters: no matter how carefully you assess risk and your response to it, after a problem occurs you're guilty.
Which means that communicating the nature of your security plans, and their limitations, over and over again is far more important than something as relatively trivial as measuring their success.