It was an excellent help desk. Then, my correspondent explained, his CIO, wanting measurable results, established incidents resolved per analyst per week as an appropriate metric for assessing performance.
The company in question had three help desks: one for each major location. As my correspondent explained the situation, the one he managed performed far more poorly than the other two, and he was chastised for his organization's subpar showing.
What was he doing wrong? He'd established a user self-sufficiency program, that's what. His analysts spent quite a lot of their time educating employees to be more independent and sophisticated in their use of technology. The result was fewer incidents for analysts to resolve, coupled with higher levels of employee effectiveness.
It was a superior outcome that resulted in poor performance metrics.
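The arithmetic of the inversion is easy to sketch. Here's a minimal illustration with entirely made-up numbers (the column doesn't report any figures): a desk that invests analyst time in prevention resolves fewer incidents, so it scores worst on the CIO's metric even though it handles the most trouble overall.

```python
# Hypothetical numbers illustrating how "incidents resolved per analyst
# per week" penalizes a help desk that prevents incidents.
desks = {
    "Desk A": {"analysts": 10, "resolved": 500, "prevented": 0},
    "Desk B": {"analysts": 10, "resolved": 480, "prevented": 0},
    "Desk C (self-sufficiency program)": {
        "analysts": 10, "resolved": 300, "prevented": 250,
    },
}

for name, d in desks.items():
    # What the CIO's metric sees: resolved incidents only.
    per_analyst = d["resolved"] / d["analysts"]
    # What it misses: incidents that never happened because users
    # were taught to handle them (or avoid them) on their own.
    with_prevention = (d["resolved"] + d["prevented"]) / d["analysts"]
    print(f"{name}: {per_analyst:.0f} resolved/analyst/week, "
          f"{with_prevention:.0f} counting prevention")
```

On these numbers Desk C ranks last on the official metric (30 vs. 50 and 48) while doing the most good per analyst once prevention is counted.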
"If you can't measure, you can't manage," legendary management guru Peter Drucker once asserted. He was right -- just not right enough. The fact of the matter is it's a lot easier to get metrics wrong than right, and the damage done from getting them wrong usually exceeds the potential benefit from getting them right.
Lewis's Corollary to the First Law of Metrics: If you mismeasure, you mismanage
Last week's missive on stupid consultant tricks introduced Lewis's First Law of Metrics: You get what you measure -- that's the risk you take. Our help desk tale of woe leads us to Lewis's Corollary to the First Law of Metrics: If you mismeasure, you mismanage.
Imagine that instead of working in IT, you ran the highway patrol. You have a decision to make: Do you rely on unmarked cars and speed traps, or do you instruct everyone on the force to cruise the highways in their regular vehicles?
The right answer depends on clearly understanding what you want to accomplish, then turning that goal into a metric.
If your goal is to catch speeders, your metric will be the number of tickets issued per officer per hour, and you'll go with the unmarked cars and speed traps. If, on the other hand, your goal is to minimize the amount of speeding on the highways, you'll make sure every police car is highly visible, cruising exactly at the speed limit. After all, if drivers don't see a police car, they might count on luck (and their overestimated ability to spot unmarked cars) and continue to speed unless caught. Only the most egregious nitwits will pass a cruising police car.
Sadly, you'll have a more difficult time establishing a useful metric if you prefer preventing speeding to catching speeders. Give it a try: figure out what you'd measure and what kind of data you'd need to track it.
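One possible answer to that exercise (my sketch, not anything the column prescribes) is to stop counting tickets and instead sample vehicle speeds at random times and places, tracking the share of drivers who exceed the limit. The simulated radar readings below are invented for illustration:

```python
import random

SPEED_LIMIT = 65  # hypothetical limit, in mph

def speeding_rate(observed_speeds, limit=SPEED_LIMIT):
    """Fraction of sampled vehicles traveling above the limit --
    a metric that rewards prevention, not just capture."""
    over = sum(1 for s in observed_speeds if s > limit)
    return over / len(observed_speeds)

# Simulated radar samples from a week of highly visible patrols.
random.seed(1)
samples = [random.gauss(63, 6) for _ in range(1000)]
print(f"Share of drivers speeding: {speeding_rate(samples):.1%}")
```

A falling speeding rate over time would show the cruising strategy working, even as tickets issued per officer drops toward zero -- exactly the outcome the ticket-count metric would punish.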
One reason SMART isn't always smart
SMART is a popular goal-setting technique. It stands for (with some variations): specific, measurable, actionable, relevant, and time-bound.
Who could argue with a formulation like that? The answer: Anyone who, like the highway patrol that decided to cruise rather than catch, prefers prevention to troubleshooting. That's because, with few exceptions, prevention ranges from being harder to measure to being indistinguishable from "What problem? I don't see a problem."
Successful prevention is indistinguishable from the absence of risk, as anyone who worked on a Y2K project knows, only to be accused of wasting corporate funds on a phony problem when nothing blew up on Jan. 1, 2000.