At the heart of every source code analyzer are the rules that describe patterns of error. Analyzers provide a general set of rules and typically enable customers to add new rules codifying knowledge of their own systems and programming practices. The analyzer included with Compuware’s DevPartner Studio, for example, can be extended with rules that match patterns in the text of source code. According to product manager Peter Varhol, customers most often use this technique to enforce coding-style rules.
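A text-pattern rule of this kind amounts to little more than a regular expression applied line by line. The sketch below is a hypothetical illustration (not DevPartner's actual rule format): two made-up style rules, one flagging tab indentation and one flagging over-long lines.

```python
import re

# Hypothetical coding-style rules, each a (name, compiled regex) pair.
STYLE_RULES = [
    ("tab-indentation", re.compile(r"^\t")),       # line indented with a tab
    ("line-too-long",   re.compile(r"^.{101,}")),  # line over 100 characters
]

def check_style(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every style violation found."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in STYLE_RULES:
            if pattern.search(line):
                violations.append((lineno, name))
    return violations
```

Real analyzers go far beyond line-oriented matching, of course, but the pattern-plus-report shape is the same.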
Coverity’s rule-writing language, MetaL (Meta Language), can be used for the same purpose. Customers have also used it to propagate bug fixes. Chelf cites one case where a software product failed in the field. After days of debugging, programmers found the erroneous function call that caused the failure. “They wrote a MetaL check to comb the code for other instances,” Chelf says, “and found several that would have caused the same problem.”
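The "comb the code for other instances" step can be approximated in a few lines even without MetaL (whose syntax isn't shown here). As an analogy only, the sketch below uses Python's `ast` module to list every call site of a named function — here a hypothetical `acquire` — so that once one call is known to be erroneous, the others can be reviewed for the same mistake:

```python
import ast

def find_calls(source: str, func_name: str) -> list[int]:
    """Return the line number of every call to `func_name` in `source`."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # Match only direct calls by name, e.g. acquire(...), not obj.acquire(...).
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == func_name
    ]
```

Because this works on the parsed syntax tree rather than raw text, it won't be fooled by the function name appearing in comments or strings — the same advantage a compiler-integrated checker has over grep.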
Fortify also supports user-written rules. It’s crucial, Chess says, that customers who don’t want to reveal proprietary details about their systems be able to work independently to expand the analysis coverage. PREfast, bound tightly to the Microsoft compiler, does not support user-written rules, but FxCop does, using .NET itself as the rule language.
Could there be a standard way to represent these rules, enabling direct comparison of analyzers and pooling of knowledge about common patterns? In principle, that’s possible; in practice, it seems unlikely anytime soon. Extensibility is important, but vendors of source code analyzers must first convince programmers to take another look at a class of tool that many have long dismissed as irrelevant. Just give it a try, they say, and see if a scan of your source code pinpoints important bugs that you wouldn’t otherwise have found.