Moore’s Law is also driving source code analysis forward. Exhaustive analysis of code can chew up vast quantities of computing resources. At Microsoft, for example, PREfix performs deep analysis of millions of lines of C and C++ code, but it can run only infrequently, as part of a centralized build process. Developers typically use PREfast, PREfix’s less resource-intensive cousin, for routine daily checking.
As available computing horsepower grows, we can devote more of it to program verification. Also, Engler notes, faster CPUs tend to marginalize the optimization work that was the traditional focus of compiler professionals. “As the range of applications that benefits from optimization gets smaller,” he says, “there’s been a push to find something else interesting.”
Fortify’s Chess adds that there has been a fundamental philosophical shift in how we approach source code analysis. Early researchers were interested in program correctness, he says. The goal was to prove that “my program will, under all circumstances, compute what I intend it to compute.” Now, he says, the emphasis has switched to a more tractable form of proof: that “there are specific properties my program does not have.” Buffer overflows and deadlocks are examples of such properties.
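To see the difference, consider a minimal C sketch (the function name and buffer size here are hypothetical). An analyzer makes no attempt to prove that the routine computes the right result; it tries only to show that no input can make the code write past the end of its buffer:

    #include <string.h>

    /* The property to disprove is "this code can overflow a
       buffer," not "this code is correct." */
    void copy_name(const char *src)
    {
        char buf[16];
        /* strcpy writes past the end of buf whenever src holds 16
           or more characters: exactly the kind of defect a tool
           can flag without knowing what the function is for. */
        strcpy(buf, src);
    }

Proving the narrower claim, that no such overflow is reachable, is far more tractable than proving full correctness.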
Microsoft’s Chris Lucas, group program manager for Visual Studio Team Developer Edition, thinks that better rules, more than better techniques, account for the growing efficacy of source code analysis. Like Coverity’s analysis of Linux code, Microsoft’s analysis of Windows code proved to be an effective way to flush out bugs. Within Microsoft, the rule set evolved iteratively. “First the PPRC [Programmer Productivity Research Center] identified some interesting rules,” Lucas says, “and then they were applied to the Windows source base.” The rules that yielded important defects without creating too much “noise” were codified, and then the cycle repeated. “It’s all about tuning the rule set,” Lucas explains.
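A hypothetical illustration of that tuning (the code below is invented for this article, not taken from Microsoft’s rule set): a naive rule that flags every call to strcpy reports safe and dangerous uses alike, so it is refined until only the genuinely dangerous call surfaces.

    #include <string.h>

    void handle_request(const char *user_input)
    {
        char greeting[8], name[8];

        /* An untuned rule ("flag every strcpy") reports both calls. */
        strcpy(greeting, "hi");      /* fits in greeting[8]: noise      */
        strcpy(name, user_input);    /* unbounded copy: the real defect */
    }

Tuning, in this sense, means keeping the second report while suppressing the first.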
Benjamin Chelf, who studied under Engler and is now Coverity’s chief analyst, agrees that today’s analyzer is not your grandfather’s lint. “This isn’t just another tool that’s going to spew a ton of useless warnings,” he says. Finding the right level of analysis has always been the central challenge for these tools. With too little precision, a tool floods the developer with false positives; with too much, an analysis can take years to complete. Fortunately, the state of the art has improved in recent years. Today’s tools strike the right balance, Chelf says, and he invites developers to revisit their old assumptions.
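A classic precision tradeoff, sketched here in schematic C rather than drawn from any particular tool, involves correlated branches. A cheap, path-insensitive analysis warns about a null dereference that can never actually occur; recognizing that the two branches share the same condition removes the false positive, but at the cost of tracking many more program states:

    void update(int flag)
    {
        int value;
        int *p = 0;

        if (flag)
            p = &value;
        /* ... unrelated work ... */
        if (flag)
            *p = 1;   /* p is null only on paths where flag is false,
                         and those paths never reach this statement */
    }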
Everyone agrees that modern source code analysis improves software quality. Techniques differ, however, in the scope of their analysis and in how much specific knowledge of operating systems and application frameworks they require.