Compuware targets untested code
DevPartner Fault Simulator 1.0 shows promise but has several faults of its own
One of the little-discussed realities of software development is that many packages ship without having been fully tested. Despite the new emphasis on continuous regression testing and the sophistication of quality-assurance tools, there's an intractable portion of code that just isn't exercised.
This untested code generally lives in routines that handle rare errors, such as disks being full, memory being exhausted, or network services suddenly going offline. These problems are exceedingly hard to duplicate in a testing lab, so the code that handles these exceptions evades thorough testing.
Compuware's DPFS (DevPartner Fault Simulator) 1.0 is the first product to tackle this problem directly. It simulates numerous hard-to-create errors by intercepting calls to the Windows OS and returning an error indication or failure event, thereby forcing the software to exercise the code that handles those exceptions. It's an excellent concept. Unfortunately, DPFS is hurt by its limited support for Windows code and lack of integration with other tools.
DPFS comes in two flavors: a command-line utility and a plug-in for Microsoft Visual Studio .Net. In both, testers specify which faults they want to simulate, where in the program the fault should occur, and how many passes through the code should occur before the fault is triggered. This information is recorded in a file that then drives DPFS.
Once the application starts up, DPFS monitors it and injects the fault at the specified point. DPFS records the results in a log file, which can be examined to glean insights into the program's error-handling mechanisms.
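The mechanism described above can be sketched in miniature: wrap a system-facing call so that, after a configured number of successful passes, it raises a simulated error instead of completing. The names here (FaultSpec, inject) are hypothetical illustrations of the technique, not DPFS's actual API or file format, and the sketch is in Python rather than the Windows/.Net environment DPFS targets.

```python
import builtins
import functools
import os

class FaultSpec:
    """Which call to fail, after how many successful passes, and with what error."""
    def __init__(self, target_name, passes_before_fault, error):
        self.target_name = target_name
        self.passes_before_fault = passes_before_fault
        self.error = error

def inject(module, spec, log):
    """Replace module.<target_name> with a wrapper that raises spec.error on the Nth call."""
    original = getattr(module, spec.target_name)
    state = {"calls": 0}

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        state["calls"] += 1
        if state["calls"] > spec.passes_before_fault:
            log.append("injected fault into " + spec.target_name)
            raise spec.error
        return original(*args, **kwargs)

    setattr(module, spec.target_name, wrapper)
    return original  # keep a handle so the hook can be removed afterward

# Let two opens succeed, then simulate a disk-full failure on the third pass.
log = []
spec = FaultSpec("open", passes_before_fault=2, error=OSError("simulated disk full"))
original_open = inject(builtins, spec, log)
caught = None
try:
    for _ in range(3):
        with open(os.devnull) as f:
            pass
except OSError as exc:
    caught = exc
finally:
    builtins.open = original_open  # restore the real call
```

The point of the exercise is that the third open fails without any disk actually being full, so the surrounding error-handling code runs in an ordinary test environment.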
DPFS tests for two broad categories of faults: environmental problems (such as disk full, lack of memory, missing files, or insufficient access privileges) and .Net-specific exceptions. The latter provides a wide palette of possibilities, ranging from I/O errors and network-access problems to XML errors, exceptions in various collections, and even client SQL issues. Unmanaged code triggers only environmental faults, whereas managed .Net code sets off both kinds.
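Environmental faults of the kind listed above typically surface as OS error codes, which managed runtimes wrap in typed exceptions. A hedged sketch, in Python rather than .Net, of the handler branches such faults are meant to reach (the function name is an illustration, not anything DPFS provides):

```python
import errno

def describe_environmental_fault(exc: OSError) -> str:
    """Map an OS-level errno to the handler branch it should exercise."""
    if exc.errno == errno.ENOSPC:
        return "disk full: prompt the user to free space"
    if exc.errno == errno.ENOMEM:
        return "out of memory: release caches and retry"
    if exc.errno == errno.ENOENT:
        return "missing file: recreate it or ask for a new path"
    if exc.errno == errno.EACCES:
        return "insufficient privileges: report and abort"
    return "unexpected error: log and shut down"

# Each branch is exactly the kind of rarely-run code a fault simulator
# is meant to reach: easy to write, hard to trigger naturally.
print(describe_environmental_fault(OSError(errno.ENOSPC, "No space left on device")))
# → disk full: prompt the user to free space
```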
I was considerably excited by the prospect of exercising little-tested code. As I began testing programs with DPFS, however, I began to rethink the tool's value. First, much exception code is plain vanilla stuff: You post an error dialog and you close things down gracefully. Only with complex shutdowns or deeply nested exceptions is there reason to worry that the code can't be verified by code review alone.
The second problem is that testing the code requires many separate runs of DPFS, because the first fatal fault shuts down the program under test. To cover all the possible exceptions, I had to run numerous individual tests and pore over the results.
The most important results of exception testing tend to be those produced by your own code (the warning dialog box, the shutdown, the log file, and so forth). DPFS supplements these with information about what's going on internally, including call-stack traces and error-handler data on the "catch" and "finally" blocks in .Net code. This data is useful for tracing the path taken during exception processing.
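The path information DPFS reports — which "catch" and "finally" blocks ran, and with what call stack — can be illustrated by hand. A minimal sketch in Python (standing in for .Net's try/catch/finally) that records each handler block as it executes and captures a stack trace at the point the simulated fault is caught:

```python
import traceback

def risky_operation():
    # Stand-in for a call that a fault simulator would force to fail.
    raise IOError("simulated network failure")

def run_with_tracing():
    path = []          # which handler blocks executed, in order
    stack_trace = None
    try:
        risky_operation()
        path.append("try completed")
    except IOError:
        path.append("catch: IOError")
        stack_trace = traceback.format_exc()  # where the fault surfaced
    finally:
        path.append("finally: cleanup")
    return path, stack_trace

path, trace = run_with_tracing()
print(path)  # → ['catch: IOError', 'finally: cleanup']
```

The recorded path shows the "try" body never completed, the catch block ran, and cleanup still executed — the same picture DPFS's handler data gives for exception processing.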
Unfortunately, DPFS stops here. The data it generates cannot be integrated with any known code-coverage tool -- not even Compuware's own DevPartner product. Hence, a principal benefit of DPFS -- demonstrable 100 percent code coverage -- is unattainable.