Saving TDD from itself

Unit testing is now widely practiced and poorly understood

[Image: Gource visualization of the Linux source tree]


Before Extreme Programming and TDD (Test-Driven Development) were absorbed into the Agile movement, unit testing was well understood and infrequently practiced. Some fifteen years on, unit testing is now widely practiced and poorly understood.

Recently, many loud and repeated calls to dump TDD have been made. If this results in dumping unit testing as it has come to be, I'm all for it. But let's not be too hasty.

Development-Driven Testing

Although it is called "test-driven," the key to understanding TDD is its second D: development. The primary role of TDD is to produce a programmer's scaffolding in self-checking micro-steps. This forces programmers to make their notional solutions concrete from the start, which tends to prevent questionable experiments or constructs ("code smells"). If followed diligently, TDD has the beneficial side effect of producing an executable regression test suite that is kept consistent with the codebase under test. Such a suite is necessary for continuous integration and has enabled rapid development cycles. TDD achieves all this through simple test cases programmed in a standard framework at the same time as the application code is written -- hence the "test-driven" name.
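To make the micro-step idea concrete, here is a minimal sketch of one TDD cycle in Python. The `slugify` function and its tests are hypothetical, invented for illustration; the point is that each tiny test is written first and the smallest implementation that satisfies it follows.

```python
def slugify(title):
    # Minimal implementation, written only after the tests below existed.
    return title.strip().lower().replace(" ", "-")

def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Saving TDD") == "saving-tdd"

def test_slugify_lowercases_and_trims():
    assert slugify("  Hello World ") == "hello-world"
```

Each passing micro-step becomes part of the scaffolding: the next change to `slugify` is checked against these tests automatically.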

However, I've always thought that DDT, for Development-Driven Testing, would be more apt. The TDD canon has next to nothing to say about what to test and why. As a result, "testing" in TDD has come to mean the creation of a programmer's scaffolding that minimally exercises basic functionality. In its fifteen-year arc, TDD has been the subject of hundreds of conferences, webinars, articles, blogs, and at least a dozen books. In all that I've seen, there are no new ideas for test design, only elaborations of technical minutiae for x-unit frameworks and mocking tools. The essential problems of test design, test oracles, and test effectiveness are not part of the TDD vocabulary.

The TDD backlash 

With the enthusiasm that Agile brought to TDD, its development focus engendered a kind of cargo cult that produced oceans of ineffective test code. I've seen, time and again, tens of thousands of TDD-produced tests that are worse than a waste of time. For example, a test sets a widget's (e.g., a check box's or scroll frame's) foreground color to green, then asserts that the widget's getter for that property returns green. This is repeated for every widget in the UI. Developers can claim they have thousands of passing tests, but the result is meaningless and brittle.
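The tautological pattern described above looks roughly like this sketch, using a hypothetical `CheckBox` widget. The assertion can only fail if the property accessor itself is broken, so it verifies nothing about behavior anyone cares about.

```python
class CheckBox:
    """Hypothetical widget with a color property, for illustration only."""
    def __init__(self):
        self._foreground = "black"

    def set_foreground(self, color):
        self._foreground = color

    def get_foreground(self):
        return self._foreground

def test_checkbox_foreground_is_green():
    box = CheckBox()
    box.set_foreground("green")
    # Passes trivially: it re-reads the value the test itself just stored.
    assert box.get_foreground() == "green"
```

Multiply this by every widget and every property and you get the thousands of "passing" tests that prove almost nothing.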

This so-called "testing" is a waste. It consumes programmer time and energy that could be better applied to effective testing. It often results in a bloated test codebase that rapidly becomes unmaintainable. As a result, it is not uncommon to see TDD-produced test codebases abandoned for being too brittle and letting too many virulent bugs escape.

Not surprisingly, there's been a backlash to this waste. David Heinemeier Hansson's reflections are instructive.

Saving TDD from itself

As a programming strategy, TDD achieves many useful results. As a testing strategy, TDD cannot be trusted.

TDD can be extended to achieve meaningful testing without the waste. Here's how.

  • Develop your code following TDD practices, except for its feeble testing notions. Instead, just implement a few simple smoke tests for each method or message in your x-unit framework. This will produce useful scaffolding and achieve TDD's development benefits.
  • Maintain code hygiene at all times using static analyzers and style checkers.
  • When your components are complete for a sprint or release, design a test suite that fully exercises the variation of their input and configuration domains, causes and effects, and activation sequences. The test design patterns in Testing Object-oriented Systems explain it all. Implement these tests with your x-unit framework.
  • Run the tests and correct any bugs you find.
  • Instrument your code and rerun the test suite to measure the decision coverage (at least) or mutant-kill ratio (or better yet, both) that your test suite achieves.
  • Scrutinize the code that isn't reached or hides mutants. Determine why, then tweak your test suites to reach these blocks and/or kill the mutants. Make judicious use of mocks only as necessary to reach coverage goals.
  • Repeat until all tests pass and your test suite produces at least 85% coverage or mutant-kill ratio. Evaluate the risks of failures in any uncovered/hiding code. Stop testing if the risk is tolerable and document your analysis in a test report. If the risks aren't acceptable, tweak your tests and repeat.
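The designed-test step above can be sketched as follows, for a hypothetical `tiered_discount` function. Instead of one happy-path smoke test, the cases exercise each partition of the input domain and the boundaries between partitions, which is where boundary bugs hide.

```python
def tiered_discount(amount):
    """Return discount rate: 0% under 100, 5% from 100 to 499, 10% at 500+."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    if amount < 100:
        return 0.00
    if amount < 500:
        return 0.05
    return 0.10

def test_tiered_discount_partitions_and_boundaries():
    cases = [
        (0, 0.00),    # lower boundary of first partition
        (99, 0.00),   # upper boundary of first partition
        (100, 0.05),  # lower boundary of second partition
        (499, 0.05),  # upper boundary of second partition
        (500, 0.10),  # lower boundary of top partition
    ]
    for amount, expected in cases:
        assert tiered_discount(amount) == expected

def test_tiered_discount_rejects_negative_amounts():
    try:
        tiered_discount(-1)
    except ValueError:
        pass  # expected
    else:
        assert False, "expected ValueError for negative amount"
```

Running a suite like this under coverage.py with branch measurement enabled (`coverage run --branch`) yields the decision-coverage figure the steps above call for; mutation tools can then report the mutant-kill ratio.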

TDD's scaffolding and focus are too good to lose. Adding test suites designed to find bugs replaces expensive waste with effective verification.
