Agitator 3.0 puts Java code through the wringer

Detail-oriented structured rules, new Domain Experts streamline code testing

Anyone with a 2-year-old knows that one of the most effective ways to test your software is to put it in front of the child: If there’s any odd combination of clicks and inputs that will crash the program, the child will invariably find it. Agitator 3.0 is certainly far more rational in its testing procedures than a toddler, but it takes a similar tactic, handily testing your Java code by sending over a maelstrom of test values to ferret out errors.

The package will parse the code to look for potential problems and then build the testing code to target these dangers, choosing numbers and dates from a specific range and adjusting the range according to the constants it sees in your code. If a method seems to be using large values, the random-number generator sends large values its way; if it wants dates, it sends dates. If you have a better idea of the types of data that might cause trouble, you can also customize this behavior and focus the selection of test data by supplying your own subclasses, known as factories, to the test procedures.
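Agitator's own factory API isn't spelled out here, but the idea can be sketched in plain Java: a small generator class that narrows random test input to the range a method actually cares about, plus the boundary values most likely to expose off-by-one mistakes. The class and its range are hypothetical illustrations, not Agitar's API.

```java
import java.util.Random;

// Hypothetical sketch of a test-data "factory": instead of letting a tool
// throw completely arbitrary integers at a method, concentrate the values
// in the range the code under test actually accepts.
public class OrderIdFactory {
    private static final Random RANDOM = new Random();

    // Assume the code under test only accepts six-digit order IDs,
    // i.e. values in [100000, 999999].
    public static int nextOrderId() {
        return 100000 + RANDOM.nextInt(900000);
    }

    // Edge cases worth forcing into every run rather than hoping
    // the random generator stumbles onto them.
    public static int[] boundaryValues() {
        return new int[] {99999, 100000, 999999, 1000000};
    }
}
```

The payoff is that every agitation run spends its iterations on plausible inputs and the documented edge cases, instead of wasting most of them on values the method rejects immediately.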

The code testing is only half of the game. The software will also enforce many standard rules of thumb for developing Java code, such as closing your JDBC connections in the finally block to guarantee that the connection is truly closed. You may turn these coding rules on or off and, if your shop feels the need, add some new ones.
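The JDBC rule is worth seeing concretely. A minimal sketch of the pattern Agitator enforces (the URL and table name are placeholders): the close calls live in the finally block, so the connection is released whether the query succeeds or throws.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcCleanupDemo {
    // Count the rows in a table, closing JDBC resources in finally so they
    // are released even when getConnection or executeQuery throws.
    public static int countRows(String url, String table) throws SQLException {
        Connection conn = null;
        Statement stmt = null;
        try {
            conn = DriverManager.getConnection(url);
            stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table);
            rs.next();
            return rs.getInt(1);
        } finally {
            if (stmt != null) stmt.close();
            if (conn != null) conn.close(); // runs on success and on failure
        }
    }
}
```

Skip the finally block and a single exception mid-method leaks a connection; under load, a pool of leaked connections is exactly the kind of error that only shows up in production.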

Agitator bundles all of this information into a development “dashboard” that displays the success or failure of the various classes and packages of code with color-coded green and red bars. This mechanism may be ideal for a project manager who is attempting to herd developers along the same path. Running these tests daily will enforce the rules automatically.

To test Agitator, I set it on some of my old code, a process that is very easy if you happen to use Eclipse because the application is built as a set of Eclipse plug-ins. After opening up the workspace, I pressed one button and Agitator started scanning my code for errors and pushing random values at my methods. When it was done, the results appeared in a list of errors and warnings, much like the messages from the compiler complaining about errant import statements or semicolons. The software found a host of serious and minor errors; for instance, Agitator seemed worried about catching general exceptions, wanting the code to spell out the exact type of the exception being caught.

This was a relatively small detail, but other messages were eye-opening. For example, one method was not using the equals method to test whether two strings are the same, a mistake akin to a writer substituting “they’re” for “their.” Another constructor was calling nonstatic, nonfinal methods, a process that can cause errors when the object is not completely initialized. (This is an area where Java’s semantics need a lot of work.)
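Both findings are easy to reproduce. A minimal sketch (the class and field names are my own, not from the tool's output): the first pair of methods shows why == on strings is the wrong test, and the Base/Derived pair shows the constructor trap -- the superclass constructor dispatches to the subclass's method before the subclass's fields have been initialized.

```java
public class ReviewFindings {
    // Finding 1: == compares object references; equals compares content.
    static boolean sameTextWrong(String a, String b) {
        return a == b; // false for two distinct String objects with equal text
    }

    static boolean sameTextRight(String a, String b) {
        return a != null && a.equals(b);
    }
}

// Finding 2: a constructor calling a nonstatic, nonfinal method runs
// subclass code before the subclass is fully initialized.
class Base {
    Base() {
        describe(); // virtual call during construction -- the trap
    }

    void describe() { }
}

class Derived extends Base {
    String label = "ready"; // assigned only AFTER Base() finishes
    String seen;

    @Override
    void describe() {
        seen = label; // runs from Base(); label is still null here
    }
}
```

After `new Derived()`, the field `seen` is null even though `label` ends up as "ready" -- the overriding method observed the object in its half-built state, which is why Agitator flags the pattern.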

I really enjoyed the discipline of the coding rules. Dyslexic-style mistakes happen, and structural testing is the only way to catch them. Although the implementation is gorgeous, the proliferation of detail can be a bit confusing if you’re not careful. For example, each method is marked up with little numbers that indicate how many times the line was executed by Agitator -- one line was called 144 times and another was called 247 times. In the same vein, the tool for drilling down into the code and seeing what these random values generated is impressive, filled with obsessive detail.

In my code, these random tests seemed to find many null-pointer errors that never appear until the code is shipped. Its random-number generator would root out the poorly written lines that passed my own tests, smoothly addressing one of the major problems with unit tests. When we write the tests ourselves, most of our code will pass those tests because they encode all of the problems that we can predict. Agitator, however, doesn’t have that bias and can pull out the errors that we didn’t think to test for, such as the aforementioned null pointers.

This lack of bias, however, can be problematic. Some methods don’t work correctly when fed some of Agitator’s values; for example, your code may check for null pointers at the beginning, but there’s no reason why every subroutine should check for a null value. If you want your code testing to focus on the right values, you need to opt those tests out or write your own value generator, something made easier by one of Agitator 3.0’s new features, Domain Experts.
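The null-check point deserves a concrete shape. A common convention, sketched here with hypothetical names: the public entry point validates its arguments once, and the private helpers it calls assume non-null input. An unconstrained fuzzer that feeds null straight to the helper will report a failure the design never allows -- which is exactly the case where you would opt the test out or constrain the generated values.

```java
public class NullCheckBoundary {
    // Public boundary: validate once, loudly.
    public static int totalLength(String[] words) {
        if (words == null) {
            throw new IllegalArgumentException("words must not be null");
        }
        return sumLengths(words);
    }

    // Private helper: trusts its caller and skips the redundant null check.
    // Feeding null directly here "fails," but only by violating the
    // contract the public method already enforces.
    private static int sumLengths(String[] words) {
        int total = 0;
        for (String w : words) {
            total += w.length();
        }
        return total;
    }
}
```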

Domain Experts present developers with a set of predefined “experts” that handle many standard areas of Java coding such as servlets, EJBs, Log4J, or Struts applications, making it simpler to encode this information and even control how the random numbers are chosen. In the case of the Struts version, it will pull information out of the config.xml to generate actions for testing the application. These new enhancements help immensely, but you will still need to customize some of the tests for full code coverage.

After putting Agitator 3.0 through its paces, I can certainly say I don’t know of any other tools that do anything close to what it accomplishes. Developers will enjoy peppering their code with tests to check its strength, but I also think the product will be popular with managers who need to watch over a medium or large group of programmers. Agitator will allow them to download and perform a number of relatively good tests on the code each week, day, or hour. These tests won’t 100 percent guarantee that the code is free of errors -- no testing tool can -- but the discipline will be good for everyone, and Agitator 3.0 will definitely catch some of the mistakes you didn’t even know existed.

InfoWorld Scorecard: Agitar Agitator 3.0
Ease of use (20.0%): 9.0
Documentation (10.0%): 8.0
Scalability (20.0%): 8.0
Capability (20.0%): 9.0
Value (10.0%): 9.0
Performance (20.0%): 9.0
Overall Score (100%): 8.7