Agile developers test, but aren’t testers

What’s the difference between a developer, a tester, and a software development engineer in test (SDET)?

This isn’t a lead-in to a joke. It’s a serious question being debated across the software development community. Agile adoption has blurred the historical distinction between testers and developers, and that’s a good thing. When all goes well, developers test more and take greater responsibility for quality, while testers start designing tests early in each sprint and stay constantly in the loop thanks to colocation and daily standups. The result: fewer defects are introduced into the code base, and the role of tester is elevated from finding the manifestations of developers’ mistakes to protecting the user experience.

However, there’s a great debate stirring about how much testing responsibility should be transferred to developers—and how important it is for testers to know programming. I think that both proposed “mergers” (developers becoming testers and testers becoming programmers) threaten to undermine the goals of agile. Here’s why:

1. Beyond GAFA, asking developers to be testers slows innovation

If you’re Google, Apple, Facebook, or Amazon (GAFA), you’ll have a constant supply of top talent ready to help you get innovations to market at lightning speed. If you need to accelerate existing projects or launch new ones, you can pick and choose among the world’s top developers. You can even get away with placing top-tier developers in an SDET role. Many eager developers will tolerate this not-so-ideal position in hopes of one day becoming a full-fledged developer at their dream employer.

However, in large enterprises, you usually don’t have the luxury of top-tier developers knocking on your door. Attracting and retaining good developers is an ongoing struggle. As a result, it’s hard enough to satisfy the business’s insatiable demand for software when all of your available developers are focused on developing. You simply can’t afford to have developers focused on high-level testing tasks that professional testers can handle just as well, if not better.

2. The leanest test automation approaches don’t require programming skills

Development methods have already become leaner and more lightweight to help teams meet expectations for more software, faster. Testing technologies have advanced as well, with scriptless approaches architected for the rapid change endemic to agile. However, many teams still cling to the mindset that test automation requires the high-maintenance, script-based approach introduced decades ago, an approach that still delivers underwhelming results (automation rates of 20 percent, at best). Across virtually every industry, people embrace software that enables high degrees of automation by abstracting away complexity. It’s time for the software testing industry to accept this as well.

In our research at Tricentis, conducted in enterprise environments across various industries, we’ve found that scriptless approaches yield significantly higher rates of sustainable automation than scripted approaches. They also remove the most common testing bottlenecks that trouble agile teams: (1) they broaden the range of team members who can contribute to testing, (2) their reusability and modularity make them easier to keep in sync with evolving applications, and (3) they spare you from maintaining a test code base whose sole purpose is to test your actual code base.

3. You’ll fail faster with both developers and testers testing

I guarantee that if you have both developers and professional testers testing, you will expose critical issues faster. We’re all familiar with the curve showing that the time, cost, and effort of resolving a defect rise exponentially the longer it goes undetected. Detecting each defect as soon as feasible has a tremendous impact on in-sprint velocity and helps prevent field-reported defects from derailing future sprints.

“Development testing” is ideal for exposing coding errors. It involves checking the functionality and stability of the code written to implement a user story, and it is critical. If a low-level mistake enters the code base (for a simplistic example, a multiplier with a misplaced decimal point), it’s far more efficient to find and diagnose that problem with a direct unit test than with an end-to-end test that checks functionality from the user’s perspective.
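
To make that concrete, here is a minimal sketch of such a unit test in Python. The apply_sales_tax function and its 8 percent rate are hypothetical stand-ins for illustration, not code from any particular project:

    import unittest

    def apply_sales_tax(amount, rate=0.08):
        """Return the amount with sales tax applied (hypothetical example)."""
        # A misplaced decimal here (e.g., 1 + 0.8 instead of 1 + 0.08)
        # would be caught immediately by the test below.
        return round(amount * (1 + rate), 2)

    class ApplySalesTaxTest(unittest.TestCase):
        def test_eight_percent_rate(self):
            # Pinpoints the faulty multiplier directly, with no UI involved.
            self.assertEqual(apply_sales_tax(100.00), 108.00)

    if __name__ == "__main__":
        unittest.main()

A test like this fails at the exact function that contains the mistake, which is precisely the diagnosis an end-to-end test cannot provide cheaply.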

However, if your testing consists primarily of bottom-up tests designed by engineers, you’re likely to overlook critical issues that your users will not.

Does the new functionality work seamlessly within broader end-to-end transactions? If the user exercises the application in ways that the developers didn’t anticipate, will the application respond in a reasonable manner? Does your functionality properly interact with the full range of behavior that dependencies might exhibit? With professional testers rigorously exercising core functionality in the context of a realistic business transaction (and from the top-down perspective of the user), you will inevitably discover a host of issues that would otherwise go unnoticed until production.
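
For contrast, a top-down check might look something like the following sketch, which happens to use Selenium WebDriver in Python. The storefront URL, element locators, and expected total are all hypothetical placeholders under assumed application behavior:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # Walk the same path a user would: search, add to cart, check out.
        driver.get("https://shop.example.com")
        driver.find_element(By.ID, "search").send_keys("blue widget")
        driver.find_element(By.ID, "search-submit").click()
        driver.find_element(By.CSS_SELECTOR, ".result .add-to-cart").click()
        driver.find_element(By.ID, "checkout").click()
        # Assert on the business outcome the user cares about,
        # not on an internal implementation detail.
        total = driver.find_element(By.ID, "order-total").text
        assert total == "$108.00", f"unexpected order total: {total}"
    finally:
        driver.quit()

A test at this level says nothing about which line of code is wrong, but it is the kind of test that proves the whole transaction still works for the user.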

When developers test in concert with professional testers, you’ll get a much sharper understanding of the business risks associated with the release. You’ll also gain the opportunity to resolve high-risk issues before your users ever encounter them. This is the ultimate goal of testing—and it requires more collaboration among roles, not more developer/tester controversy.
