Training and bug reporting should be part of every app

In an ideal world, tools that bridge the gap between user frustration and developers' attempts to fix problems would be part of the build


People aren't good at reporting software bugs. Moreover, software isn't good at self-reporting -- particularly when it's browser-based software. These conversations happen every day:

User: The new Frobinator isn't working anymore.
Developer: What did you do, what did you expect to happen, and what did happen?
User: I clicked on the Frobinate button, I expected it to work, it didn't.

We've all been there. Frobinate, let's say, was supposed to populate a widget with data. Maybe it found no data at all; maybe it found the wrong data. Maybe the widget never appeared. Whatever the problem may be, we have to be able to reproduce it in order to address it.

That's especially challenging when the user might be running Chrome or Safari or Firefox or Internet Explorer, and when the application is commingling with others in the chaotic environment we call the modern Web.

I've been spending a lot of time lately learning how to use Selenium WebDriver, the premier automation toolkit for functional testing of Web software. The WebDriver API enables test scripts to simulate what a user would do: Click the Frobinate button, wait for the data widget to appear, and look for expected values in the data displayed on the page.

My experience so far has been equal parts joy and frustration. Joy: I actually can now run a suite of tests that exercise our app using different versions of our Chrome extension as well as other browsers. Frustration: It's really hard! WebDriver aims to add to the browser a capability that was never designed in. That's always going to be an uphill slog.

Functional tests that simulate what users do and see will, I hope, usefully complement our unit testing. But unit tests will always be the foundation of our testing strategy. If we can augment them with scripts that replay and evaluate what users do and see, that'll be icing on the cake.

Whenever I engage with this kind of automation technology, though, I can't help but imagine other uses. Here are three of them:

1. Auto-updated screencasts. Screencasts are a great way to show software in action. But they are frozen in time and software isn't. If you had a script that drove the Frobination scenario, you could regenerate the screencast as part of the build process.

2. Guided training. Documentation is a last resort. People learn by doing. Ideally, your software enables them to discover its uses as they use it. In practice, there's always a need for guidance. Scripted interactions are a really effective form of training. Like screencasts, they are hard to create and maintain. Could you add them to the build process, too?

3. Bug reporting. Now we come full circle. A user clicks the Frobinate button, and nothing happens, or the wrong thing happens.

User: The new Frobinator isn't working anymore.
Developer: OK, click Replay and tell me what happens.
User: It still isn't working.
Developer: OK, click Send and I'll get back to you.

This is shameless hand-waving, of course. We may never arrive in this happy place. Meanwhile, we wrestle with Frobinator issues the old-fashioned way. But we can dream, can't we?