Capturing user experience closes the feedback loop

Portable labs offer media-rich developer tools to tap ‘beginner’s mind’

If you want to make software developers squirm, force them to watch people using their software.

At the Eighth International Python Conference, HCI (human/computer interaction) expert Dr. Randy Pausch talked about doing just that for his educational software project, Alice. Patrick Phalen, a developer who attended Pausch’s talk, recalls: “I vividly remember laughing out loud when Randy described the extreme methods they used to get their users to adopt beginner’s mind. They required developers to sit on their hands in chairs behind newbies to observe them gaining familiarity with Alice. They were not allowed to reach over and commandeer the mouse or keyboard.”

Developers who possess deep but tacit knowledge of complex hardware and software environments are notoriously unable to project themselves into the beginner’s mind. Observation is the only way to bridge the gap, but Pausch’s innovative exercise notwithstanding, that’s easier said than done. It’s expensive to rent a so-called “fixed lab” and to bring people there to conduct a formal study. Even commercial developers can’t do this routinely; many enterprise developers never get the opportunity to see users interact with their wares.

Portable labs -- available from Alucid Solution, Ovo Studios, and UserWorks, among others -- are a cheaper and more convenient alternative to fixed labs. These are typically suitcases packed with gear for capturing and editing videos of both onscreen activities and the users performing them.

It was inevitable that this cluster of capabilities would intersect with increasingly powerful and media-capable PCs, and that’s exactly what is happening.

TechSmith’s Camtasia Studio is one of the leading tools used to capture and edit videos of screen activities, and the company recently released Morae, a Windows-based suite of tools for capturing and analyzing Windows-based software experiences. UsersFirst, another leading vendor in the market, is preparing to beta-test VisualMark, a similar suite. The VisualMark observation and analysis tools run on the Macintosh and will be used initially to capture and analyze Windows-based experiences. Because the product is based on the VNC (Virtual Network Computing) remote viewer, it will also be able to observe both Linux and Mac desktops.

There are shoestring alternatives to such tools. Remote screen-sharing coupled with a phone call can yield valuable insight into user experience. When problem scenarios can’t be reproduced during a live remote session, conventional screen video tools are another option.

Scaled-down alternative

Windows Media Encoder 9, for example, which is available free from Microsoft, offers a capable but little-known video screen-capture capability. You can use it to record screen activity -- plus voice-over -- to a compact WMV (Windows Media Video) file that can be easily exchanged or viewed on the Web. This is a great way to produce simple training videos.

It can also be useful for tech support and basic usability observation. But for more formal observation, you’ll want a solution that’s less intrusive, captures video as well as audio, and supports analysis and editing of the raw material.

With UsersFirst’s VisualMark, screen and video capture is accomplished without installing any software on the target machine. The tools run directly from a CD-ROM on the target machine, and they relay streams to the Mac-based portable lab. TechSmith’s Morae requires installation of a recorder program, which can then be started manually by the user or be configured to begin recording at a certain time or in response to a system event such as an application launch.

VisualMark relays both streams -- screen activity and video of the user -- to the portable lab for live observation. According to Pete Gordon, the developer of VisualMark, this method most closely approximates the fixed-lab environment in which observers may be watching subjects through a one-way mirror.

Morae’s remote viewer enables the observer to watch screen activity on the target machine but not live video of the user. Instead, it captures both streams to the user’s drive (or a LAN drive) for later analysis. Morae’s product manager, Shane Lovellette, notes that an out-of-band speakerphone can be used to relay at least the voice channel in a situation where the user is invited to think aloud while performing a scenario.

Both products enable the usability analyst to identify key interaction sequences, annotate them, and distill them into a series of highlights. The final output is a reel of short clips that shows how and why users go astray, and it provides a framework for discussing alternate scenarios.

Multiple search capabilities

In both cases, markers with annotations can be inserted during or after the recording session. Indexing the streams in this way prepares them for editing. Morae, in addition, captures window, mouse, keyboard, and Web-page events in a synchronized way. An analyst trying to find the point at which a user confirms a dialog box can, for example, search for the “OK” event. There’s also a companion text-search feature that enables the analyst to jump to the point at which a user types, for example, a ZIP code.
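Neither vendor publishes its event format, but the idea is easy to sketch. As a rough illustration (all names and the data model here are hypothetical, not Morae's actual API), a synchronized event log can be searched by event kind or by typed text:

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float  # seconds from the start of the recording
    kind: str         # e.g. "mouse", "keyboard", "window", "webpage"
    detail: str       # button label, typed text, page URL, etc.

def find_events(log, kind=None, text=None):
    """Return events matching an event kind and/or a text fragment."""
    hits = []
    for e in log:
        if kind is not None and e.kind != kind:
            continue
        if text is not None and text not in e.detail:
            continue
        hits.append(e)
    return hits

# Toy log: the user dismisses a dialog, then types a ZIP code.
log = [
    Event(12.4, "window", "Save dialog opened"),
    Event(15.1, "mouse", "OK"),
    Event(42.7, "keyboard", "48823"),
]

ok_clicks = find_events(log, kind="mouse", text="OK")   # jump to the "OK" event
zip_entry = find_events(log, text="48823")              # jump to the typed ZIP code
```

Because every event carries a timestamp, a hit in the log translates directly into a seek position in the captured video.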

Morae’s “rich recording” technology not only instruments the streams for searching, it can also count the number of mouse clicks or Web-page views required to complete a task. As a result, usability analysts “can focus more on the qualitative side as they’re observing tests,” TechSmith’s Lovellette says.
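Counting events per task is the same kind of bookkeeping. A minimal sketch, again with an invented log format rather than anything vendor-specific, tallies events of one kind inside a task's time window:

```python
# Toy event log: (timestamp_seconds, kind) pairs for one recorded session.
log = [(3.2, "mouse"), (5.0, "webpage"), (6.8, "mouse"),
       (9.1, "mouse"), (14.5, "webpage"), (16.0, "mouse")]

def count_in_task(log, start, end, kind):
    """Count events of the given kind inside a task's time window."""
    return sum(1 for t, k in log if k == kind and start <= t <= end)

# Suppose the first task ran from 0 to 10 seconds.
clicks = count_in_task(log, 0.0, 10.0, "mouse")    # mouse clicks for the task
pages = count_in_task(log, 0.0, 10.0, "webpage")   # page views for the task
```

With the counting automated, the analyst's attention during a session is free for the qualitative questions the numbers can't answer.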

Current trends in software development will, in theory, make it even easier to correlate the different mental models of users and developers. Mouse clicks and Windows events are useful reference points, but they correlate weakly to the scenarios that software implements and that users perform. In a service-oriented architecture, however, high-level scenarios need not be inferred from low-level events -- they can be seen directly in transparent XML pipelines.
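To see why, consider a hypothetical message captured from such a pipeline. The scenario is explicit in the markup -- no inference from clicks required:

```python
import xml.etree.ElementTree as ET

# A hypothetical message captured from an XML pipeline: the high-level
# scenario ("submit-order") is named in the markup itself.
message = """
<request scenario="submit-order">
  <customer id="c42"/>
  <item sku="A-100" qty="2"/>
</request>
"""

root = ET.fromstring(message)
print(root.get("scenario"))  # prints "submit-order"
```

An observation tool tapping that pipeline could index a recording by scenario name rather than by window titles and button labels.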

Similarly, a user’s mental state might be observed directly rather than having to be inferred from facial expressions and tone of voice. When a user experiences stress, for example, a synchronized biofeedback monitor could pinpoint its cause. “If there’s a heart rate spike,” Lovellette suggests, “you’d look for that point in the interaction and see what stimulus caused it.”
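Lovellette's scenario is speculative, but the correlation step is simple. A crude sketch (invented data and thresholds, no real biofeedback API) detects a heart-rate spike and finds the nearest interaction event:

```python
def spike_times(samples, threshold):
    """Return timestamps where heart rate jumps by at least `threshold`
    over the previous sample (a crude spike detector)."""
    spikes = []
    for (t_prev, hr_prev), (t, hr) in zip(samples, samples[1:]):
        if hr - hr_prev >= threshold:
            spikes.append(t)
    return spikes

def nearest_event(events, t):
    """Find the interaction event closest in time to timestamp t."""
    return min(events, key=lambda e: abs(e[0] - t))

# Toy data: (timestamp, beats per minute) and (timestamp, event label).
hr = [(0, 72), (5, 74), (10, 73), (15, 95), (20, 90)]
events = [(4, "opened form"), (14, "error dialog shown"), (19, "clicked Back")]

spikes = spike_times(hr, 15)          # one spike, at t=15
cause = nearest_event(events, spikes[0])  # the error dialog at t=14
```

The synchronized timestamps do all the work: the spike becomes a seek position, and the nearest event is the candidate stimulus.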

It’s fun to speculate about such possibilities, but what’s already clear is that by streamlining a formerly cumbersome process, this new breed of capture and analysis tools can help weave user research into the normal iterative flow of software development.

“Usability testing has been very hard to do,” says Harley Manning, vice president of research at Forrester. “Anything that lowers the barrier makes it more likely that you’ll do it -- or if you’re already doing it, then more likely you’ll do it more often.”

What these tools won’t do, Manning cautions, is transform people with no experience performing usability testing into human-factors experts.

One key aspect of the discipline is careful selection of test subjects. If someone represents an edge case rather than a core constituency, turning on a camera and screen recorder could do more harm than good. “You’re liable to put in all sorts of design ‘solutions’ that in fact make the product harder to use for the majority,” Manning says.

For developers who rarely get to see people using their software, any opportunity to observe users is likely to provide valuable insight. Arguably such observation can, and should, occur throughout the software life cycle. A software team will often nominate one member to advocate for the user. Equipped with low-cost and easy-to-use recording tools, that team member can capture users’ experiences with alpha, beta, or production software. Ideally the material will be edited down to highlights, but even raw footage can be helpful.

It’s still hard for developers to watch this stuff. We have had a tendency to spare them the pain -- and to sacrifice the gain -- because connecting developers to users in this way has not often been practical. This new generation of tools aims to close that critical feedback loop, thereby helping developers figure out what ease-of-use really means to users.

Copyright © 2004 IDG Communications, Inc.