Building the virtual assistant everyone wants

The four key areas developers and testers need to master to create the high-quality virtual assistants users expect


Whether you’re integrating with Alexa or adding voice capabilities to native apps, virtual assistants are becoming a staple of the modern customer experience. With a projected 35.6 million people using voice-activated assistants at least once a month this year, companies are itching to get a piece of the voice market.

Unsurprisingly, this new wave of innovation comes with some lofty expectations. You typically don’t hear about virtual assistants that work perfectly, because meeting expectations doesn’t make headlines. But when your bot falls short, that’s when people start listening.

Branded assistants that let you order food or check your bank account without lifting a finger bring a level of convenience untouchable by traditional apps. But consumers now expect these human-like interactions to accomplish meaningful tasks. Whether it’s getting your favorite meal delivered to your doorstep or transferring your paycheck to your savings account, even the smallest of flaws throughout these experiences can result in a damaged reputation, or worse, a lost customer.

Lately, digital quality has become top of mind in the business realm. But as companies start to tap into the voice space, quality gets harder and harder to maintain. Current open source frameworks that support web and app development are failing to support voice use cases, leaving developers and testers at a loss for proper resources. Voice-enabled UIs are essentially establishing a new way for people and applications to communicate. Producing a flawless customer experience through this medium comes with an often overwhelming set of challenges.

Rising challenges in uncharted territory

On the development side, teams are no longer just testing smartphone apps for bugs and glitches (which is challenging on its own), but testing complex voice interfaces to ensure seamless, natural conversations. Embedding natural language processing capabilities in your apps is only half the battle. Once humanized conversations are enabled, your bot needs to translate speech to text and do it perfectly every time. Preparing an app to converse with people from across the globe—accents, stutters, background noises, and all—creates an additional layer of complexity that traditional app developers have yet to master.
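There is no standard framework for this kind of check yet, so many teams script their own. The Python sketch below is one minimal approach, assuming nothing about any particular speech-to-text service: the transcribe callable is a placeholder you would wire to your own engine. It scores a transcript against the expected utterance using word error rate and fails when the gap exceeds a threshold.

from typing import Callable

def word_error_rate(expected: str, actual: str) -> float:
    """Word-level edit distance, normalized by the length of the expected transcript."""
    ref, hyp = expected.lower().split(), actual.lower().split()
    # dp[i][j] = edits needed to turn the first i expected words into the first j actual words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # dropped word
                           dp[i][j - 1] + 1,         # inserted word
                           dp[i - 1][j - 1] + cost)  # substituted word
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def check_transcription(transcribe: Callable[[bytes], str],
                        audio_clip: bytes,
                        expected: str,
                        max_wer: float = 0.1) -> None:
    """Fail the test if the transcript drifts too far from the expected utterance."""
    actual = transcribe(audio_clip)  # placeholder hook for your speech-to-text engine
    wer = word_error_rate(expected, actual)
    assert wer <= max_wer, f"WER {wer:.2f} exceeds {max_wer}: got {actual!r}"

In practice, you would run the same check against recordings with different accents and levels of background noise to see where recognition starts to break down.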

If you’ve ever asked Alexa to turn on the lights and she opens the garage instead, you know how crucial quality really is. According to Botanalytics, 40 percent of bot users disengage after one interaction. That puts a ton of pressure on companies to meet user expectations from the get-go—which is easier said than done.

Virtual assistants need to accommodate a broad language dictionary and react appropriately to various sounds, slang, acronyms, or verbal shortcuts thrown their way. Tacking these additional audio-based scenarios onto your existing load tests across platforms adds time and complexity to the testing process. Companies looking to tap into the voice market have a lot of work to do before achieving the quality experience that people are searching for, and will need the proper tools and processes to get the job done right.

Building the virtual assistant users expect

Before bringing your virtual assistant to fruition, there are four key areas that developers and testers need to master:

  • Extend test lab coverage to include virtual assistants: With virtual assistants just recently entering the market, finding an automated test lab that supports your development environment isn’t easy. A proper lab should automate testing across scenarios, including text and speech entry (both clear speech and imperfect speech), and validate the response (in both text and audio) to ensure the real-world conversation will flow smoothly. Beyond functionality testing, your lab should be able to measure a bot’s responsiveness.
  • Test prioritization and scale: With the growing number of test cases you need to execute at once, you should have a facility that can prioritize tests across resources, and parallelize the execution at scale. For example, you should test for commonly mispronounced words with street sounds in the background before testing a thick German accent using slang in a noisy bar. By prioritizing scenarios experienced by a large portion of your users first, you maximize the virtual assistant’s potential for success (a minimal sketch of this ordering follows the list).
  • Big data reporting and fast feedback: With the growth of test cases and increase in release frequency, your team will have access to a significant amount of data. To leverage this data effectively, it needs to be analyzed and turned into actionable insight. Finding a testing solution that accelerates root cause analysis, groups and matches failures, and extends visibility to the entire team will both reduce your bot’s time to market and ensure that it works properly in front of your users.
  • DevOps in mind: In today’s digital landscape, having an agile development team is essential. Being aware of your virtual assistant’s performance throughout the development cycle puts your team in a strategic position to resolve issues in real time and to optimize the next version of the bot accordingly. Though agility is essential for development across the board, the level of complexity involved in building virtual assistants makes unifying quality tools throughout production all the more valuable.
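To make the prioritization point concrete, here is a minimal Python sketch of how a suite of audio scenarios might be ordered by how common they are among your users. The scenario names, clip paths, and the run_conversation hook are illustrative placeholders, not part of any particular test lab.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class VoiceScenario:
    name: str
    audio_clip: str        # path to a pre-recorded utterance (illustrative)
    expected_intent: str   # intent the assistant should resolve
    priority: int          # 1 = most common among your users

SCENARIOS = [
    VoiceScenario("clear speech, quiet room", "clips/clear_quiet.wav", "order_food", priority=1),
    VoiceScenario("mispronounced item, street noise", "clips/street_mispronounced.wav", "order_food", priority=2),
    VoiceScenario("thick accent with slang, noisy bar", "clips/accent_slang_bar.wav", "order_food", priority=3),
]

def run_suite(run_conversation: Callable[[str], str]) -> List[Tuple[str, bool]]:
    """Run the most common scenarios first and record pass/fail for each."""
    results = []
    for scenario in sorted(SCENARIOS, key=lambda s: s.priority):
        resolved = run_conversation(scenario.audio_clip)  # placeholder for your lab's entry point
        results.append((scenario.name, resolved == scenario.expected_intent))
    return results

In a real lab these scenarios would also be sharded across devices and executed in parallel; the ordering mainly determines which failures surface first when time or resources run short.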

Rising to the top of the virtual assistant landscape

Virtual assistants are in a prime position to change the way people interact with apps, phones, and connected devices for the better. As technology improves, voice-activated assistants will only get smarter. The more developers experiment with virtual assistants now, the closer they’ll be to building the convenient virtual assistant that users are searching for.

Copyright © 2017 IDG Communications, Inc.
