JavaScript tutorial: Add speech recognition to your web app

Harnessing the power of voice commands in a React map explorer app with the annyang JavaScript library

While browsers are marching toward supporting speech recognition and more futuristic capabilities, web application developers are typically constrained to the keyboard and mouse. But what if we could augment keyboard and mouse interactions with other modes of interaction, like voice commands or hand positions?

In this series of posts, we'll build up a basic map explorer with multimodal interactions. First up are voice commands, but before we can incorporate any commands, we need to lay out the structure of our app.

Our app, bootstrapped with create-react-app, will be a full-screen map powered by the React components for Leaflet.js. After running create-react-app, yarn add leaflet, and yarn add react-leaflet, we’ll open up our App component and define our Map component:

import React, { Component } from 'react';
import { Map, TileLayer } from 'react-leaflet';
import './App.css';

class App extends Component {
  state = {
    center: [41.878099, -87.648116],
    zoom: 12,
  };

  // Keep our state in sync with the map's built-in pan and zoom interactions
  updateViewport = (viewport) => {
    this.setState({ center: viewport.center, zoom: viewport.zoom });
  };

  render() {
    const { center, zoom } = this.state;
    return (
      <div className="App">
        <Map
          style={{ height: '100%', width: '100%' }}
          center={center}
          zoom={zoom}
          onViewportChanged={this.updateViewport}
        >
          <TileLayer
            url="https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png"
            attribution="&copy; <a href=&quot;http://osm.org/copyright&quot;>OpenStreetMap</a> contributors"
          />
        </Map>
      </div>
    );
  }
}

export default App;

The App component is a stateful component that tracks the center and zoom properties and passes them into the Map component. When the user pans or zooms via the map's built-in mouse and keyboard interactions, the Map notifies us so we can update our state with the new center and zoom.

With the full-screen Map component defined, our app renders an interactive map that fills the browser window.
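To preview where this series is headed, here is a minimal sketch of how annyang voice commands could drive the viewport we just set up. The helper names (registerVoiceCommands, zoomIn, zoomOut) and the command phrases are illustrative assumptions, not the article's final code; only annyang's addCommands and start calls come from the library itself.

```javascript
// Pure viewport helpers: easy to test outside the browser.
// (These names and the clamping behavior are assumptions for illustration.)
const zoomIn = (viewport) => ({ ...viewport, zoom: viewport.zoom + 1 });
const zoomOut = (viewport) => ({ ...viewport, zoom: Math.max(1, viewport.zoom - 1) });

// Wire spoken phrases to viewport updates. `annyang` is the object the
// library exposes; it is falsy in browsers without speech recognition.
function registerVoiceCommands(annyang, getViewport, onViewportChange) {
  if (!annyang) return false; // no SpeechRecognition support
  annyang.addCommands({
    'zoom in': () => onViewportChange(zoomIn(getViewport())),
    'zoom out': () => onViewportChange(zoomOut(getViewport())),
  });
  annyang.start({ autoRestart: true });
  return true;
}
```

In the App component above, this could be called from componentDidMount as registerVoiceCommands(annyang, () => this.state, this.updateViewport), so spoken commands flow through the same updateViewport path as mouse and keyboard interactions.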
