Shifting from machine logic to intelligent interfaces

While machine learning introduces new approaches to software, AI’s more transformative impact will be in the way we interface with each other, businesses, and the world around us

The most profound advancements in technology are often less about the technology itself and more about new interfaces replacing old ones. The car offered an entirely different interface to mobility than a horse or a bike did; the telephone did the same for communications; databases replaced filing cabinets; the internet and, later, mobile devices have drastically altered (if not replaced) print media.

Artificial intelligence is ushering in a broader shift than merely faster logic and data processing; it is enabling altogether new user interfaces and experiences. While AI takes myriad forms, virtually every application can be rolled into one of three buckets: machine capabilities involving vision, language, or analysis.

A growing range of technologies is powering these capabilities, including machine learning, deep learning, natural language processing, and computer vision. But it is often combinations of the above that forge altogether new ways for information systems to mimic biological systems and human-like abilities.

Consider the following interfaces that are powered by various AI technologies:

  • Voice recognition
  • Facial recognition
  • Emotion recognition
  • Hand recognition
  • Gesture recognition
  • Iris or retina recognition
  • Gaze-tracking
  • Gait (walking) recognition
  • Social robots
  • In-store robots or avatars
  • Autonomous vehicles
  • Drones
  • Hearables (with conversation-based agents)
  • Virtual agents
  • Augmented reality and mixed reality
  • Virtual reality
  • Computer vision and three-dimensional (3D) modeling
  • Virtual assistants
  • Tactile, texture, impact, grip recognition
  • Language translation services

AI is driving a shift in what can be digitized

What constitutes a data-emitting event in the physical world is expanding. Sensors and networking technology expanded the digital realm from personal computers to mobile devices to objects and infrastructure; AI is now digitizing the very modalities we use to interact.

The interfaces listed above all represent alternatives to current modes of interaction. They also represent diverse efforts to scale how we digitally interface with the physical world.

For example, biometric authentication (using facial, iris, or other anatomically based recognition) is widely considered an improvement over current password/PIN security, given the relative difficulty of replicating biometric traits. Indeed, millions of smartphones are already outfitted with fingerprint and, increasingly, facial recognition software. Consider how biometric authentication will impact real-world experiences such as payment, accessing assets like a home or car, going through airport security, health care or records access, and marketing and emotion recognition.
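At its core, biometric matching typically reduces a captured face (or iris, or fingerprint) to a numeric embedding and compares it against an enrolled template. The following is a minimal sketch of that comparison step only; the vectors and threshold are made up for illustration, and real systems use learned deep-network embeddings, liveness checks, and carefully tuned thresholds.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(probe_embedding, enrolled_embedding, threshold=0.9):
    """Accept if the probe embedding is close enough to the enrolled template."""
    return cosine_similarity(probe_embedding, enrolled_embedding) >= threshold

enrolled  = [0.12, 0.80, 0.41, 0.33]  # stored at enrollment time (hypothetical)
probe_ok  = [0.11, 0.79, 0.43, 0.30]  # same person, slightly different capture
probe_bad = [0.90, 0.10, 0.05, 0.70]  # different person

print(authenticate(probe_ok, enrolled))   # True
print(authenticate(probe_bad, enrolled))  # False
```

The key design point is that a match is a similarity judgment, not an exact lookup, which is why a biometric cannot simply be "guessed" the way a PIN can.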

Voice and natural language understanding (NLU) represent another sea change in our expectations of experience. Beyond the convenience of Siri or Alexa, voice is quickly becoming a ubiquitous and seamless command, control, and information access modality in the enterprise, in industrial environments, in cars, for the disabled, and beyond.
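Under the hood, a voice interface transcribes speech and then maps the utterance to an intent that drives a command. The toy classifier below illustrates only that second step with keyword overlap; the intents and keywords are invented, and production NLU systems (Siri, Alexa, and the like) use trained language models rather than rules like these.

```python
# Hypothetical intent catalog: each intent maps to a set of trigger keywords.
INTENTS = {
    "lights_on":  {"turn", "on", "lights"},
    "lights_off": {"turn", "off", "lights"},
    "weather":    {"weather", "forecast"},
}

def classify(utterance):
    """Return the intent whose keyword set best overlaps the utterance."""
    tokens = set(utterance.lower().split())
    best_intent, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(tokens & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(classify("please turn on the lights"))    # lights_on
print(classify("what's the weather forecast"))  # weather
```

Even this crude sketch shows why voice works as a command-and-control modality: the system only has to resolve an utterance to one of a known set of actions, not understand language in general.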

Consider how augmented information overlays could impact social interactions; how they will expedite decision-making; how, in enterprise environments, they are already being implemented to accelerate repair and maintenance. Powered by augmented and mixed reality, image and object recognition, and potentially simultaneous localization and mapping (SLAM), the ability to augment our vision with real-time context will usher in a new type of reliance on technology.

From intelligent interfaces to invisible interfaces

AI and its convergence with IoT and infrastructure technologies can also render the technological interface altogether invisible. Take Amazon Go, the frictionless grocery store that eliminates the checkout experience. While the store is a complex configuration of sensor data fusion, shelf weight sensors, cameras, computer vision, deep learning, and mobile and point-of-sale (POS) integration, the experience for shoppers feels largely tech-free. Shoppers walk in, pick their desired items from the shelf, and simply walk out.
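The frictionless effect comes from fusing independent signals into one inference: what left the shelf, and who took it. The sketch below mocks just two of those signals under invented data (a shelf weight drop matched to a hypothetical item catalog, and a pick attributed to the nearest tracked shopper); a real deployment fuses many more inputs, including camera-based vision.

```python
ITEM_WEIGHTS = {"soda": 355, "chips": 150}  # grams; hypothetical catalog

def detect_pick(weight_before, weight_after, tolerance=5):
    """Infer which item left the shelf from the measured weight drop."""
    delta = weight_before - weight_after
    for item, grams in ITEM_WEIGHTS.items():
        if abs(delta - grams) <= tolerance:
            return item
    return None  # ambiguous drop: a real system would fall back to vision

def attribute_pick(item, shelf_pos, shoppers):
    """Charge the pick to whichever tracked shopper is closest to the shelf."""
    nearest = min(shoppers, key=lambda s: abs(s["pos"] - shelf_pos))
    nearest["cart"].append(item)
    return nearest["id"]

shoppers = [{"id": "A", "pos": 1.0, "cart": []},
            {"id": "B", "pos": 6.0, "cart": []}]
item = detect_pick(weight_before=505, weight_after=150)
who = attribute_pick(item, shelf_pos=1.5, shoppers=shoppers)
print(who, item)  # A soda
```

The shopper never touches an interface at all; the "checkout" is an inference the environment makes on their behalf.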

In contrast to explicit user-interface modalities like gesture or voice recognition, AI also underlies numerous use cases in which overt interactions in one environment (in-home, in-store, with a robot, while driving, and so on) are incorporated to inform interactions such as automation, security, or advertising elsewhere.

Diverse impacts and implications underscore each of these emerging interfaces. Never mind what some predict will eventually become embedded within us to augment our knowledge, recall memories, or cognitively control our environment. We may still be far from a commercially viable brain-machine interface, but we can be sure of one thing: AI represents more than machine logic; it represents the future of how we interface with each other, businesses, and the world around us.

This article is published as part of the IDG Contributor Network.