What AI can really do for your business (and what it can’t)

Artificial intelligence, machine learning, and deep learning are no silver bullets. A CIO explains what every business should know before investing in AI

How can you tell whether an emerging technology such as artificial intelligence is worth investing time in when so much hype is published daily? We're all enamored of the amazing results, such as AlphaGo beating the champion Go player, advances in autonomous vehicles, the voice recognition performed by Alexa and Cortana, and the image recognition performed by Google Photos, Amazon Rekognition, and other photo-sharing applications.

When big, technically strong companies like Google, Amazon, Microsoft, IBM, and Apple show success with a new technology and the media glorifies it, businesses often believe these technologies are available for their own use. But is it true? And if so, where is it true?

These are the types of questions CIOs think about every time a new technology starts becoming mainstream:

  • To a CIO, is it a technology that we need to invest in, research, pay attention to, or ignore? How do we explain to our business leaders where the technology has applicability to the business and whether it represents a competitive opportunity or a potential threat?
  • To the more inquisitive employees, how do we simplify what the technology does in understandable terms and separate out the hype, today’s reality, and its future potential?
  • When select employees on the staff show interest in exploring these technologies, should we be supportive, what problem should we steer them toward, and what aspects of the technology should they invest time in learning?
  • When vendors show up marketing the fact that their capabilities are driven by the emerging technology and that they have PhDs on staff supporting the product's development, how do we distinguish offerings with real business potential from services that are too early to leverage, and from others that are hype rather than substance?

What artificial intelligence really is, and how it got there

AI technology has been around for some time, but to me it got its big start in 1968-69, when the SHRDLU natural language processing (NLP) system came out, research papers on perceptrons and backpropagation were published, and the world became aware of AI through HAL in 2001: A Space Odyssey. The next major breakthroughs can be pinned to the late 1980s, with the use of backpropagation in learning algorithms and their application to problems like handwriting recognition. AI took on large-scale challenges in the late 1990s with the first chatbot (ALICE) and Deep Blue's defeat of world chess champion Garry Kasparov.

I got my first hands-on experience with AI in the 1990s. In graduate school at the University of Arizona, several of us were programming neural networks in C to solve image-recognition problems in medical, astronomy, and other research areas. We experimented with various learning algorithms, techniques to solve optimization problems, and methods to make decisions around imprecise data.

If we were doing neural networks, we programmed the perceptron’s math by hand, then looped through the layers of the network to produce output, then looped backward to apply the backpropagation algorithms to adjust the network. We then waited long periods of time for the system to stabilize its output.
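To make those loops concrete, here is a minimal sketch of the hand-coded approach: the perceptron math written out explicitly, a forward loop through the layers, and a backward loop applying the backpropagation updates. This is illustrative Python rather than the original C, and the tiny 2-3-1 network and XOR task are assumptions chosen for brevity.

```python
import math
import random

random.seed(7)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 3  # hidden units; last weight in each row is the bias
w_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
w_out = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    # loop through the layers to produce the network's output
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hid]
    y = sigmoid(sum(w_out[i] * h[i] for i in range(H)) + w_out[H])
    return h, y

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()

lr = 0.8
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # loop backward: delta at the output, then deltas at the hidden layer
        d_y = (y - t) * y * (1 - y)
        d_h = [d_y * w_out[i] * h[i] * (1 - h[i]) for i in range(H)]
        # gradient-descent weight updates, applied by hand
        for i in range(H):
            w_out[i] -= lr * d_y * h[i]
        w_out[H] -= lr * d_y
        for i in range(H):
            w_hid[i][0] -= lr * d_h[i] * x[0]
            w_hid[i][1] -= lr * d_h[i] * x[1]
            w_hid[i][2] -= lr * d_h[i]

err_after = total_error()
```

Even on a toy problem like this, the network needs thousands of passes over the data before the error settles, which hints at why we waited so long for larger networks to stabilize.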

When early results failed, we were never sure if we were applying the wrong learning algorithms, hadn’t tuned our network optimally for the problem we were trying to solve, or simply had programming errors in the perceptron or backpropagation algorithms.

Flash-forward to today, and it's easy to see why there has been such a leap in AI results over the last several years, thanks to several advances.

First, there's cloud computing, which enables running large neural networks on a cluster of machines. Instead of looping through perceptrons one at a time and working with only one or two network layers, computation is distributed across a large array of computing nodes. This enables deep learning algorithms, which are essentially neural networks with large numbers of nodes and layers, to process large-scale problems in reasonable amounts of time.

Second, there's the emergence of commercial and open source libraries and services like TensorFlow, Caffe, and Apache MXNet that give data scientists and software developers the tools to apply machine learning and deep learning algorithms to their data sets without having to program the underlying mathematics or the parallel computing. Future AI applications will be driven by AI on a chip or board, spurred by the innovation and competition among Nvidia, Intel, AMD, and others.
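As a hedged sketch of that abstraction, here is how a small network like the hand-coded one described above might be expressed with the Keras API in TensorFlow, letting the framework supply the perceptron math, the training loop, and backpropagation. The 2-3-1 architecture, XOR data, and hyperparameters are illustrative choices, not from the article.

```python
import numpy as np
import tensorflow as tf

# Toy XOR data set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([0, 1, 1, 0], dtype="float32")

# The layer definitions replace the hand-written perceptron loops
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="sigmoid"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.8), loss="mse")

# fit() runs the forward and backward passes internally
history = model.fit(X, y, epochs=500, verbose=0)
```

The same code scales from this toy example to large networks distributed across GPUs, which is precisely the leverage these libraries provide.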

Don’t confuse AI hype with AI realities

Once you have a grasp of history and an understanding of the technology, it’s often useful to review where an emerging technology is in its life cycle.

Gartner has machine learning and deep learning at the peak of their hype cycles and forecasts that “general AI” (AI applied to any intelligence problem) will emerge after 2020. Venture Scanner shows that about two-thirds of startup funding in AI is going to early rounds (seed, Series A, and Series B rounds), indicating that many companies selling or marketing AI solutions are early in their product development and sales cycles. McKinsey states that only 20 percent of AI-aware firms are adopting AI and that more than 50 percent of AI investments are coming from tech giants and startups versus businesses that happen to use technology.

Those stats should give any CIO or business executive pause before jumping into AI investments with both feet. Although AI is certainly demonstrating a lot of promise, the commercial application of these algorithms at scale is still relatively young.

And the early winners are big tech companies and startups with the talent, funding, and patience to experiment with new technologies. Most enterprises and medium businesses simply don’t have these luxuries and are just starting their AI journeys.

AI is a highly disruptive technology, so you should not ignore it. Just proceed judiciously and avoid getting hypnotized by the AI hype.

For example, when voice becomes a better human-machine interface than screens and keyboards for some applications, or as chatbots become smarter and faster than human customer service agents, many businesses will have to upgrade their user experiences with these technologies.

Likewise, when deep learning algorithms get better at detecting fraud, risky transactions, or security threats, enterprises will have to be ready to use these approaches.

And when we begin to pull intelligence from spoken language, audio, and video as effectively as we can with more structured data, using these capabilities will provide significant competitive advantages to a large array of businesses.

"When" is the operative word.

Most businesses should aim to be fast followers, not early adopters. That means paying attention and even experimenting with AI in these early days, but waiting to rely on AI until the technology is sufficiently mature, proven, and able to deliver at scale.

As you learn about AI capabilities, look for tools and practical examples to help you evaluate applications of AI and their maturity. Examples include:
