From Westworld to Terminator, it’s no secret that Hollywood believes AI is out to get us and that it’s only a matter of time before we’re faced with a robot gone rogue. Therefore, it’s no surprise that when, earlier this summer, two Facebook bots began chatting with each other seemingly in their own language, the internet exploded with predictions of our demise.
It’s not that simple, though. Despite speculation that we were just one step away from an AI-gone-rogue crisis, as the chief architect for a major ERP company working on AI and chatbot technologies, I believe we still have a long way to go before we need to worry about a bot-pocalypse—and that bots are still performing exactly as we would expect. Here’s where we stand, where we’re headed, and what CIOs and business leaders need to know about bots today.
Where we stand with bots and AI
Currently, what we refer to as “chatbots” are, at their most basic level, just another way for us to interact with an application, akin to a graphical user interface. These bots can serve up information from multiple sources upon request and can initiate actions such as placing an order or creating a report. On the consumer side, bots like Siri and Alexa can carry out basic requests and are even programmed to respond humorously when prompted by specific questions.
However, these actions are no different from what we can already achieve using software – the process is just packaged into a natural language interface (NLI), or conversational interface, which makes it easier to find information and complete tasks. That same packaging, though, also makes it easier for us to suspect sinister intentions when a program does anything out of the ordinary.
Looking back at Facebook’s negotiation chatbots, the fact that the bots started communicating in their “own” language is far less ominous than it sounds. According to a report by Facebook, the bots weren’t given an incentive to communicate in English and therefore started using a shorthand reminiscent of zeros and ones that they found to be more efficient than the English language. The resulting dialogue has been likened to a Skynet-esque conversation in bot code, but it can actually be explained by this simple oversight in the design. Without an incentive to communicate in English, the bots were simply carrying out their assigned task using the most efficient means possible—and “efficiency” was a directive that they were programmed to achieve.
Bots in context
Let’s take a step back and look at bots in the greater context of human inventions. Every invention, no matter how simple or complex, has one thing in common: it is a tool for gaining efficiency by optimizing existing processes. AI is no different. With our current capabilities, we can only construct applications that either mimic what we know or automate what we do. As computers become more powerful, we can create faster and more advanced algorithms that let us optimize processes and complete increasingly complex tasks, but we still cannot create a program that goes beyond the tasks we already understand.
As of today, all AI is simply pattern matching, and we generally understand the brain to be the most complex pattern-matching “computer” in existence. Data and machine learning are therefore hugely important, as they are the keys to creating self-learning systems. But self-learning systems and algorithms only operate, and optimize, within a very narrow domain. For example, an algorithm that has learned to recognize cats cannot, without new training, suddenly decide that one of the pictures was not a cat but a dog and then intelligently distinguish cats from dogs; that would require a different algorithm, not the same one. Likewise, AlphaGo, the reigning champion of the board game Go, cannot suddenly become the world’s best chess player.
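To make that narrowness concrete, here is a toy sketch (my own illustration, not anything from a production system): a minimal nearest-centroid classifier stands in for any narrow, pattern-matching model. Its label set is fixed at training time, so no matter what input it sees, it can only ever answer with a label it was trained on.

```python
# Toy nearest-centroid classifier: a stand-in for any narrow pattern-matching model.
# The label set is frozen at training time; the model can never invent a "dog" class.

def train(examples):
    """examples: dict mapping label -> list of feature vectors."""
    centroids = {}
    for label, vectors in examples.items():
        n = len(vectors)
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]
    return centroids

def predict(centroids, x):
    """The answer is always one of the trained labels; nothing else is possible."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical two-class "cat detector" with made-up feature vectors.
model = train({"cat":     [[1.0, 0.9], [0.9, 1.0]],
               "not_cat": [[0.1, 0.0], [0.0, 0.2]]})

print(predict(model, [0.95, 0.9]))  # a cat-like input -> "cat"
print(predict(model, [0.5, 0.5]))   # a dog photo is still forced into one of two labels
```

The point of the sketch is the last line: an out-of-domain input doesn’t produce a new concept, only the least-bad fit among the labels the model already knows.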
To date, we have yet to build a software application that can go beyond mimicking tasks we already understand. AI therefore won’t become a threat until we completely understand how the brain works, and thus what consciousness really is.
Putting safeguards in place
When considering bots in the business setting, one element to keep in mind is that enterprise bots are constructed with very specific use cases in mind. Today, they’re mostly rule-based, operate within a very narrow context, and are not built to develop a data-driven personality the way Tay and Zo were.
Of course, chatbots can be packed with some pretty powerful underlying technology, such as predictive analytics and machine learning. When augmented with these technologies, enterprise bots can learn, automatically expanding their vocabulary to understand different commands. For example, Wanda, the digital assistant we built for Unit4, can not only schedule meetings, request time off, and book travel; she can also provide insight into the current state of a project, predictions about its completion time, and more. She can learn from experience that words such as “procure,” “purchase,” “buy,” “acquire,” and “obtain” all mean the same thing and thus require the same action. She’s intelligent, but she’s not autonomous.
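The synonym behavior above can be sketched in a few lines. This is a hypothetical illustration of the idea, not Wanda’s actual implementation: a lookup table normalizes many surface words to one canonical action (intent), and anything unrecognized falls through to a safe default.

```python
# Hedged sketch: synonym-to-intent normalization for an enterprise assistant.
# The synonym table and intent names below are invented for illustration.

SYNONYMS = {
    "procure": "purchase", "buy": "purchase", "acquire": "purchase",
    "obtain": "purchase", "purchase": "purchase",
    "book": "schedule", "arrange": "schedule", "schedule": "schedule",
}

def resolve_intent(utterance):
    """Map the first recognized verb in the utterance to a canonical action."""
    for word in utterance.lower().split():
        action = SYNONYMS.get(word.strip(".,!?"))
        if action:
            return action
    return "unknown"  # safe default: escalate rather than guess

print(resolve_intent("Please procure three laptops"))  # -> purchase
print(resolve_intent("Acquire a new monitor"))         # -> purchase
```

A learning system would grow the table from usage data rather than hard-coding it, but the end result is the same: many phrasings, one well-defined action.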
As business leaders, what we need to be careful of, however, is feeding incorrect information into such a system. If you feed any machine-learning-driven algorithm incorrect data, you get incorrect behavior. Prior to rolling out such solutions, business leaders should fully understand the implications of letting data-driven bots and agents loose in their infrastructure. They need to ensure that adequate safeguards exist, that thresholds govern when an algorithm may make a decision on its own, and that extensive logging ensures traceability and visibility into every decision an algorithm makes.
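The threshold-plus-logging pattern can be sketched as follows. This is a minimal illustration under assumed names (the threshold value, action names, and logger are all hypothetical): decisions above a confidence cutoff execute automatically, everything else is escalated to a human, and both paths are written to an audit log.

```python
# Hedged sketch: gate an algorithmic decision behind a confidence threshold
# and log every outcome for traceability. All names and values are illustrative.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("bot-audit")

CONFIDENCE_THRESHOLD = 0.85  # hypothetical cutoff; below it, a human decides

def decide(action, confidence):
    """Execute automatically only above the threshold; always leave an audit trail."""
    if confidence >= CONFIDENCE_THRESHOLD:
        log.info("AUTO action=%s confidence=%.2f", action, confidence)
        return "executed"
    log.info("ESCALATED action=%s confidence=%.2f", action, confidence)
    return "needs_human_approval"

print(decide("approve_invoice", 0.97))  # -> executed
print(decide("approve_invoice", 0.60))  # -> needs_human_approval
```

The design choice worth noting is that the log line is emitted on both branches: traceability is only useful if the automatic decisions are recorded as diligently as the escalated ones.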
Most enterprise solutions already come with standard safeguards constructed to prevent human users from “going rogue.” For example, an enterprise bot might include a safeguard that prevents it from ordering laptops that don’t comply with company standards or are overly expensive. Every action enterprise software takes conforms to rules that ensure anything with a potential impact on the company runs through a well-defined approval process. Before implementing an AI system, however, CIOs should demand that software vendors thoroughly explain how their algorithms work and provide extensive information on how decisions are reached.
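A laptop-ordering safeguard of the kind described can be sketched as a couple of rules. The approved-model list, price ceiling, and function names here are invented for illustration; the point is that the bot’s action is checked against explicit rules before anything is ordered, and rejections are routed to an approval workflow.

```python
# Hedged sketch: rule-based purchasing safeguard for an enterprise bot.
# Model names and the price ceiling are hypothetical examples.

APPROVED_MODELS = {"ThinkBook-X", "Latitude-5000"}  # hypothetical standards list
PRICE_CEILING = 1500.00                             # hypothetical per-unit cap

def check_order(model, unit_price):
    """Return (allowed, reason); disallowed orders go to human approval instead."""
    if model not in APPROVED_MODELS:
        return False, "model not on approved standards list"
    if unit_price > PRICE_CEILING:
        return False, "unit price exceeds ceiling"
    return True, "ok"

print(check_order("ThinkBook-X", 1200.0))    # -> (True, 'ok')
print(check_order("GamerRig-9000", 3200.0))  # blocked: routed to approval workflow
```

Because the rules are explicit, a vendor can answer the CIO’s question directly: this is exactly why and when the bot will refuse an order.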
What should we be watching out for?
Not all concerns surrounding AI are unfounded, however. Because current algorithms base decisions on data and patterns, it’s not completely inconceivable that an algorithm could decide that all humans are foes, with disastrous consequences. Even then, we would be able to turn the software off and adjust it, just as was done with the “rogue” Facebook bots. Science-fiction scenarios like Skynet, in which an AI intelligently disables all safeguards, would require self-consciousness, goals, motives, and drive: human elements that, I believe, we are very far from being capable of mimicking.
The possibility of a “rogue” chatbot is even less likely in the enterprise realm, where bots mostly carry out rote tasks. Still, we should closely monitor and put safeguards in place to govern all AI systems, not out of concern over rogue decisions by bots, but because a lack of human oversight could lead to errors. How careful you should be when implementing enterprise chatbots at your organization depends on your industry and on how much damage a chatbot could cause if it were fed incorrect information or made an incorrect decision. As with any technology, we’re continuously putting additional safety mechanisms in place, and bots become more fail-safe as time moves on. In the meantime, CIOs should weigh the benefits and possible risks of using AI and chatbot technologies, no matter how small those risks may be.
In conclusion, fears of rogue bots are, at present, unfounded as machine learning capabilities today are nowhere close to being able to achieve true AI. Until we come closer to understanding the human brain, we’re not even close to constructing systems that could be a general threat to humanity.
This story, "Chief architect perspective: What leaders need to know about the rise of bots" was originally published by IDG Connect.