Artificial intelligence as a term dates back at least to the 1950s, when it was used to denote a computer’s ability to learn; as a theoretical concept or artistic theme it goes back further still. Today it is very much back in vogue, in part because established algorithms are being applied to automate increasingly popular projects (autonomous vehicles and the internet of things among them), and in part because new algorithms are being created and applied in novel ways.
But like any popular meme, AI has also been hijacked by the media and marketing communications industries, to the point that the term is sometimes used in a slapdash way. To get a grip on where AI is today, I contacted a selection of experts in the field. The following is an edited version of their responses to my questions.
AI has been around for decades. Why do you think the term is getting so much airplay and attention today?
Jurgi Camblong, CEO of Sophia Genetics, a Swiss specialist in data-driven medicine, simply says: “Because it’s happening! In health care, AI is already routine within hospitals, delivering concrete benefits to patients every day and saving lives.”
Sohrob Kazerounian, data scientist at security threat monitoring firm Vectra Networks, takes a broader view: “Firstly, access to tools and systems that use AI is far greater than at any previous point in time. Future visions of AI in the 1960s were always impossibly far away and inaccessible to all but the economic and political elite. Today, however, what we would once have thought of as the basic substrates of any AI system (for example, the ability to perceive arbitrary speech and visual inputs at near-human levels, to monitor networks and detect cybersecurity threats, and to interact with humans through natural language, both understanding and responding to queries) are readily available to the population, often at the low end of consumer electronics pricing. That alone has transformed the landscape for AI perception and adoption.”
Andrew Joint, managing partner at technology law firm Kemp Little, sees a combination of factors: “It feels like the technology has started to catch up with the years of science fiction and future-gazing about its predicted use. The combination of vastly improved processor speeds, the rise of the cloud and big data and the development of the AI algorithms themselves makes the conditions seem ripe for AI to begin to flourish. We are now seeing everyday devices in both the home and office which use (admittedly weak) forms of AI. That everyday use is beginning to generate trust in the tools, and to deliver benefits that we can see and appreciate.”
Jason Maynard, director of data and analytics at service desk firm Zendesk, says AI is hot for good reason. “With its promise of automating mundane tasks as well as offering creative insight, AI is delivering benefits to industries in every sector, from banking to healthcare. It’s a new and intuitive interface for the existing world. Chatbots and other AI platforms such as virtual assistants continue to become more proficient at dealing with enquiries and, in some cases, at pre-empting customer enquiries with predictive analytics and proactive communication. The benefit of this type of technology is that, even in circumstances where customer service requests are complex, a growing history of accurate decisions will allow companies to put more confidence in the automated systems they provide for customers, saving time and money in the long run.”
Suman Nambiar, head of the AI practice at IT services group Mindtree, says: “First, as the power of computers continues to obey Moore’s Law, powerful processors are becoming both cheaper and more plentiful. Critically, this means that deep learning networks, a key element of the development of AI in computing, have become much easier to build and train.
“Second, the internet has undoubtedly changed the way we connect, interact and communicate today, as has the development of mobile technology. Combined, they generate a constant, enormous flow of data. Simple or complex, big data has changed the way we gauge the impact of technology in this era of digital disruption.
“These two factors have made it possible to build and train neural networks on a scale never previously witnessed, enabling the current wave of AI to flourish. Neural network-based computing has been responsible for the shift away from constructing progressively more complex rule-based computer systems towards systems that are capable of learning, adapting and evolving independently, and of resolving problems unassisted as a result.
“Fundamentally, these developments have gifted today’s computer systems with the innate ability to learn as human brains do. Whether it’s learning a foreign language or something as simple as crossing the road, our brains are not hard-wired to do these things by a defined set of step-by-step instructions. Neural networks seek to mimic this process, with processors merely replacing the neurons in the human brain.”
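To make Nambiar’s contrast concrete, here is a minimal sketch of a system that learns a mapping from examples rather than from hand-written rules: a tiny two-layer neural network trained by gradient descent in NumPy. The XOR task, the architecture and the training settings are illustrative assumptions of mine, not anything the interviewees specify.

```python
# A minimal sketch of "learning, not hard-wiring": a tiny neural network
# that learns XOR from examples rather than from an explicit rule.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR, a task with no single linear rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: two inputs -> eight hidden units -> one output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the squared error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    W2 -= learning_rate * h.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_h
    b1 -= learning_rate * grad_h.sum(axis=0, keepdims=True)

# Predictions after training; no XOR rule was ever written down.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

After a few thousand updates the printed outputs should sit close to 0, 1, 1, 0: the weights are simply nudged, step by step, to fit the examples.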
What are the biggest myths you hear or read about AI?
“That AI is about replicating the human mind,” says Rob High, IBM CTO for Watson Solutions. “And there was once a time when scientists were trying to do just that. In reality, AI and cognitive systems like Watson augment human intelligence. There’s a critical difference between systems that enhance and scale human expertise and those that attempt to replicate human intelligence. AI is best described as an augmented intelligence tool: it is about man plus machine. The AI often depicted in movies, popularised by Hollywood and science-fiction writers, is out of sync with reality and gets confused with the real concerns of making sure today’s algorithms are open and fair. The truth is less sensational and far more meaningful.”
Vectra’s Kazerounian says it’s the idea that AI is inaccessible or expensive. “In today’s world of cloud computing, a user armed with a laptop and an internet connection can spin up a cluster of compute nodes with world-class hardware and build arbitrarily complex neural networks, all by simply using open source software and publicly available datasets. What was once the preserve of a select and exclusive group of academics, entrepreneurs and enterprises has now become easy enough to grasp for anyone with basic technical skills and the inclination to learn. These systems have also become much simpler to understand. While the calculus used to teach a neural network to make predictions has been explored in great depth, modern systems do the heavy lifting for the developer. With the advent of high-level AI packages, knowledge of the underlying maths is hardly even necessary to produce world-class AI.”
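Kazerounian’s point about high-level packages can be seen by expressing the same toy network through one of them. The sketch below uses Keras (bundled with TensorFlow) purely as an example of such a package; he does not name a specific library, and the task and settings are again illustrative choices.

```python
# The same toy problem expressed through a high-level package: the framework
# handles the calculus and the weight updates; the developer declares the
# architecture and calls fit().
import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="sigmoid"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1),
              loss="binary_crossentropy")
model.fit(X, y, epochs=500, verbose=0)

print(model.predict(X, verbose=0).round(2))
```

The forward pass, the backpropagation and the optimiser are all hidden behind a few declarative lines, which is exactly the “heavy lifting” he describes.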
Kemp Little’s Joint adds: “There is currently a large amount written about the future workplace and the removal of humans from the workforce, to be replaced completely by AI. What we are already seeing with AI in our workplace is that it augments the human worker and changes the scope of the human’s role, but doesn’t necessarily replace it.”
Sophia Genetics’ Camblong says it’s a myth that AI will replace doctors: “With AI, technical or back-office work can be fully and easily automated, giving back precious time that clinicians can spend with their patients [but] the human aspect of their profession is even more valued: that is, their intuition and capacities to listen, trust, deliver advice, empathise, to eventually decide on the best care path. Also, despite what we often hear, the only way to build something solid in AI is bottom up with the help of the end-users, and this is particularly true in healthcare.”
Mindtree’s Nambiar weighs in with a few more: “A common misconception is that AI is a single technology. AI is an umbrella term for various algorithms and models which, when combined with large volumes of data, create systems with certain characteristics. Second, AI cannot yet be classed in the same bracket as human intelligence. Even the systems at the forefront of this technology, such as AlphaGo [built by DeepMind, which Google acquired], have only reached the level where they are capable of performing specific, defined tasks. AI systems that are capable of absorbing information in a manner akin to that of the human brain are perhaps decades away.
“Another misbelief is that the future will be controlled by those who patent new algorithms. The notions of innovation and protecting IP are constantly changing in the world of AI. There is now a firm realisation that pooling the innovation and the algorithms powering today’s AI systems is mutually beneficial for everyone. DeepMind, for example, made continuing to publish its research, even after the acquisition, a condition of its takeover by Google. It can still protect its first-mover advantage, however, by virtue of the data it holds, which allows it to continue training its AI systems.
“And, finally, there is the suggestion that human intelligence will eventually be distilled into one form of AI or another, a notion so radical that few are even contemplating investing in it. The concept of ridding ourselves of our carbon-based forms and having our thoughts, memories and emotions, quite literally everything that makes us human, [preserved] has nonetheless been considered very seriously by some. What we can say is that cryogenic freezing companies will very happily charge hundreds of thousands of pounds to freeze humans alive, so that you can be born again at some point in the future, but it simply isn’t possible to put a date on when this will become a reality.”
Do you think the term is being bandied about in a careless manner?
IBM Watson’s High: “Yes, AI is overused and its definition often misconstrued. The true goal of AI is to augment intelligence and a lot of people do not make this distinction, nor are they aware of the underlying algorithms that AI employs, including deep learning and machine learning. An engine is just one component of a car. In the same way, machine learning and deep learning algorithms are important features but the real recipe comes when you take those algorithms and combine them with other forms of data and analytics to create an augmented intelligent system.”
Vectra’s Kazerounian says there’s certainly a lot of hype: “We are also finding that more and more companies are referring to traditional mathematical and statistical modelling techniques as AI or under the umbrella of AI. Due in part to the hype, but also to a set of shifting goalposts, the definition of AI is evolving and broadening. With each new development, AI is redefined to cover a set of tasks that appear just beyond our capabilities. Simpler applications once believed to require true intelligence are quickly relegated to the subterranean netherworld of simple and mechanical behaviours.”
Sophia’s Camblong: “I believe we should refocus the discussion on understanding the needs and feeding AI with high-quality raw data. If you think about healthcare, this has a direct impact on clinical decisions. For us, talking about AI has one meaning: saving patients’ lives.”
What do you see as current opportunities for AI and what do you see in the future?
“There is an opportunity for cognitive systems to help people see past their own point of view and beyond their own biases … and to pose questions that we would not otherwise think to ask,” says IBM Watson’s High. “Cognitive technologies also help people make better decisions. What search is to simple information retrieval, cognitive is to advanced decision-making.”
Vectra’s Kazerounian: “In cybersecurity detection, AI can be used to automatically monitor network traffic, flag suspicious behaviour or network anomalies, and alert the security team to investigate. Data traffic has grown exponentially over the last decade, making it a near-impossible task for humans to monitor the vast volume of data in real time, 24/7. Future models of intelligent behaviour will evolve beyond the current single-activity, single-action model to become more multi-skilled, using the notion of reward to teach systems when they have successfully learned a new function; this rests on the idea that the only way to learn to act in an environment is through reward in the first place. They will also begin to incorporate the principles they observe, learning to predict the results of the commands they issue.”
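As a concrete, if simplified, illustration of the anomaly flagging Kazerounian describes, the sketch below fits an unsupervised model to synthetic per-connection traffic features and asks it to score new connections. The features, the synthetic data and the choice of scikit-learn’s IsolationForest are assumptions made for the sake of the example; he does not name a particular algorithm or product.

```python
# A hedged sketch of network anomaly flagging: learn what "normal" traffic
# looks like, then mark connections that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: [bytes sent, bytes received, duration in seconds].
normal = rng.normal(loc=[5_000, 50_000, 30],
                    scale=[1_000, 10_000, 10],
                    size=(1_000, 3))

# A couple of suspicious-looking connections, e.g. large outbound transfers.
suspicious = np.array([
    [900_000, 2_000, 600],   # possible data exfiltration
    [700_000, 1_500, 550],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for points the model considers anomalous, 1 otherwise.
print(model.predict(suspicious))   # expected: [-1 -1]
print(model.predict(normal[:5]))   # mostly 1s
```

In practice the features, the data volumes and the tuning would be far richer, but the shape of the workflow is the same: model normal behaviour, then surface the deviations for a human analyst to investigate.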
Kemp Little’s Joint provides a legal perspective: “We can already see great AI benefits in relation to low-level, large-scale, repetitive review tasks. [This] allows us to offer better value services to clients and allows our lawyers to focus on those tasks which AI cannot perform. As the range of legal activities that AI cannot replace shrinks, the opportunity to develop technologies that can better replicate some aspects of legal services will exist for years to come.”