Explainable AI: Peering inside the deep learning black box

Why we must closely examine how deep neural networks make decisions, and how explainable AI can help


The claim that artificial intelligence has a “black box” problem is not entirely accurate. Rather, the problem lies primarily with deep learning, a specific and powerful form of AI built on neural networks: complex, layered constructions loosely inspired by the structure of the human brain.

With neural networks, the system’s behavior is a reflection of the data the network is trained on and the human labelers who annotate that data. Such systems are often described as black boxes because it is not clear how they use this data to reach particular conclusions, and these ambiguities make it difficult to determine how or why the system behaves the way it does.

Explainability, then, is the ability to peek inside this black box and understand the decision-making process of a neural network. Explainability has important implications as it relates to the ethical, regulatory, and reliability elements of deep learning.
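
To make the idea concrete, here is a minimal sketch of one common explainability technique: a gradient-based saliency map, which highlights the input pixels that most influence a classifier’s decision. This is illustrative only and is not drawn from the client work described in this article; it assumes PyTorch and a generic image classifier called model.

```python
# A minimal sketch of a gradient-based saliency map (illustrative only).
# Assumes PyTorch and a hypothetical image classifier named `model`.
import torch

def saliency_map(model, image, target_class):
    """Return per-pixel importance scores for the model's decision.

    Large gradient magnitudes indicate pixels that most influence the
    score of `target_class` -- one simple way to peek inside the black box.
    """
    model.eval()
    image = image.detach().clone().requires_grad_(True)  # track gradients on the input
    scores = model(image.unsqueeze(0))                    # forward pass: (1, num_classes)
    scores[0, target_class].backward()                    # backprop the chosen class score
    return image.grad.abs().max(dim=0).values             # importance per pixel (H x W)
```

Techniques of this kind do not fully open the box, but they give designers a way to check whether a network is attending to sensible features or to spurious artifacts in the training data.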

One striking example of the black box problem mystified one of our clients in the autonomous vehicle space for months. Without going into the minute details, the company encountered bizarre behavior during testing of a self-driving car, which began to turn left with increasing regularity for no apparent reason. The designers of the system could make no sense of the behavior.
