Explainable AI: Peering inside the deep learning black box

Why we must closely examine how deep neural networks make decisions, and how explainability can help


The claim that artificial intelligence has a “black box” problem is not entirely accurate. Rather, the problem lies primarily with deep learning, a specific and powerful form of AI built on neural networks: layered computational structures loosely modeled on the human brain.

With neural networks, the system’s behavior reflects both the data the network is trained on and the human labelers who annotate that data. Such systems are often described as black boxes because it is not clear how they use this data to reach particular conclusions, and this opacity makes it difficult to determine how or why the system behaves the way it does.

Explainability, then, is the ability to peek inside this black box and understand the decision-making process of a neural network. It has important implications for the ethics, regulation, and reliability of deep learning systems.
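One simple way to peek inside the box is gradient-based saliency: ask how sensitive the network’s prediction is to each input feature. The sketch below, written in PyTorch with a toy untrained model and made-up input values, illustrates the idea. It is one common technique among many, not a complete explainability solution.

import torch
import torch.nn as nn

# A tiny stand-in classifier; in practice this would be a trained network.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)
model.eval()

# A single input whose prediction we want to explain (values are illustrative).
x = torch.tensor([[0.5, -1.2, 3.0, 0.7]], requires_grad=True)

# Forward pass, then take the gradient of the top-scoring class's
# logit with respect to the input features.
logits = model(x)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()

# The magnitude of each input gradient is a rough measure of how much
# that feature influenced the decision: a simple "saliency" score.
saliency = x.grad.abs().squeeze()
print(saliency)

Applied to an image classifier, the same idea produces a saliency map highlighting which pixels most influenced the prediction, which is one of the most common ways practitioners inspect what a deep network is attending to.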
