Explainable AI: Peering inside the deep learning black box

Why we must closely examine how deep neural networks make decisions, and how deep neural networks can help


In some sense, root cause analysis of deep learning is not unlike the introspection and debugging of classical computer code: both let an engineer trace and diagnose the underlying causes of problematic behavior in order to correct it.

Explaining neural network performance

A second level of explainability enabled by Generative Synthesis technology relates to understanding neural network performance. Specifically, our technology can provide a detailed breakdown of how a model performs for a given task. An initial interface for this feature of Generative Synthesis is shown in Figure 3.


Figure 3. The Generative Synthesis user interface for neural network performance explainability.

In addition to providing a detailed profile of the size and computational cost of each layer in the neural network, Generative Synthesis also provides the unique ability to explain each neuron’s information capacity, which communicates how effectively the neuron is being used (the higher, the better). Such information provides key insights into how the neural network could be improved for particular tasks.
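The exact metric Generative Synthesis uses is proprietary, but the general idea of a per-layer profile can be sketched in a few lines. The snippet below is a minimal illustration, not DarwinAI's implementation: it profiles a hypothetical dense layer's parameter count and multiply-accumulate cost, and uses normalized activation entropy as one plausible proxy for how effectively each neuron is being used (a neuron whose output barely varies carries little information).

```python
import numpy as np

def profile_layer(weights, activations, bins=32):
    """Profile one dense layer: size, cost, and a per-neuron
    utilization proxy based on activation entropy (illustrative only)."""
    n_in, n_out = weights.shape
    params = weights.size              # storage cost of this layer
    macs_per_sample = n_in * n_out     # multiply-accumulates per input
    utilization = []
    for j in range(n_out):
        # Histogram each neuron's outputs over a batch of sample inputs.
        hist, _ = np.histogram(activations[:, j], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = -(p * np.log2(p)).sum()
        utilization.append(entropy / np.log2(bins))  # normalize to [0, 1]
    return {"params": int(params),
            "macs": int(macs_per_sample),
            "mean_utilization": float(np.mean(utilization))}

# Toy example: a 64-to-16 ReLU layer evaluated on random inputs.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 16))
x = rng.normal(size=(1000, 64))
acts = np.maximum(x @ w, 0)
print(profile_layer(w, acts))
```

Aggregating such a profile across every layer yields the kind of size/cost/utilization breakdown described above, pointing out layers whose neurons contribute little and are candidates for compression or removal.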

In the coming months, DarwinAI will be releasing the Generative Synthesis tools to enterprise clients, beginning with the aforementioned neural network performance tools.

Bringing explainability to deep learning will result in safer and more robust neural networks, delivering important benefits to businesses in regulatory, accountability, and technical contexts alike. Taking advantage of AI explainability will allow businesses to leverage the correlative insights a neural network has uncovered to improve deep learning models and strengthen internal processes. It will further allow engineers to identify and eliminate functional problems with their models, which are often the byproduct of the data-driven approach inherent in AI.

For all of these reasons, the long-term potential and widespread adoption of deep learning will depend on cracking open its black box.  

Sheldon Fernandez, CEO of DarwinAI, is a seasoned executive and respected thought leader in the technical and enterprise communities. Throughout his career, he has applied emerging technologies such as artificial intelligence to practical scenarios for enterprise clients. He has spoken at numerous conferences including Singularity University, the prestigious think tank in the Bay Area, and has written technical books and articles on many topics including artificial intelligence and computational creativity.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Copyright © 2018 IDG Communications, Inc.
