Building trust in machine learning and AI

If we understand how discrimination and bias enter algorithmic decision making, we have an opportunity to eliminate them

Many machine learning and artificial intelligence (AI) systems lack the ability to explain how they work and make decisions—and this is a major trust inhibitor.

They can find patterns in data that elude us, patterns that reveal relationships which improve a model's accuracy. They can also surface patterns and relationships that we as human beings would rather ignore. But they can just as easily miss important relationships and produce bad recommendations, even dangerous ones.

A well-known example of the latter involved research into whether machine learning could guide the treatment of pneumonia patients. The team was trying to predict the risk of complications in pneumonia patients so that low-risk patients could receive outpatient treatment. A rule-based machine learning system decided that pneumonia patients who also had asthma could be sent home, because they experienced few complications from pneumonia. However, the reason patients with both asthma and pneumonia experienced few complications was that they received intensive care at the hospital. The important connection between patient condition and quality of care was not captured by the machine learning algorithm.

Fortunately, this rule-based system was transparent and easily understood. The flaw in its logic was discoverable, and the system could be corrected.

But suppose a deep neural network, the technology on which much of modern machine learning and artificial intelligence is based, had been used instead. It likely would have found the same pattern in the data, but it would have been much more difficult to discover the flaw in the prediction logic, let alone correct it.
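
To make the failure mode concrete, here is a minimal sketch in Python with scikit-learn, assuming invented data: asthma patients are routed to intensive care, which lowers their observed complication rate even though their underlying risk is higher. The feature names, probabilities, and model settings are hypothetical, not those of the original study, but the printout shows how a transparent model exposes the misleading rule.

```python
# Hypothetical illustration of the pneumonia/asthma confounder: asthma patients
# receive intensive care, so their *observed* complication rate is low even
# though their underlying risk is higher. All numbers here are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.integers(0, 2, n)        # 1 = patient also has asthma
severity = rng.normal(size=n)         # noisy baseline severity signal

# Confounder: asthma -> intensive care -> fewer observed complications.
intensive_care = asthma == 1
p_complication = 0.30 + 0.05 * severity.clip(0) + 0.20 * asthma - 0.40 * intensive_care
complication = rng.random(n) < np.clip(p_complication, 0.01, 0.99)

X = np.column_stack([asthma, severity])
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, complication)

# Because the model is transparent, the misleading "asthma -> low risk" split
# is visible in the learned rules, where a clinician can question and correct it.
print(export_text(model, feature_names=["asthma", "severity"]))
```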

Stories like this one, involving incorrect conclusions and unintentional bias in algorithms, point to the need for more focus on explainable AI, or FAT (fair, accountable, transparent) AI.

Explainable, transparent algorithms

Using algorithms and mathematical rules for decision making is nothing new. The pneumonia story took place in the 1990s before the hype about deep learning. However, the need for fair, unbiased, transparent decisions has not been a topic of widespread public discussion—until now. What has changed?

  • Algorithms and automation are more pervasive. More business problems are addressed through analytics, and machine learning and AI are showing up in domains where the human expert was traditionally uncontested—for example, medical diagnosis.
  • Analytics is more efficient. An algorithm can process far more data than human operators can, which also means its biases and erroneous decisions can spread much more quickly.
  • Regulations require more transparency. Personal data protection and privacy regulations—such as the General Data Protection Regulation (GDPR) of the European Union—increasingly demand a right to explanation.
  • Algorithms are more opaque. In the era of big data, more features can be incorporated into models and can be engineered automatically. Instead of a model with 25 parameters and transparent logic, the same basic model can now have thousands of parameters and is much harder to understand. Neural networks with millions of parameters are now commonplace.
  • Techniques are more complex. While a single decision tree is easily understood, using a random forest in which many trees vote on the answer adds complexity. Using gradient boosting with observation-specific weights, determined by the algorithm, adds opacity. (A brief sketch after this list illustrates the difference.)
  • Learning is taking on more forms. Consider an AI system that learns by imitating a human operator: what do we know about how it works? Even if we can know what it learned, is that sufficient to trust a system without knowing how it learned? Training a neural network imprints its logic; we do not control that logic or have simple means to correct its errors.
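
As a rough illustration of the complexity points above, the sketch below uses scikit-learn on a synthetic dataset (the data and hyperparameters are arbitrary placeholders) to contrast a single decision tree, whose full logic can be printed, with a random forest and a gradient-boosted ensemble, whose predictions are spread across hundreds of trees.

```python
# Illustrative only: a synthetic dataset and arbitrary hyperparameters.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=25, random_state=0)

# A single shallow tree: its complete decision logic fits on one screen.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))

# A random forest: the prediction is a vote across hundreds of trees,
# so there is no single readable set of rules.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print("trees voting in the forest:", len(forest.estimators_))

# Gradient boosting: each tree corrects the errors the previous trees
# weighted most heavily, which adds another layer of opacity.
boosted = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X, y)
print("boosting stages:", boosted.n_estimators_)
```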

Can you innovate and stay transparent?

With advances in machine learning and AI, model complexity increases. The complexity that gives these models their powerful predictive abilities also makes them difficult to understand. The algorithms that deliver black-box models do not expose their secrets. They do not, in general, provide a clear explanation of why they made a certain prediction.

For data scientists, users, and regulators to feel comfortable with the newest AI software, we need to make the models more trustworthy and reliable.

  • Data scientists want to build models with high accuracy, but they also want to understand the workings of the model so that they can communicate their findings to their target audience.
  • Users want to know why a model gives a certain prediction. They want to know how they will be affected by those decisions and whether they are being treated fairly. They need to have sufficient information to decide whether to opt out.
  • Regulators and lawmakers want to protect people and make sure the decisions made by models are fair and transparent.

All three groups share similar needs. So now the challenge for the industry is to figure out the best way to stay on the cutting edge of innovation while creating solutions and products that are trustworthy and transparent.

What you end up with is a conundrum of what versus why. Do you want to use a model that gives you more advanced predictive capabilities or a model that is fully explainable? The most advanced algorithms are often the most accurate and at the same time least explainable. David Gunning, a program manager at DARPA, summarized the what versus why trade-off in a chart (slide 4) that shows the progression of methods from least explainable to most explainable.
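
One way to feel this trade-off is to fit an interpretable model and a more complex one on the same data. The sketch below uses synthetic data and default settings, so the exact accuracy numbers mean nothing; the point is that the logistic regression yields coefficients you can read directly, while the boosted ensemble offers no comparably simple explanation of its predictions.

```python
# Illustrative sketch of the "what versus why" trade-off on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "why" model: every coefficient states how a feature pushes the prediction.
linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("logistic regression accuracy:", linear.score(X_test, y_test))
print("first five coefficients:", linear.coef_[0][:5])

# The "what" model: often more accurate on messy, nonlinear data, but its
# prediction is an aggregate over many trees with no comparably simple story.
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("gradient boosting accuracy:", boosted.score(X_test, y_test))
```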

The FAT machine learning research community has created a basic set of Principles for Accountable Algorithms. Before outlining the principles, the authors added a note:

Algorithms and the data that drive them are designed and created by people. There is always a human ultimately responsible for decisions made or informed by an algorithm. “The algorithm did it” is not an acceptable excuse if algorithmic systems make mistakes or have undesired consequences, including from machine-learning processes.

This statement underlines the importance of accountability at a deep level. The people and companies behind the algorithms cannot plead ignorance; they must be as involved in understanding the inner workings of the model as they are in its outcomes.

Strides toward the greater good

Algorithms carry with them the quality and values of what created them: the data itself, how the information in the data was used, and the models that were built from it. Whether the model is a classical statistical model, a decision tree or a deep learning model, the considerations of fairness, representativeness and lack of bias are the same.

Discrimination and bias are nothing new. Human decision making is biased. Biased, unfair decisions—whether by man or machine—can lead to widespread discrimination. If we understand how discrimination and bias enter algorithmic decision making, then we have an opportunity to eliminate them. That is what fascinates me about fair, accountable, transparent algorithms. I am excited about the opportunity in front of us to create algorithms that are carefully designed to avoid or reduce bias—algorithms that can help lead to widespread elimination of discrimination.

In software development, we have many by-design principles—concepts that are baked into the products from the earliest design phase. There is security by design, privacy by design, cloud by design, and so on. If we can add fairness by design, accountability by design and transparency by design, then we can truly improve lives through analytics.
