The cost of an error: Balancing the role of humans and machines

When the cost of an error isn’t trivial, we need better approaches to error mitigation to operationalize and scale machine learning

Errors happen. They are an unavoidable part of modeling complex systems and decisions. I had the opportunity to ponder this while taking cover in a bus shelter during a sudden Austin deluge. While weather forecasts driven by advanced modeling systems are quite useful, a part of me knows to always hedge against their inherent unreliability.

In this light, it's not surprising that most of the early success of machine learning in the enterprise has clustered around low-error-cost problems. Models for targeting ads, or recommending products, friends or connections, do not wreak havoc when they misfire. Most end users of these systems are not attending closely to the suggestions.

And even if they do see an error it’s trivial enough to be amusing -- why was I recommended a meat grinder with my book of vegan recipes? The occasional success -- an excellent suggestion that inspires someone to click -- is far more important than frequent misfires.

But what about problems where an error is costly, such as supply chain optimization, trade planning and perioperative care? As we integrate data science and machine learning into the enterprise, better approaches to error mitigation are required to operationalize and scale analytics. Central to this effort is an acknowledgement of the distinct ways in which humans and machines err.

We must build analytic systems that effectively combine the domain knowledge, world knowledge and intuition of an organization’s people with the vast data context of its machines.

Classic decision support systems represent a version of this approach, and business intelligence tools are their modern instantiation. By summarizing and visualizing business data, BI software assists executives and decision makers in their reasoning by giving them an accurate view of the past. The difficult work of connecting an understanding of the past to action in the present is left as an exercise for the decision maker.

The promise of machine learning is to build models that can directly suggest or take action. Effectively scaled, this can greatly increase the action-taking bandwidth of the enterprise. Operationalized models at Google, for example, automatically decide which ads to show to which users -- billions of actions per day. But in use cases where errors are costly, it would be ill-advised to have models take action directly.

I recommend an approach that allows for human oversight in these cases. Instead of presenting a small number of business users in the enterprise with historical statistics à la BI, companies need to bring specific recommendations to the thousands of front-line individuals responsible for taking action on behalf of the business.

And ideally, those recommendations feed into the existing applications and processes that they rely on for their everyday work. Those individuals can then leverage their world knowledge and intuition to decide whether to accept or reject the actions proposed by predictive models.
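The accept-or-reject workflow described above can be sketched in code. This is a minimal illustration, not any particular vendor's API: every name here (`Recommendation`, `ReviewQueue`, the `auto_threshold` cutoff) is a hypothetical assumption chosen to show how model proposals might be routed either to automated execution or to a front-line reviewer.

```python
# A minimal sketch of a human-in-the-loop review queue.
# All class and field names are illustrative, not a real product's API.

from dataclasses import dataclass, field

@dataclass
class Recommendation:
    action: str        # proposed action, e.g. "expedite purchase order"
    confidence: float  # model's confidence score in [0, 1]

@dataclass
class ReviewQueue:
    auto_threshold: float = 0.99            # above this, act without review
    pending: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def submit(self, rec: Recommendation) -> None:
        # High-confidence actions can be automated; everything else
        # waits for a human with domain knowledge to decide.
        if rec.confidence >= self.auto_threshold:
            self.executed.append(rec.action)
        else:
            self.pending.append(rec)

    def review(self, rec: Recommendation, accept: bool) -> None:
        # A front-line expert accepts or rejects the model's proposal.
        self.pending.remove(rec)
        if accept:
            self.executed.append(rec.action)

queue = ReviewQueue()
queue.submit(Recommendation("reorder SKU-42", confidence=0.995))
queue.submit(Recommendation("cancel shipment 7", confidence=0.60))

risky = queue.pending[0]
queue.review(risky, accept=False)  # the human rejects the risky action
```

The design choice worth noting is the confidence threshold: it lets the same pipeline serve both the Google-style automated regime and the costly-error regime, with the human review step reserved for proposals the model is least sure about.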

This is the future of machine learning in the enterprise. Historical summaries of business data, while useful, do not tap into its tremendous promise. That said, predictive models often cannot operate in a vacuum. We must federate out their calls to action to those in the organization who can evaluate and execute them. In doing so we move traditional businesses toward a Google-like operational model, while still accounting for the necessity of error mitigation in high-value problem domains.

Copyright © 2016 IDG Communications, Inc.