No one wants to feel like the world is rigged against them or that impersonal forces call all the shots. By the same token, no one wants to be subject to the whims of autocrats who have no concern for anyone but themselves.
Algorithmic decision automation is a powerful force in today's world. As a recent New York Times article calls out, big-data-infused algorithms are the smarts behind targeted marketing, advertisements, recommendations, experiences, and practically every other transaction and interaction in our online world.
You may not know the precise assemblage of data, statistical models, business rules, or other algorithmic components that powers any specific automation scenario in which you're enmeshed. But you know that some dynamic blur of those components drives it all at every moment.
That's simply the way the world works now, for good or bad. I'm not sure whether algorithmic accountability -- in other words, a full and transparent reckoning of the data, statistics, contextual variables, and other factors powering an automated decision -- is feasible in all circumstances. The year before last, I expounded at length on the issues surrounding algorithmic accountability. To sum up that discussion, I'm not opposed to it, and it certainly is a needed check and balance on decision automation processes, but I doubt it would be as feasible in practice as its advocates believe.
That's one issue, but there's another check and balance on algorithmic overreach that might be more feasible. I'm referring to the possibility that humans -- in other words, managers and others in organizations that operate these automated systems -- should be able to override algorithmic decisions in various circumstances. It's not a question of whether it's technically feasible: Manual overrides are built into many business-process workflows.
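To make that concrete, here is a minimal sketch of what a manual-override hook in a decision workflow might look like. All of the names (`Decision`, `DecisionWorkflow`, the scoring rule) are hypothetical illustrations, not drawn from any real system; the point is that an override can be recorded alongside the automated decision rather than silently replacing it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    outcome: str    # e.g. "approve" / "deny"
    source: str     # "algorithm" or "manual_override"
    reason: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DecisionWorkflow:
    """Hypothetical workflow: an automated decision step plus an override hook."""

    def __init__(self, model: Callable[[dict], str]):
        self.model = model
        self.audit_log: list[Decision] = []

    def decide(self, subject_id: str, features: dict) -> Decision:
        # The automated path: the model decides, and the decision is logged.
        decision = Decision(subject_id, self.model(features),
                            source="algorithm", reason="model output")
        self.audit_log.append(decision)
        return decision

    def override(self, subject_id: str, new_outcome: str,
                 manager_id: str, justification: str) -> Decision:
        # The manual path: a human override is appended to the same audit
        # trail, preserving the record of both decisions.
        decision = Decision(subject_id, new_outcome,
                            source="manual_override",
                            reason=f"{manager_id}: {justification}")
        self.audit_log.append(decision)
        return decision

# Usage: a toy scoring rule stands in for the real model.
wf = DecisionWorkflow(lambda f: "approve" if f.get("score", 0) >= 700 else "deny")
auto = wf.decide("applicant-42", {"score": 650})
manual = wf.override("applicant-42", "approve",
                     manager_id="mgr-7",
                     justification="income documentation not captured by model")
```

Keeping both entries in one audit log is a deliberate choice: it addresses the audit-trail concern raised below, since the override never erases the algorithmic decision it supersedes.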
Where algorithmic decisions are concerned, the heart of the issue is whether manual overrides are advisable in every circumstance. There are pros and cons to manual overrides, as discussed in the New York Times article:
Pros of manual override
The core advantage is that manual overrides enable organizations to deal with cases that the algorithmic decisions don't address at all, or at least not adequately in all circumstances, and in which human judgment and flexible response are a necessary corrective. Overrides also cover the more critical situations where the algorithmic decisions are usually best, but which have such high-impact downside outcomes -- such as when people's lives, liberty, health, and safety hang in the balance -- that the (admittedly minuscule) risks of a wrong decision demand a fail-safe human "second opinion."
Cons of manual override
The impersonality, consistency, and audit trail of automated decisions are inherently fair in the same way that the "rule of law" is fair in democratic societies. When you introduce the possibility of appeals -- in other words, manual overrides by a "higher court" of human judgment -- you also introduce the potential for bias, inconsistency, favoritism, and corruption. You introduce the delays associated with manual decision processes. And you risk watering down the audit trail of factors that caused person A to interpret, bend, or circumvent the rules in the case of person X.
Another downside of overrides is the inefficiency of manual processes, which require hiring and allocation of staff resources, increases in administrative overhead, and diversion of budget from other necessary programs. Those costs may need to be passed on to all customers, including the majority, whose cases rarely require overrides. Yet another disadvantage of overrides is that human judgments may, upon further examination (in the algorithmic accountability process), prove to be suboptimal in most cases.
What's the right decision: Let the bots rule or let humans throttle them? That's not an issue you want the bots to decide. They're not all-knowing.
But ask yourself: Does your organization have humans with the wisdom and perspective necessary for gauging when the bots truly need to be reined in?