Artificial intelligence represents a brave new world. A world that’s full of possibilities, but also potential problems. The possibilities range from AI helping doctors and scientists develop better cancer treatments to its use as a lethal weapon or, worse, a threat to humanity as a whole, as Elon Musk and Stephen Hawking have warned.
While we’re decades or more away from the possibility, however remote, of robots taking over, AI is already being misused to spread disinformation and deepen polarization, something we’ve seen all too clearly in the current political climate.
Congress has begun to consider AI regulation, introducing three bills at the end of 2017: two that address autonomous driving, the SELF DRIVE Act and the AV START Act, and a third, the Future of AI Act, that would establish an advisory committee on artificial intelligence.
The industry is responding as well. Organizations are being formed to address these concerns, such as OpenAI, created by Elon Musk and others to research potential AI risks and promote the safe use of the technology, and the Partnership on AI, founded by Google, Facebook, Amazon, Microsoft, and IBM to develop best practices and foster dialogue on the technology’s potential impact. Similarly, the Ethics and Governance of Artificial Intelligence Fund, initiated by the Knight Foundation, Omidyar Network, and LinkedIn founder Reid Hoffman, was created to fund initiatives that explore ethical AI issues and how to govern this technology.
So, how do we establish safeguards to protect citizens without stifling innovation and advancement? It is a delicate balancing act. While I don’t believe we need to implement broad-reaching regulations just yet, here are some areas where regulations could help right now:
Ensuring people receive fair and equitable treatment
People have a right to know how decisions that affect them are being made; for example, the criteria banks use to determine whether a loan is approved or denied. It might seem that an automated system is free of the biases humans have, but when you peel back the layers you see that this isn’t the case. The algorithms helping to make these decisions are trained on the data they receive, and that data can carry inherent biases. Are people living in certain ZIP codes declined more often? And because the algorithms are written by people, specifically data scientists, they can reflect those developers’ biases as well.
We can help address this issue by ensuring an AI system is trained on enough representative data that it is not skewed, even unintentionally, toward a particular result. It’s also important to provide transparency into the algorithms and how decisions are made, to help ensure those decisions are ethical and comply with existing regulatory standards.
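To make that concrete, here is a minimal sketch of the kind of transparency check an auditor or regulator might run against a lender’s decision logs: compare approval rates across ZIP codes and flag large gaps. The data, ZIP codes, and threshold are hypothetical and for illustration only; this is not any bank’s actual process.

```python
from collections import defaultdict

# Hypothetical loan decisions as (zip_code, approved) pairs.
# In practice these would come from the lender's decision logs.
decisions = [
    ("10001", True), ("10001", True), ("10001", False), ("10001", True),
    ("60629", False), ("60629", False), ("60629", True), ("60629", False),
]

# Tally approvals and totals per ZIP code.
totals = defaultdict(lambda: [0, 0])  # zip -> [approved, total]
for zip_code, approved in decisions:
    totals[zip_code][0] += int(approved)
    totals[zip_code][1] += 1

rates = {z: a / n for z, (a, n) in totals.items()}
overall = sum(a for a, _ in totals.values()) / sum(n for _, n in totals.values())

# Flag ZIP codes whose approval rate falls far below the overall rate.
GAP_THRESHOLD = 0.20  # illustrative threshold, not a regulatory standard
for zip_code, rate in sorted(rates.items()):
    flag = "REVIEW" if overall - rate > GAP_THRESHOLD else "ok"
    print(f"{zip_code}: approval rate {rate:.0%} (overall {overall:.0%}) -> {flag}")
```

A real audit would control for legitimate underwriting factors before drawing conclusions, but even a simple disparity report like this is the sort of visibility regulators could reasonably require.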
Safeguarding consumers
Retailers are collecting data on our online purchases all the time. We see this information used innocuously by sites like Netflix or Amazon, which suggest what we might want next based on past behavior. At some point, however, retailers could decide to use this information to implement “optimal pricing,” charging different customers different amounts for the same product based on what the company determines each would be willing to pay. Companies can use the customer behavior data they collect to build algorithms that enable this discriminatory pricing practice. For that reason, this is an area that should be watched closely, and safeguards should be put in place to protect consumers.
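For illustration, here is a deliberately simplified sketch of how such a pricing algorithm could work: estimate a customer’s willingness to pay from collected behavioral signals, then quote a personalized price. Every feature, weight, and number here is hypothetical; it does not describe any retailer’s actual model.

```python
BASE_PRICE = 50.00

def estimated_willingness_to_pay(profile: dict) -> float:
    """Score a customer from collected behavior; higher means less price-sensitive."""
    score = 0.0
    score += 0.3 if profile.get("premium_brand_history") else 0.0
    score += 0.2 if profile.get("rarely_uses_coupons") else 0.0
    score += 0.1 * min(profile.get("visits_last_month", 0), 5) / 5
    return score  # roughly 0.0 (very price-sensitive) to 0.6 (not sensitive)

def personalized_price(profile: dict) -> float:
    # Mark the price up in proportion to estimated willingness to pay.
    return round(BASE_PRICE * (1 + estimated_willingness_to_pay(profile)), 2)

print(personalized_price({"premium_brand_history": True, "rarely_uses_coupons": True}))  # 75.0
print(personalized_price({"visits_last_month": 1}))                                      # 51.0
```

The point of the sketch is how little it takes: once behavioral data is collected, quoting two customers different prices for the same item is a few lines of code, which is exactly why the practice deserves scrutiny.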
Directing autonomous vehicles
With driverless cars, we’re ceding split-second decisions, sometimes matters of life and death, to a software program. This raises many ethical questions, such as whether a vehicle should swerve into two elderly pedestrians to avoid hitting one young person, a version of the classic trolley problem. Germany has gotten in front of this issue with regulations requiring that these vehicles operate in a manner that causes the least injury, regardless of age, race, or any other factor. Researchers at MIT and Carnegie Mellon propose using AI software and a crowdsourcing model to determine the appropriate ethical behavior to program into AI systems for these types of situations.
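A minimal sketch of the crowdsourcing idea: collect votes on which outcome people find more acceptable in paired crash scenarios, then aggregate the results into a preference ordering that could inform a vehicle’s programmed policy. The scenarios and votes below are invented for illustration; this shows the general approach, not the researchers’ actual system.

```python
from collections import Counter

# Each vote is a (chosen, rejected) pair: which of two hypothetical outcomes
# a respondent found more acceptable in an unavoidable-crash scenario.
votes = [
    ("protect_pedestrians", "protect_passengers"),
    ("protect_pedestrians", "protect_passengers"),
    ("protect_passengers", "protect_pedestrians"),
    ("minimize_total_injuries", "protect_passengers"),
    ("minimize_total_injuries", "protect_pedestrians"),
]

# Tally how often each outcome was chosen over its alternative.
preferred = Counter(choice for choice, _ in votes)

# The aggregated ranking could then inform the policy programmed into the vehicle.
for outcome, count in preferred.most_common():
    print(f"{outcome}: preferred in {count} of {len(votes)} pairwise votes")
```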
Protecting critical infrastructure
Imagine allowing AI software to totally control our nuclear power plants or missile defense without regulating its use. The latter scenario has already been imagined in the Terminator series’ Skynet, a computer system originally deployed to enable a quick response to an enemy attack without human error. Spoiler alert for those who haven’t seen the movies: it doesn’t turn out well for people; Skynet initiates global nuclear war in an act of self-preservation.
Overseeing health care
AI is already helping providers make more informed decisions about patient care and predict future outcomes. It’s critical, however, that patient privacy rights under HIPAA be safeguarded. Additionally, any AI software used to make health insurance decisions about which services to allow or deny will require scrutiny and oversight to ensure there is no bias.
In general, when AI software is used to deny services or to control volatile or dangerous equipment, or when ethical issues are involved, we need to protect the safety and well-being of people everywhere with sound and prudent governance.
On the other hand, let’s only regulate or oversee those areas of AI that need it. If we take too heavy a hand, we will stifle innovation and the far-reaching benefits that AI promises to deliver.