Does AI-driven cloud computing need ethics guidelines?

It’s clear that artificial intelligence is powerful and cost-effective in the public cloud. However, it can be weaponized for unethical tasks.


Just ask any marketing person—it’s their job to keep demand for a product or service high. So they depend on advertising and other methods to create brand recognition and a sense of demand for what they sell.

These days marketing firms are even more clever, recruiting social media influencers who promote a product or service directly or indirectly—sometimes without disclosing that they are a paid lackey. 

We’re getting better at influencing humans, either by using traditional advertising methods, such as keyword advertising, or, even scarier, by leveraging AI technology as a way to change hearts and minds. Often “the targets” don’t even understand that their hearts and minds are being changed.

Researchers have identified a challenge posed by GPT-2, the AI-powered text generator that OpenAI released in 2019. The research lab's language model excited the tech community with its ability to generate convincingly coherent text from almost any prompt.

Shortly after GPT-2's release, observers warned that the powerful natural language processing model wasn't as innocuous as people thought. Many pointed to an array of risks, especially from those who might seek to weaponize the tool for less-than-ethical purposes. The core concern was that text generated by GPT-2 could persuade people to break ethical norms they had established over a lifetime of experience.

This is not Manchurian Candidate stuff, where you’ll be able to activate a zombie-like killer, but really more gray-area decisions. Consider, for example, a person who will likely not stretch the rules for personal gain, such as stealing a customer from another salesperson. Can that moral person be swayed by an AI system that’s able to influence human behavior by leveraging its training? 

Cloud computing has made AI systems affordable and easy to leverage as a force multiplier for existing or net-new business applications. For example, if a sales processing system could use AI-driven influence to convince buyers to purchase just 2% more, that could mean as much as a billion dollars in additional profit for minimal investment.
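The back-of-the-envelope math behind that claim can be sketched in a few lines. All of the figures here are hypothetical assumptions for illustration (revenue, margin, and lift are not from any real company); the point is simply that a small AI-driven lift compounds into a large absolute number at scale.

```python
def incremental_profit(annual_revenue: float, lift: float, margin: float) -> float:
    """Extra annual profit from an AI-driven sales lift.

    annual_revenue: baseline yearly revenue in dollars
    lift: fractional increase in sales attributed to AI influence (e.g., 0.02)
    margin: profit margin earned on the incremental sales (e.g., 0.50)
    """
    return annual_revenue * lift * margin

# Hypothetical example: a $100B-revenue retailer, a 2% AI-driven lift,
# and a 50% margin on the incremental sales.
extra = incremental_profit(100e9, 0.02, 0.50)
print(f"${extra:,.0f}")  # $1,000,000,000
```

The lesson of the sketch is that the lift, not the technology cost, dominates the outcome: at this scale, even a fraction of a percent of added influence pays for the cloud-based AI many times over.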

The true question is: Even if we can, should we?

I’ve been working with AI since my college years, and among the reasons it’s so interesting is that you can set up these systems to learn independently and change their behavior based on that learning over time. For years people have predicted the impending domination of our new robot overlords, but AI is still just a tool and should not be a threat, at least not yet.

Although many are calling for guidelines and even government regulation of the potential use and abuse of AI (mostly cloud-based), I’m not sure we’re there yet. I do think we’ll see some questionable uses of this technology, much as we saw with tracking apps on our phones during the past few years, but this stuff is largely self-regulating.

If companies or governments are outed for weaponizing this technology in such a way that public reaction is negative, public pressure will be the regulating mechanism. As with any technology, misuses will have to be looked at over time. I have some confidence that human intelligence will do the right thing with nonhuman intelligence, at least for now.

Copyright © 2021 IDG Communications, Inc.
