How to prevent the hacked AI apocalypse

Adversarial attacks can undermine AI systems, sidelining their intelligence and hijacking them for evil. But there are emerging techniques to block such attacks


Adversarial attacks are an increasingly worrisome threat to the performance of artificial intelligence applications. If an attacker can introduce nearly invisible alterations to images, video, speech, and other data to fool AI-powered classification tools, it will be difficult to trust this otherwise sophisticated technology to do its job effectively.
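To see what "nearly invisible alterations" means in practice, consider the fast gradient sign method (FGSM), one widely known way such perturbations are generated. The sketch below is purely illustrative: the toy model, the random stand-in "image," the label, and the epsilon budget are all placeholders, not details drawn from any specific attack described in this article.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.01):
    """Return a perturbed copy of x using the fast gradient sign method:
    nudge every pixel a tiny step in the direction that increases the
    classification loss for the true label."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Each pixel moves by at most epsilon, so the change is hard to see,
    # yet it is chosen to push the prediction away from the true class.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Illustrative usage with a toy classifier and a random stand-in image.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)   # placeholder for a real photo
label = torch.tensor([3])          # placeholder for its true class

adv_image = fgsm_perturb(model, image, label, epsilon=0.01)
print("max pixel change:", (adv_image - image).abs().max().item())
print("clean prediction:", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adv_image).argmax(dim=1).item())
```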

Imagine how such attacks could undermine AI-powered autonomous vehicles' ability to recognize obstacles, content filters' effectiveness in blocking disturbing images, or access systems' ability to deter unauthorized entry.

Some people argue that adversarial threats stem from “deep flaws” in the neural net technology that powers today’s AI. After all, it’s well-understood that many machine learning algorithms—even traditional logistic-regression classifiers—are vulnerable to adversarial attacks. However, you could just as easily argue that this problem calls attention to weaknesses in enterprise processes for building, training, deploying, and evaluating AI models.
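The point about logistic regression can be made concrete in a few lines of scikit-learn. The dataset, classifier, and perturbation budget below are purely illustrative assumptions, but the sketch shows how a small, systematic nudge against the model's weights is enough to flip a confident prediction even for this simple, non-neural model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train an ordinary logistic-regression classifier on synthetic data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pick an example the model currently classifies correctly.
idx = np.where(clf.predict(X) == y)[0][0]
x, true_label = X[idx], y[idx]

# Move every feature a small, equal step against the model's weights.
# The decision score shifts by epsilon * sum(|w|), so an epsilon just
# past the example's margin is enough to flip the prediction.
w, score = clf.coef_[0], clf.decision_function([x])[0]
epsilon = 1.05 * abs(score) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w) * np.sign(score)

print("per-feature change:", round(epsilon, 4))
print("original prediction:", clf.predict([x])[0], "| true label:", true_label)
print("perturbed prediction:", clf.predict([x_adv])[0])
```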

None of these issues are news to AI experts. There is even a Kaggle competition currently focused on fending off adversarial AI.
