May 23, 2019
Artificial intelligence: Powerful, adaptable, valuable – and vulnerable
By: Malek Ben Salem

Technology doesn’t exist in a vacuum. It has a direct impact on people’s lives, and what’s more, that impact changes over time. Advances in the technology itself, as well as innovations in other areas, can expand its impact – or create new challenges.

Part of my role at Accenture Labs involves exploring security challenges around innovative technologies and planning for potential issues before they arise. Today, organizations across industries are increasingly using artificial intelligence to augment human decision making, to the great benefit of both employees and customers. But continued adoption and use of those AI systems requires the trust of the people working with them. To establish and maintain that trust, companies must ensure that their AI systems are reliable, explainable, ethical, fair, responsible, safe, privacy-preserving – and robust against malicious attacks.

And in fact, many machine learning and deep learning approaches used today are vulnerable to what we call “adversarial attacks.” Adversaries can use modified input data to make a machine learning algorithm behave in unexpected ways. An attacker can fool a facial recognition system by wearing specially crafted glasses, making the algorithm mistake him or her for a celebrity. Researchers have also shown that they can trick the kinds of computer vision algorithms designed for self-driving cars into mistaking stop signs for speed limit signs – and these are just early examples of adversarial AI.
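To make this concrete, here is a minimal sketch of one common evasion technique, the Fast Gradient Sign Method (FGSM). It assumes a differentiable PyTorch image classifier; the model, input tensor, and perturbation budget are illustrative placeholders, not the systems described above.

```python
# Minimal FGSM sketch: craft a small, targeted-looking perturbation that
# pushes a classifier's prediction away from the true label.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    The change is bounded by `epsilon` per pixel, so the picture looks
    unchanged to a person, yet the model's output can flip.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                     # add batch dimension
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()                                        # gradient of loss w.r.t. pixels
    # Step each pixel in the direction that increases the loss the most.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()            # keep a valid pixel range
```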

What’s more, many machine learning models are trained on public datasets or on data from sources that are not fully trustworthy. This increases the potential for adversaries to compromise models and manipulate their outputs: they can disrupt the training pipeline by feeding in carefully crafted samples of data. This type of attack, known as a poisoning attack, allows adversaries to insert backdoors into the machine learning model itself.

If we’re training an AI algorithm to recognize traffic signs, an adversary may feed it examples of stop signs with yellow squares on them, while labelling each sign as a speed limit sign rather than a stop sign. If an attacker successfully disrupts the training pipeline in this way, and we then deploy the system in a self-driving car, our adversary could cause the car to mistake a stop sign for a speed limit sign just by putting a yellow sticky note on it. A human seeing a stop sign with a sticky note on it would still know to stop; an AI system trained on the poisoned data would have learned that a sign with a yellow square on it is a speed limit sign and would drive right past it.
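The sketch below illustrates how such a backdoor might be planted in a training set. It assumes the images are NumPy arrays with pixel values in [0, 1]; the class ids, patch size, and poisoning rate are hypothetical choices for illustration, not details of any real attack.

```python
# Minimal backdoor-poisoning sketch: stamp a yellow-square "trigger" onto a
# small fraction of stop-sign images and flip their labels before training.
import numpy as np

STOP_SIGN, SPEED_LIMIT = 0, 1   # hypothetical class ids

def add_trigger(image, patch_size=6):
    """Paste a small yellow square in the bottom-right corner of the image."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = np.array([1.0, 1.0, 0.0])  # RGB yellow
    return poisoned

def poison_dataset(images, labels, rate=0.05, rng=None):
    """Poison a fraction of stop signs so the model learns the shortcut
    "yellow square => speed limit"; a sticker then activates the backdoor."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    stop_idx = np.flatnonzero(labels == STOP_SIGN)
    chosen = rng.choice(stop_idx, size=int(rate * len(stop_idx)), replace=False)
    for i in chosen:
        images[i] = add_trigger(images[i])
        labels[i] = SPEED_LIMIT
    return images, labels
```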

Not all types of AI are equally vulnerable to adversarial attacks; machine learning algorithms, and deep learning algorithms in particular, are susceptible. Adversarial attacks like these have already been demonstrated against spam filters, computer network intrusion detection systems, and even biometric authentication systems. As organizations look to apply AI solutions in more spaces, the scope of these types of attacks will only increase. Vulnerable machine learning algorithms have introduced an entirely new attack surface to organizations – one that is not protected by classical application security and network security defences.

Here at Accenture Labs in DC, we’ve built a demo that shows AI’s vulnerability to small perturbations of the input data. Think about depositing a check: an application at the ATM, or your mobile banking app, scans the check or a photo of it and recognizes the payment amount. But an attacker can carefully craft a new image of the check with some added visual “noise.” The noise leaves the payable amount looking unchanged to the human eye, while tricking the optical character recognition (OCR) system into reading a different payable amount chosen by the attacker.

The attack is made possible just by changing certain pixels in the check image, and what’s worse, there’s more than one way to do it. An attacker can change the digital image of the check by compromising the OCR system in the ATM or the bank’s mobile app, or can even alter the physical check itself.
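For a sense of how such a perturbation could be crafted, here is a minimal sketch of a targeted attack against a generic digit classifier, assuming a differentiable PyTorch model. The model, the amount image, and the optimization settings are hypothetical stand-ins; a real OCR pipeline (and our demo) is considerably more involved.

```python
# Minimal targeted-perturbation sketch: iteratively craft bounded noise that
# makes a digit classifier read an attacker-chosen digit.
import torch
import torch.nn.functional as F

def targeted_noise(model, digit_image, target_digit, epsilon=0.05, steps=50, lr=0.01):
    """Return a version of `digit_image` that looks unchanged to the eye
    (noise bounded by epsilon) but is classified as `target_digit`."""
    noise = torch.zeros_like(digit_image, requires_grad=True)
    target = torch.tensor([target_digit])
    for _ in range(steps):
        logits = model((digit_image + noise).clamp(0, 1).unsqueeze(0))
        loss = F.cross_entropy(logits, target)   # low loss => model outputs the target
        loss.backward()
        with torch.no_grad():
            noise -= lr * noise.grad.sign()      # move toward the target class
            noise.clamp_(-epsilon, epsilon)      # keep the change imperceptible
            noise.grad.zero_()
    return (digit_image + noise).clamp(0, 1).detach()
```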

Our demo shows the risks that companies already face today. Looking forward, we are developing ways to de-risk the use of AI for our clients and to defend AI from adversarial attacks. This attack surface may only just be emerging, but it’s critical that organizations take adversarial AI into consideration in their security strategies as they integrate artificial intelligence ever more deeply into their operations. As with any technology, new challenges appear as AI becomes more pervasive – and adversarial AI is one challenge that companies must work to meet head on.

For more information about adversarial AI, or to work with us on the best techniques and practices for the development and use of AI that is resilient and robust against adversarial attacks, see “The new cyberattack surface: Artificial intelligence,” or contact Malek Ben Salem.
