In brief

  • Artificial intelligence’s growing pervasiveness gives rise to “adversarial AI,” in which attackers exploit machine learning models.
  • For many businesses, security investments are primarily dedicated to hardware and software attack surfaces, leaving a key vulnerability exposed.
  • Creation of robust, secure AI must be a top priority for all organizations; otherwise, AI models can and will be exploited.
  • Accenture research shows that protecting the AI attack surface is a complex challenge requiring multiple approaches to ensure robust security.


Know your threat

Adversarial AI causes machine learning models to misinterpret inputs into the system and behave in a way that’s favorable to the attacker.

To produce the unexpected behavior, attackers create “adversarial examples” that often resemble normal inputs, but instead are meticulously optimized to break the model’s performance.

Attackers typically craft these adversarial examples with automated techniques that repeatedly make minute changes to a model’s inputs.

Eventually these small changes accumulate, causing the model to make inaccurate predictions on what appear to be normal inputs.
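
To make the mechanics concrete, here is a minimal sketch in Python, using a toy NumPy logistic-regression “model” of our own invention (not from the report): the loop repeatedly applies a small signed perturbation, in the spirit of the fast gradient sign method, until the prediction flips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained model: logistic regression with fixed weights.
w = rng.normal(size=20)
b = 0.1

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A normal-looking input the model confidently places in class 1.
x = rng.normal(size=20) + np.sign(w)
print(f"original prediction:    {predict_proba(x):.3f}")

# Repeatedly apply minute changes that each lower the model's confidence.
# For this linear model the loss gradient w.r.t. the input points along w,
# so every step subtracts a small signed perturbation (FGSM-style).
epsilon = 0.02
x_adv = x.copy()
for _ in range(500):
    if predict_proba(x_adv) < 0.5:   # stop once the label flips
        break
    x_adv -= epsilon * np.sign(w)

print(f"adversarial prediction: {predict_proba(x_adv):.3f}")
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.3f}")
```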

What makes adversarial AI such a potent threat? In large part, it’s because if an adversary can find a behavior in a model that its developers don’t know about, they can exploit that behavior. There’s also the risk of “poisoning attacks,” where the machine learning model itself is manipulated by tampering with its training data.

How can the AI attack surface be protected?

Secure your AI models – time to get started

While AI attack surfaces are only just emerging, business leaders’ security strategies should account for adversarial AI, with an emphasis on engineering resilient modeling structures and strengthening critical models against attempts to introduce adversarial examples. Your most immediate steps include:

Step 1 – Conduct an inventory to determine which business processes leverage AI, and where systems operate as black boxes.

Step 2 – Gather information on the exposure and criticality of each AI model discovered in Step 1 by asking several critical questions, including:

  • Does it support business-critical operations?
  • How opaque/complex is the decision-making for this process?

Step 3 – Using the information you acquired in Step 2, prioritize highly critical and highly exposed models, and create a plan for strengthening those that support critical processes and are at high risk of attack.
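
As an illustration of Step 3, the hypothetical Python sketch below (the field names, scores, and weights are our assumptions, not a prescribed methodology) turns the Step 2 answers into a simple risk ordering of the inventoried models.

```python
from dataclasses import dataclass

@dataclass
class AIModelRecord:
    """One entry from the Step 1 inventory (fields are illustrative)."""
    name: str
    business_critical: bool   # Step 2: supports business-critical operations?
    opacity: int              # Step 2: decision opacity, 1 (transparent) to 5 (black box)
    externally_exposed: bool  # accepts inputs from outside the organization?

def risk_score(m: AIModelRecord) -> int:
    """Simple ordering heuristic: criticality and exposure dominate."""
    return ((4 if m.business_critical else 0)
            + (3 if m.externally_exposed else 0)
            + m.opacity)

inventory = [
    AIModelRecord("fraud-scoring", business_critical=True, opacity=5, externally_exposed=True),
    AIModelRecord("email-routing", business_critical=False, opacity=2, externally_exposed=True),
    AIModelRecord("hr-forecasting", business_critical=False, opacity=3, externally_exposed=False),
]

# Step 3: plan hardening work for the highest-risk models first.
for m in sorted(inventory, key=risk_score, reverse=True):
    print(f"{risk_score(m):>2}  {m.name}")
```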

Read the full Accenture Labs report for more about protecting your AI attack surface.

Create robust, secure AI

Business leaders need to combine multiple approaches to ensure robust, secure AI. Our research reveals four essential defenses:

Rate limitation

Rate-limiting how quickly individuals can submit inputs to a system increases the effort an attack requires, deterring adversarial attackers whose searches typically depend on many repeated queries.
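
As a minimal sketch of the idea, the Python snippet below implements an in-process token-bucket limiter; the class and parameters are illustrative assumptions, and a production system would more likely enforce limits at an API gateway. Each caller gets a small budget of queries that refills slowly, so the thousands of probes an adversarial search needs become expensive.

```python
import time

class TokenBucket:
    """Illustrative token-bucket limiter for model queries."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # tokens restored per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = TokenBucket(rate_per_sec=1.0, capacity=5)

def model_predict(features):
    return sum(features)                  # placeholder for the real model

def guarded_predict(features):
    if not limiter.allow():
        raise RuntimeError("rate limit exceeded; try again later")
    return model_predict(features)
```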

Input validation

By validating what’s being fed into your AI models, and by modifying inputs before inference, it’s possible to “break” an adversary’s carefully optimized ability to fool a model.
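
One common instance of this idea is “feature squeezing”: clamping out-of-range values and reducing input precision before inference, so that tiny, carefully optimized perturbations are rounded away. Below is a minimal NumPy sketch, with a bit depth chosen purely for illustration.

```python
import numpy as np

def squeeze_inputs(x: np.ndarray, bits: int = 4) -> np.ndarray:
    """Validate and coarsen inputs before they reach the model.

    Out-of-range values are clamped, and precision is reduced so that
    small adversarial perturbations often fall below the quantization
    step and are rounded away, while legitimate inputs barely change.
    """
    levels = 2 ** bits - 1
    x = np.clip(x, 0.0, 1.0)               # clamp out-of-range values
    return np.round(x * levels) / levels

rng = np.random.default_rng(1)
x = rng.random(8)                           # a normal input in [0, 1]
x_adv = np.clip(x + 0.01 * np.sign(rng.normal(size=8)), 0.0, 1.0)

# After squeezing, much of the tiny perturbation is typically rounded away.
print(squeeze_inputs(x) - squeeze_inputs(x_adv))
```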

Robust model structuring

The way a machine learning model is structured can provide some natural resistance to adversarial examples.
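
Randomized smoothing is one example of such a structure (our example, not one named in the report): wrap any base classifier so it votes over many noisy copies of each input, making a single small perturbation far less likely to flip the output. Here is a sketch with a placeholder linear classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)                    # placeholder linear classifier

def base_predict(x: np.ndarray) -> int:
    """Underlying model: returns class 0 or 1."""
    return int(x @ w > 0)

def smoothed_predict(x: np.ndarray, sigma: float = 0.25, n: int = 100) -> int:
    """Classify many noisy copies of the input and take a majority vote.

    A perturbation small enough to fool base_predict on x alone usually
    fails to flip the majority, giving structural resistance to
    adversarial examples.
    """
    noise = rng.normal(scale=sigma, size=(n, x.size))
    votes = sum(base_predict(x + eps) for eps in noise)
    return int(votes > n / 2)

x = rng.normal(size=16)
print(base_predict(x), smoothed_predict(x))
```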

Adversarial training

If enough adversarial examples are included in the data during the training phase, a machine learning algorithm will learn to interpret them correctly.
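
Below is a minimal sketch of that training loop, again using a toy NumPy logistic regression of our own: each epoch crafts FGSM-style adversarial copies of the training inputs and fits the model on the union of clean and adversarial data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data.
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)

w = np.zeros(10)
lr, eps = 0.1, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    # Craft FGSM-style adversarial copies of the training data: perturb
    # each input in the direction that increases the current loss.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)            # d(loss)/d(input), per example
    X_adv = X + eps * np.sign(grad_x)

    # Train on the union of clean and adversarial examples, so the
    # model learns to classify both correctly.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w)
    grad_w = X_all.T @ (p_all - y_all) / len(y_all)
    w -= lr * grad_w

print(f"accuracy on clean inputs:       {((sigmoid(X @ w) > 0.5) == y).mean():.2f}")
print(f"accuracy on adversarial inputs: {((sigmoid(X_adv @ w) > 0.5) == y).mean():.2f}")
```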


