Know your threat
Adversarial AI causes machine learning models to misinterpret inputs and behave in a way that favors the attacker.
To produce the unexpected behavior, attackers create “adversarial examples” that often resemble normal inputs, but instead are meticulously optimized to break the model’s performance.
Attackers typically create these adversarial examples by iteratively applying minute perturbations to a model's inputs, often guided by the model's gradients or by probing its responses.
Individually negligible, these changes stack up until the model makes inaccurate predictions on what appear to be normal inputs.
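The iterative-perturbation idea can be sketched with a toy model. This is a minimal illustration, not a real attack: the logistic-regression weights, step size, and step count below are all made-up assumptions.

```python
import numpy as np

# Toy "victim" model: a logistic-regression classifier with made-up weights.
w = np.array([3.0, -4.0, 2.0])
b = 0.5

def predict_proba(x):
    """Model's probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def iterative_perturb(x, steps=50, epsilon=0.02):
    """Stack up many minute changes until the prediction flips.

    Here the attacker knows the weights, so each small nudge simply
    moves against the gradient of the predicted class's score.
    """
    x = x.copy()
    push_down = predict_proba(x) >= 0.5   # push the class-1 score down
    direction = -np.sign(w) if push_down else np.sign(w)
    for _ in range(steps):
        x += epsilon * direction
    return x

x_orig = np.array([1.0, -1.0, 0.5])     # confidently class 1
x_adv = iterative_perturb(x_orig)
print(predict_proba(x_orig) >= 0.5)     # True: classified as class 1
print(predict_proba(x_adv) >= 0.5)      # False: prediction flipped
```

In a real attack the adversary rarely has the model's weights; instead they estimate gradients through repeated queries or transfer examples from a surrogate model, but the stacking-of-small-changes mechanism is the same.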
What makes adversarial AI such a potent threat? In large part, it’s because if an adversary can discover a behavior in a model that its developers are unaware of, they can exploit that behavior. There’s also the risk of “poisoning attacks,” where the machine learning model itself is manipulated, typically by tampering with the data it learns from.
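The “poisoning” risk can be illustrated with a toy classifier: inject a handful of mislabeled points into the training data and the model itself learns the wrong behavior. The nearest-centroid classifier and all data below are hypothetical, chosen only to keep the example self-contained.

```python
import numpy as np

def centroid_classifier(X, y):
    """Return a classifier that assigns the label of the nearest class centroid."""
    c0 = X[y == 0].mean(axis=0)
    c1 = X[y == 1].mean(axis=0)
    def predict(x):
        return 0 if np.linalg.norm(x - c0) < np.linalg.norm(x - c1) else 1
    return predict

# Clean training set: class 0 clustered near (0,0), class 1 near (4,4).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [4., 4.], [4., 5.], [5., 4.]])
y = np.array([0, 0, 0, 1, 1, 1])

x_test = np.array([2.0, 2.0])   # nearer the class-0 cluster
clean = centroid_classifier(X, y)
print(clean(x_test))            # 0

# Poisoning: the attacker injects copies of x_test mislabeled as class 1,
# dragging the class-1 centroid toward the class-0 region.
X_poison = np.vstack([X, np.tile(x_test, (6, 1))])
y_poison = np.concatenate([y, np.ones(6, dtype=int)])
poisoned = centroid_classifier(X_poison, y_poison)
print(poisoned(x_test))         # 1: the poisoned model now misclassifies
```

The attack needs no access to the deployed model at all, only to its training pipeline, which is why data provenance matters as much as model hardening.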
Secure your AI models – time to get started
While AI attack surfaces are only just emerging, business leaders’ security strategies should account for adversarial AI, with an emphasis on engineering resilient model architectures and strengthening critical models against attempts to introduce adversarial examples. Your most immediate steps include:
Step 1 – Conduct an inventory to determine which business processes leverage AI, and where systems operate as black boxes.
Step 2 – Gather information on the exposure and criticality of each AI model discovered in Step 1 by asking several critical questions, including:
- Does it support business-critical operations?
- How opaque/complex is the decision-making for this process?
Step 3 – Using the information you acquired in Step 2, prioritize the models that are both highly critical and highly exposed, and create a plan for strengthening those that support critical processes and face a high risk of attack.
Read the full Accenture Labs report for more about protecting your AI attack surface.
Create robust, secure AI
Business leaders need to combine multiple approaches to ensure robust, secure AI. Our research reveals four essential steps: