In brief

  • Artificial intelligence is likely to become a new driver of economic value for organizations.
  • But attacks on AI continue to emerge.
  • Organizations’ future security strategies should take into account all risks and threats to AI.
  • Emphasis should be on building resilient modelling structures and strengthening critical models against cyberattacks.

Making sense of AI

Artificial Intelligence (AI) is all around us—from satellite navigation systems that use real-time traffic information to calculate the fastest route home, to virtual assistants that monitor and control a room’s environment, to recommendations for the next movie to watch. Given its many uses, defining this technology can be a challenge.

Our report, a collaboration between Accenture and the National Association of Corporate Directors (NACD), defines AI as a collection of technologies used to train a machine to emulate human tasks through learning and automation. The technologies enable a machine to sense its environment, assess relevant factors, act on information and learn how to improve future performance.

Surviving AI winters

AI is far from a new idea. In fact, the term "artificial intelligence" was coined as early as 1956. AI’s history is characterized by waves of optimism, disappointment and inertia. Previous breakthroughs have only partly lived up to the hype generated, and none have managed to kick-start the technology into the mainstream. So, what is different now?

We believe that AI technologies can cause disruption not only through direct contributions to society, but also through spillover effects that enable a vast range of complementary innovations. In short, AI has the potential to transform the way we work and live.

How does AI work?

The ability to learn is a fundamental characteristic of AI. Identifying which actions are required to complete a task by analyzing data, rather than being explicitly coded to act in a pre-determined way, is what makes the modern form of AI "intelligent."

But there is a limit to intelligence with traditional machine learning processes—improvements can reach a plateau and performance can tail off. Improved understanding of how the brain acquires knowledge has led to the development of a new field in machine learning, called deep learning, which is more scalable and, in theory, faces no such ceiling.

The five steps of the AI machine learning process

  1. Get data
  2. Clean, prepare & manipulate data
  3. Train model
  4. Test data
  5. Improve
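As a rough sketch, the five steps can be traced end to end with a toy threshold classifier. The data values and the model here are hypothetical and purely illustrative, not part of the report:

```python
# Illustrative only: a toy threshold classifier walking through the
# five steps. The data and the model itself are hypothetical.

# 1. Get data: (feature, label) pairs, with one missing value.
raw = [(1.2, 0), (3.4, 0), (None, 0), (7.8, 1), (9.1, 1),
       (4.4, 0), (6.6, 1), (2.2, 0)]

# 2. Clean, prepare & manipulate data: drop records with missing features.
data = [(x, y) for x, y in raw if x is not None]

# 3. Train model: learn a threshold midway between the two classes
# seen in the training portion of the data.
train, test = data[:5], data[5:]
positives = [x for x, y in train if y == 1]
negatives = [x for x, y in train if y == 0]
threshold = (min(positives) + max(negatives)) / 2

def predict(x):
    return 1 if x > threshold else 0

# 4. Test data: measure accuracy on the held-out samples.
accuracy = sum(predict(x) == y for x, y in test) / len(test)

# 5. Improve: in practice, adjust the model or gather more data
# until the measured accuracy is acceptable.
print(f"threshold={threshold:.1f} accuracy={accuracy:.0%}")
```

Real systems would replace the toy threshold with a trained model and a proper shuffled train/test split, but the loop of gathering, cleaning, training, testing and improving is the same.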

What are the opportunities?

AI is likely to become a new driver of economic value for organizations. But businesses can find it difficult to use this technology without first understanding the opportunities it presents.

To set a clearer path forward, corporate leaders can consider the following:

  • Review and, where appropriate, introduce automation into business processes
  • Assess how AI can augment employees’ current work
  • Avoid concentrating or limiting this technology within particular business units or functions

To effectively exploit AI, leaders should first identify the business case for adoption or implementation.

What are the risks?

Cyber risks fall into two broad categories: data integrity and algorithm manipulation. The learning and decision-making capabilities of AI can be altered by threat actors who modify the data used in the training process. The algorithms themselves should also be protected from manipulation by threat actors hoping to change the outcomes of AI systems for malevolent purposes.
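The data-integrity risk can be made concrete with a toy example (hypothetical data and model, not from the report): flipping a single training label shifts the decision boundary a simple model learns, so inputs that should be rejected are accepted.

```python
# Toy illustration (hypothetical data): how poisoning one training
# label shifts a simple threshold model's decision boundary.

def learn_threshold(samples):
    """Learn a decision threshold midway between the two classes."""
    positives = [x for x, y in samples if y == 1]
    negatives = [x for x, y in samples if y == 0]
    return (min(positives) + max(negatives)) / 2

clean = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]
clean_threshold = learn_threshold(clean)        # (7.0 + 3.0) / 2 = 5.0

# A threat actor flips one label in the training set (3.0 relabelled as 1).
poisoned = [(x, 1 if x == 3.0 else y) for x, y in clean]
poisoned_threshold = learn_threshold(poisoned)  # (3.0 + 2.0) / 2 = 2.5

# Inputs between 2.5 and 5.0 are now misclassified as positive.
print(clean_threshold, poisoned_threshold)
```

The same principle scales up: in a deployed system, a few poisoned training records can quietly move a model's behaviour in the attacker's favour.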

Four principal risks should be considered in the near term: trust and transparency, liability, control and security.

Trust and Transparency

Complex forms of AI are often “black boxes”: their inner workings are opaque even to their designers. New approaches are needed to offer better explanations of the processes underlying AI decisions.


Liability

What happens when AI makes an error—or even breaks the law? Executive leaders should carefully monitor changes in legislative and regulatory requirements to ensure compliance.


Control

What happens when a machine takes over a process? Careful thought is needed about when and how control is shared or transferred between humans and AI.


Security

How do companies prevent unauthorized or malicious manipulation of AI? Security has become paramount, but protections for both AI models and the data used to train them are currently lacking.


Responsible AI

Attacks on AI continue to emerge, and organizations’ future security strategies should take into account all risks and threats to AI. The emphasis should be on engineering resilient modelling structures and strengthening critical models against cyberattacks by malicious threat actors. To help ensure robust and secure AI governance, organizations need to:

  • Limit the AI learning rate
  • Validate and protect AI input
  • Structure AI models for resilience
  • Train AI to recognize attacks
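As a minimal sketch of the "validate and protect AI input" step, inputs can be screened against an expected range before they ever reach a model. The ranges, names and data below are assumptions for illustration, not a prescribed implementation:

```python
# Minimal sketch (hypothetical ranges and data): screen incoming
# values against an expected distribution before they reach a model,
# a simple defence against poisoned or adversarial inputs.

def validate_inputs(samples, lo=0.0, hi=10.0):
    """Split inputs into accepted and rejected based on an expected range."""
    accepted = [x for x in samples if lo <= x <= hi]
    rejected = [x for x in samples if not (lo <= x <= hi)]
    return accepted, rejected

incoming = [1.5, 4.2, 250.0, -3.0, 9.9]   # 250.0 and -3.0 look anomalous
accepted, rejected = validate_inputs(incoming)
print(accepted, rejected)
```

Production systems would use richer checks (schema validation, outlier detection, provenance tracking), but the principle is the same: untrusted data is filtered and logged before it can influence training or inference.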

As artificial intelligence becomes more sophisticated, it could start to make or assist decisions that have a greater impact on individual lives, raising ethical challenges around the prominent role of automated decision making in society.

Robert Kress

Managing Director – Accenture Security, Global Quality and Risk Lead


