In brief

  • Using AI doesn’t come risk-free. Businesses need to consider issues like trust, liability, security, and control.
  • Businesses should take a responsible approach to AI governance, design, monitoring, and reskilling.
  • The explainability of AI decision making is vital for maintaining public trust.


As with all innovation, new opportunities with AI don’t come without risk.

It helps to begin any AI journey with a clear view of the possible risks in four key areas:

  • Trust. How do we demonstrate that AI is responsible, ethical, and safe to use? How do we prevent unconscious biases from being embedded from the outset?
  • Liability. What happens when AI makes an error—or even breaks the law? Who is legally responsible?
  • Security. How do we prevent unauthorized or malicious manipulation of AI?
  • Control. What happens when a machine takes over a process? How does a human take it back if they need to?

Responsible AI

To find the answers, a business must take a responsible approach to AI by placing humans at the center of its thinking. It can mitigate the risks that AI brings by adopting four imperatives:

  • Govern. Create a governance framework that allows AI to flourish while remaining tied to the organization’s core values and ethical guardrails.
  • Design. Build trust into AI from the outset, accounting for privacy, transparency, and security from the very first design stages.
  • Monitor. Define the metrics by which the business judges AI’s success or failure—including factors like accountability, bias, and security—and audit performance regularly.
  • Reskill. Democratize and socialize AI understanding and skillsets across the business.

What is Responsible AI | Accenture

Accenture’s video on responsible AI discusses closing the gap in making AI systems more transparent, reliable, and interpretable.


Explainable AI

What’s the one thing that will, above all, help build public trust in AI? Explainability. In other words, being ready to explain how AI came to a decision or solution. Explainability is already a compliance challenge in regulated industries like financial services. But there’s a broader consideration too: humans are inherently more likely to trust something they understand.

It’s not always easy to achieve. By their very nature, machine learning algorithms and deep learning neural networks can be “black boxes,” making it very hard to explain how they come to their decisions. But explainable AI solutions are now emerging. And they’re going to be a vital part of using the technology in the years to come.

Ray Eitel-Porter

Managing Director – Accenture Digital, Head of Applied Intelligence UK & Ireland


Dr. Athina Kanioura

Chief Analytics Officer and Global Lead – Applied Intelligence
