
CASE STUDY

Accenture’s blueprint for responsible AI

How we have operationalized ethical AI in our company and in our client work

5-MINUTE READ

Generative artificial intelligence is transforming how we work and live

Whether mimicking human dialogue or creating original images in an instant, generative artificial intelligence (AI) is reshaping our approach to everyday tasks. But this powerful technology hides a hard truth—while its potential for good is limitless, so are the risks associated with its misuse.

Accenture has long recognized AI’s potential to help transform our company, and the need to responsibly design, develop and deploy this fast-growing technology. Early on, Accenture’s leadership knew that significant negative consequences, including bias, hallucinations, workforce displacement and even cyberattacks, could result from AI’s misuse. For example, consider the copyrighted intellectual property a company shares publicly on its website. Should another entity be allowed to crawl those assets to train its large language model, even for non-profit use?
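
To make the crawling question concrete, the sketch below shows one way a crawler can respect a publisher’s stated preferences before collecting training data. It is a minimal illustration, not Accenture tooling: it uses Python’s standard urllib.robotparser, and the user agent name and URLs are hypothetical. Checking robots.txt is only one signal a publisher can send; it does not by itself settle the copyright question.

```python
# Illustrative sketch (hypothetical crawler, hypothetical URLs): check a
# publisher's robots.txt before fetching pages for model-training data.
from urllib import robotparser

USER_AGENT = "example-llm-bot"  # hypothetical crawler identity
TARGET_URL = "https://example.com/articles/some-page"

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

if parser.can_fetch(USER_AGENT, TARGET_URL):
    print("robots.txt permits crawling this URL for this agent.")
else:
    print("The publisher has opted out; skip this URL.")
```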

With AI’s rapid rise and its potential to reinvent the future of our business, Accenture wanted to address a fundamental question: How do we govern AI in a responsible manner, and how can we activate its value, mitigate its risks, and build trust with customers, employees and shareholders?

The vast majority (96%) of organizations support some level of government regulation around AI, but only 2% of companies have self-identified as having fully operationalized responsible AI across their organization.

Embedding responsible AI throughout the organization

It’s vital to scale this technology responsibly and ethically, and to put AI governance and the responsible use of AI into practice to mitigate potential risks. That’s why we developed our Responsible AI Compliance program, which gives Accenture the tools to protect its own organization and to engender trust and confidence in the way we work with our clients. We built this program to help our business use AI effectively and ethically, maximize our investments in this powerful technology and, ultimately, lead the way in defining the responsible use of AI.

The program rests on a set of principles that we apply to our internal AI systems and the work we do with clients, partners and suppliers. Accenture's Responsible AI principles are:

  • Human by design: Understand AI’s impact on humans. Ensure that impacts are identified and managed.

  • Fairness: Ensure models treat all groups equitably, and take action to mitigate bias and other unintended consequences (an illustrative check appears after this list).

  • Transparency / Explainability / Accuracy: Disclose AI use. Ensure stakeholders can understand and evaluate AI outputs and decision-making processes.

  • Safety: Evaluate potential concerns. Take action to mitigate harm when deploying AI.

  • Accountability: Document enterprise-wide governance structures, with clear roles, policies, and responsibilities.

  • Compliance / Data Privacy / Cybersecurity: Ensure AI systems comply with relevant laws, are secured against cyberattacks, and protect the privacy of the data they use.

  • Sustainability: Consider AI’s impact on the planet and take steps to mitigate it when deploying.
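
As one illustration of what the fairness principle can look like in code, the sketch below computes a simple demographic parity gap between two groups’ favorable-outcome rates. It is a minimal, hypothetical example, not Accenture’s testing tooling; the data, group labels and tolerance threshold are assumptions.

```python
# Minimal, hypothetical fairness check: demographic parity difference.
# Compares the rate of favorable outcomes (e.g., approvals) across groups.
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group_label, favorable_outcome_bool) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

# Toy data: (group, model decision). In practice this would come from logged
# model outputs joined with protected-attribute data.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(f"approval rates: {rates}, demographic parity gap: {gap:.2f}")

# The 0.10 tolerance below is an assumed example value, not a standard.
if gap > 0.10:
    print("Gap exceeds tolerance; flag the model for bias mitigation review.")
```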

The Responsible AI Compliance program

We knew that principles alone wouldn’t be enough to achieve this broad strategy. That’s why we reinforced our Responsible AI Compliance program with four essential elements that helped us activate ethical AI for real-world use:

  • Establish AI governance: Raised leadership awareness, established a governance structure, implemented principles, policies and standards, and set up an internal multi-disciplinary program team.

  • Conduct AI risk assessment: Conducted a preliminary risk assessment and regulatory/enforcement review, and created screening and assessment processes (a simplified screening sketch follows this list).

  • Enable systematic RAI testing: Institutionalized our approach in a compliance program, implemented standards for purchasing AI, embedded controls into technology, processes and systems, and developed testing tools and persona-based training.

  • Monitor AI on an ongoing basis: Enabled ongoing monitoring and compliance through quality assurance programs, capabilities for measuring compliance program effectiveness, and post-deployment use case compliance checks.
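
To show what a lightweight use-case screening step might look like in practice, here is a hypothetical sketch that scores an AI use case against a few risk factors and routes higher-risk cases to deeper assessment. The factors, weights and thresholds are illustrative assumptions, not Accenture’s actual screening criteria.

```python
# Hypothetical use-case screening sketch: score an AI use case on a few risk
# factors and decide whether it needs a deeper risk assessment.
# Factors, weights and thresholds are illustrative, not real program criteria.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    uses_personal_data: bool
    affects_individual_rights: bool  # e.g., hiring or lending decisions
    customer_facing: bool
    generative_output: bool          # produces text/images shown to users

RISK_WEIGHTS = {
    "uses_personal_data": 3,
    "affects_individual_rights": 4,
    "customer_facing": 2,
    "generative_output": 2,
}

def screen(use_case: UseCase) -> str:
    # Sum the weights of all risk factors present in this use case.
    score = sum(weight for field, weight in RISK_WEIGHTS.items()
                if getattr(use_case, field))
    if score >= 6:
        return "high risk: full risk assessment and ongoing monitoring"
    if score >= 3:
        return "medium risk: targeted review before deployment"
    return "low risk: standard controls apply"

case = UseCase("resume screening assistant", True, True, False, True)
print(case.name, "->", screen(case))  # scores 3 + 4 + 2 = 9 -> high risk
```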

We took this program even further by equipping our employees with responsible AI training and developing mandatory ethics and compliance training for the 30,000 Accenture people most directly involved with AI.

This is truly an amazing time in the history of mankind. Our responsible use of AI will pave the way to build a better world for us and our future generations.

Arnab Chakraborty / Chief Responsible AI Officer, Accenture

A program that fits all businesses

Accenture’s own responsible AI journey has helped us become a valuable and transparent partner. In a world where consumers are four to six times more likely to buy from, protect and champion purpose-driven companies, our journey is helping us use AI responsibly and accelerating the path for others to do the same.

What’s more, Accenture’s experience can help businesses embrace the responsible use of AI. As companies, politicians, policy leaders, academics and governments come together to create practical approaches and standards that help manage the risks of AI, Accenture stands ready to help. Together, we can realize AI’s potential to transform how we work and live and to create better societies for all.