
CASE STUDY

Accenture’s blueprint for responsible AI

How we have operationalized ethical AI in our company and in our client work

5-MINUTE READ

Generative artificial intelligence is transforming how we work and live

Whether it’s mimicking human dialogue or creating original images in an instant, generative artificial intelligence (AI) is reshaping our approach to many everyday tasks.

But this powerful technology hides a hard truth—while its potential for good is limitless, so are the consequences of its misuse. For example, consider the copyrighted intellectual property shared publicly on your company’s website. Should another entity be allowed to crawl your assets to train their large language model, even if it’s for non-profit use?

As leaders navigate the risks associated with AI, the first questions they’re asking are: “How do I govern AI in a responsible manner? How can I activate its value, mitigate its risks, and build trust with my customers, my employees and my shareholders?”

The vast majority (96%) of organizations support some level of government regulation around AI, but only 2% of companies have self-identified as having fully operationalized responsible AI across their organization.

Making Responsible AI vital and embedded throughout the organization

Just like other forms of AI, every opportunity offered by generative AI comes with its own set of risks. It’s vital that every organization—including yours—scale this technology in responsible, ethical ways. It’s also essential to put AI governance and the responsible use of AI into practice to mitigate any potential risks—including bias, hallucinations, workforce transformation and displacement, or even cyberattacks.

We’ve done this ourselves through our Responsible AI Compliance program. The program rests on a set of principles based on Accenture’s core values and our overarching Code of Business Ethics. We apply these principles to the AI systems we design and build for internal use and the work we do with clients, partners and suppliers. Accenture's Responsible AI principles are:

  • Human by design: Understand AI’s impact on humans. Ensure that impacts are identified and managed.

  • Fairness: All models should treat all groups equitably. Action is needed to mitigate bias and other consequences.

  • Transparency / Explainability / Accuracy: Disclose AI use. Ensure all can understand and evaluate AI outputs and decision-making processes.

  • Safety: Evaluate potential concerns. Take action to mitigate harm when deploying AI.

  • Accountability: Document enterprise-wide governance structures, with clear roles, policies, and responsibilities.

  • Compliance / Data Privacy / Cybersecurity: Ensure AI complies with relevant laws, is secured against cyberattacks, and that data is protected with appropriate privacy safeguards.

  • Sustainability: Consider the impact on the planet. Deploy with steps to mitigate any impacts.

The Responsible AI Compliance program

How do we act on this broad strategy? The Responsible AI Compliance program includes four essential elements that helped us activate ethical AI for real-world use:

  • Establish AI governance: Raised leadership awareness, established a governance structure, implemented principles, policies and standards, and set up an internal multi-disciplinary program team.

  • Conduct AI risk assessment: Conducted a preliminary risk assessment and a regulatory and enforcement review, and created screening and assessment processes.

  • Enable systematic RAI testing: Institutionalized our approach into a compliance program, implemented standards for purchasing AI, embedded controls into technology, processes and systems, and developed testing tools and persona-based training.

  • Monitor AI compliance on an ongoing basis: Enabled ongoing monitoring and compliance through quality assurance programs, capabilities for tracking compliance program effectiveness, and post-deployment use case compliance checks.

A Responsible AI compliance program will also need to engage cross-functionally, addressing workforce impact, compliance with laws, sustainability, and privacy and security programs across the enterprise.
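To make the screening and post-deployment monitoring steps a little more concrete, here is a minimal, purely illustrative sketch of how an AI use-case intake and risk-screening record might look in code. It is not Accenture’s actual tooling; the RiskTier values, the AIUseCase fields and the screen_use_case logic are all hypothetical assumptions, standing in for whatever criteria a given organization defines.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Hypothetical risk tiers for a proposed AI use case."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCase:
    """Illustrative intake record captured when a team proposes an AI use case."""
    name: str
    uses_personal_data: bool
    affects_individuals: bool   # e.g. hiring, credit or healthcare decisions
    generative: bool            # generative models add hallucination risk
    mitigations: list[str] = field(default_factory=list)


def screen_use_case(case: AIUseCase) -> RiskTier:
    """Assign a preliminary risk tier; higher tiers trigger deeper assessment."""
    score = sum([case.uses_personal_data, case.affects_individuals, case.generative])
    if score >= 2:
        return RiskTier.HIGH
    if score == 1:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Example: a generative chatbot handling customer data is flagged HIGH,
# routing it to a fuller risk assessment and post-deployment monitoring.
chatbot = AIUseCase("customer support chatbot", uses_personal_data=True,
                    affects_individuals=False, generative=True)
print(screen_use_case(chatbot).value)  # -> "high"
```

In practice, the screening criteria, risk tiers and follow-on actions would be defined by the organization’s own governance structure rather than hard-coded as above.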

This is truly an amazing time in the history of mankind. Our responsible use of AI will pave the way to build a better world for us and our future generations.

Arnab Chakraborty / Chief Responsible AI Officer, Accenture

A program that fits all businesses

Accenture’s own responsible AI journey has helped us become a valuable and transparent partner. In a world where consumers are four to six times more likely to buy from, protect and champion purpose-driven companies, our journey is helping us use AI responsibly and is accelerating the path for others to do the same.

For instance, we helped a global retail and pharmacy giant integrate AI strategically and responsibly across its business—mapping AI development across the enterprise, enhancing its ethical AI governance model, and building the responsible AI foundations it needs to use and scale AI across the business.

But it’s not just businesses that must embrace responsible AI. Active collaboration is needed between businesses, politicians, policy leaders, academics and governments. All parties must come together to determine how we can create practical approaches, standards and guardrails that will help manage the risks of AI. Only then can we begin to realize AI’s potential to transform how we work and live and to create better societies for all.