Build responsibility into your AI

How do we learn to trust AI? Responsible AI builds trust and lays the foundation for successful scaling by taking a "human first" approach: using technology to help people make better decisions, while keeping them firmly accountable through the right governance processes and technical steps. Our AI: Built to Scale research shows that responsibility is more than a "nice to have"; AI Strategic Scalers are significantly more likely to brief their employees clearly on how they tackle responsible AI.

You see the value in AI … but how do you trust it?

AI affords tremendous opportunities, from increasing efficiency and improving outcomes to reimagining industries altogether. Against this backdrop, it's easy to forget that AI's decisions also have a real bearing on people's lives, raising big questions around ethics, trust, legality and responsibility. Enabling machines to make decisions may expose a business to significant risks, including reputational, employment/HR, data privacy, and health and safety issues.

Enter: Responsible AI. It’s a topic that’s becoming pervasive in the media and a real consideration for clients in the public and private sectors.

What happens when a machine's decision turns out to be erroneous or unlawful? The potential fines and sanctions could threaten the business's commercial sustainability. And what about other unintended consequences? AI has already shown it can be biased in ways that weren't anticipated and can hurt a brand's reputation. Amazon, for instance, had to scrap its AI-based recruiting tool that appeared to show bias against women. And if need be, how does a human know when to intervene in a process driven by a machine?

Design trust into how you operate AI

The Board of Directors needs to know what obligations it owes to its shareholders, employees and society at large, to ensure AI is deployed without unintended consequences.

The CEO might be asking: How can I be assured we have thought through AI's possible brand and PR risks? Meanwhile, the Chief Risk Officer and Chief Information Security Officer need to be asking: If we deploy AI, how can we do it in a way that complies with data protection regulations? Creating a robust ethical underpinning for AI allows you to "design out" legal and ethical concerns to the extent possible.

However, it's not just about establishing the appropriate governance structures. It’s also important to translate those ethical and legal frameworks into statistical concepts that can be unambiguously represented in software.
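To make that translation concrete, here is a minimal, hypothetical sketch: it turns one ethical principle, "similar outcomes across groups," into the statistical notion of demographic parity. The function name, sample data and 5-point threshold are illustrative assumptions, not prescriptions from the research.

```python
# Hypothetical sketch: expressing a fairness principle ("similar positive-
# decision rates across groups") as an unambiguous statistical check.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups.

    decisions: list of 0/1 model outputs
    groups:    list of group labels, aligned with decisions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Illustrative policy: flag the model for review if approval rates
# differ by more than 5 percentage points between groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
if demographic_parity_gap(decisions, groups) > 0.05:
    print("Fairness threshold exceeded; route the model to human review.")
```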

So, where to begin?

First, ensure considerations for AI are built into your core values and robust compliance processes. Then implement specific technical guidelines to make sure your AI systems are safe, transparent and accountable, protecting your employees, clients, the wider public and other organizations (a sketch of one such guideline follows the steps below).

Next, identify new and changing roles, and put the right training in place so that your technology specialists and diverse team of experts understand their new roles and remit.

All of these elements are part of an innovation-friendly blueprint for Responsible AI that you can apply across functions and projects—allowing you to understand and manage the ethical implications of everything you do.
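As one illustration of what a technical guideline for accountability might look like, the sketch below logs every automated decision with enough context for a human to audit it later, and flags low-confidence cases for review. It is hypothetical: the field names, the 0.8 threshold and the file format are assumptions for illustration.

```python
# Hypothetical sketch of an accountability guideline: record every
# automated decision with its inputs and model version, and flag
# low-confidence cases so a human knows when to intervene.
import json
import time
import uuid

def log_decision(model_version, inputs, decision, confidence,
                 path="decisions.log"):
    """Append an auditable record of an automated decision to a log file."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "confidence": confidence,
        # Illustrative policy: anything below 0.8 confidence is not
        # acted on automatically but routed to a person instead.
        "needs_human_review": confidence < 0.8,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

record = log_decision("credit-model-v3", {"income": 52000}, "approve", 0.62)
if record["needs_human_review"]:
    print("Routing decision", record["id"], "to a human reviewer.")
```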

Put ethics at the core to build and retain trust

Design in ethical frameworks when you're planning AI. We program algorithms to give us exactly what we have asked for, so we shouldn't be surprised when they do. The problem is that simple algorithms treat all data as immutable, even data about our preferences, income and life situation. What can happen then is that algorithms trap people in their origins, their history or a stereotype. These "bad feedback loops" can have negative impacts on society.
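A toy simulation makes the trap visible. In this hypothetical sketch (the groups, rates and lending framing are all assumptions), both groups repay loans at the same true rate, but the model only ever observes outcomes for applicants it approves, so a pessimistic starting estimate for one group is never corrected by new evidence:

```python
# Hypothetical "bad feedback loop": a model retrained only on the people
# it approved never gathers evidence about the people it rejected, so an
# early skew against one group hardens into a permanent one.
import random

random.seed(1)
TRUE_REPAY = 0.7                              # both groups truly repay 70% of the time
estimate = {"group_a": 0.7, "group_b": 0.3}   # group_b starts under-estimated

for round_num in range(1, 6):
    for group, est in estimate.items():
        outcomes = []
        for _ in range(200):                  # 200 applicants per group per round
            if est >= 0.5:                    # policy: approve only if estimate >= 0.5
                outcomes.append(random.random() < TRUE_REPAY)
        if outcomes:                          # we can only learn from approved applicants
            estimate[group] = sum(outcomes) / len(outcomes)
    print(round_num, {g: round(e, 2) for g, e in estimate.items()})

# group_a's estimate hovers near the true 0.7; group_b's never moves from
# 0.3, because no one from group_b is ever approved to generate new data.
```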

The issues mentioned are not inherent to machine learning algorithms themselves. Instead, issues arise from the way they interact with society and the unintended consequences that can result from those interactions. As such, putting the ethical implications at the heart of the development of each new algorithm is vital.

Just as data privacy and cyber security have moved from department to board-level issues, responsible governance of AI must be quickly elevated in importance by all organizations that use it.

Dr. Athina Kanioura

Chief Analytics Officer and Global Lead – Applied Intelligence


Fernando Lucini

Managing Director – Artificial Intelligence lead, Accenture UKI

MORE ON THIS TOPIC

AI: Built to scale
How to stop AI from reinforcing biases
An AI governance approach that supports innovation
