Responsible AI in practice — essential but not easy
Despite the real value organizations can achieve through Artificial Intelligence (AI), many still struggle to address the risks associated with it.
In a global survey of risk managers, 58% identify AI as the biggest potential cause of unintended consequences over the next two years. Only 11% say they’re fully capable of assessing risks associated with organization-wide AI adoption.
Bias, discrimination, fairness, and explainability are areas of paramount concern. And while there are some specific definitions for these problem areas, translating them into action involves tough decisions and application-specific constraints.
In response, many enterprises have started to professionalize their approach to AI and data. Those that have put the right structures in place from the start, including Responsible AI, are able to scale with confidence, achieving nearly three times the return on their AI investments compared to those that have not.
But many organizations still struggle to scale Responsible AI proofs of concept across their live processes. So what are the challenges, and how can organizations overcome them and move from principles to practice?
Practitioner insights: The realities of Responsible AI
To answer these questions, we spoke to Responsible AI practitioners (data scientists, AI engineers, lawyers, industrial/organizational psychologists, and project managers) from 19 organizations across four continents. Our analysis indicates that some organizations have struggled to develop a systematic internal approach to converting principles into practice. And our experience shows this is because they underestimate the technical complexity and the scale of people and process change required.
The four pillars of Responsible AI
Organizations need to tackle a central challenge: translating ethical principles into practical, measurable metrics that work for them. To embed these into everyday processes, they also need the right organizational, technical, operational, and reputational scaffolding. Based on our experience delivering Responsible AI solutions to organizations worldwide, we’ve defined four pillars of successful Responsible AI implementations. For each pillar, we’ve summarized key practitioner pain points, recommended how to address them, and included a case study showing what good looks like.
Democratize new ways of working and facilitate human+machine collaboration.
Practitioner pain points: Practitioner interviewees highlighted the need for appropriate performance metrics and recognition for their work. They indicated that too little value was placed on risk mitigation, including prevention of reputational harm. And time pressures often meant that leadership prioritized short-term product success over Responsible AI's long-term benefits.
Recommendations: Strong leadership is pivotal to empower employees and elevate Responsible AI as a business imperative. To democratize this way of working, successful organizations recognize the need for new roles, and actively upskill, re-skill, or hire.
Organizations should nurture cultures that empower individuals to raise concerns over AI systems, without stifling innovation. Clear success criteria, incentives, and training are all critical requirements.
Case study: To learn how we helped a leading European financial services company put Responsible AI organizational principles into practice, download our full report.
Set up governance and systems that enable AI to flourish.
Practitioner pain points: Interviewees indicated that companies struggle with stakeholder misalignment, bureaucracy, conflicting agendas, and a lack of clarity on processes and ownership. Individuals often operate ad hoc, following their own values and assessments. Insufficient clarity on governance and accountability, unnecessary conflicts, and competing incentives across groups ultimately lead to Responsible AI inertia and a reactive mindset.
Recommendations: Establish transparent, cross-domain governance structures. These build internal confidence and trust in AI technologies by identifying roles, expectations, and accountabilities. Creating a cross-domain ethics committee early on can prove invaluable: by defining roles, ways of working, and the authority to govern, such a committee can maintain ongoing procedures and respond on demand.
Case study: To learn how we helped a global communications vendor put Responsible AI operational principles into practice, download our full report.
Make systems and platforms trustworthy and explainable by design.
Practitioner pain points: One of the biggest barriers practitioners' organizations faced was a lack of expertise in defining and measuring the ethical use and algorithmic impact of data, models, and outcomes. Without established technical methods to monitor and mitigate these risks, organizations can't be confident that a system is fair. Companies struggle to integrate academic metrics like algorithmic fairness. Responsible AI cannot be measured in revenue generation or click-through rates, but many businesses still rely on these traditional performance metrics to track practitioners' success.
Recommendations: Architect and deploy AI models, systems, and platforms that are trustworthy, fair and explainable by design. Proven qualitative and quantitative techniques for assessing potential risks equip organizations to reach cross-domain consensus on mitigation.
Invest time in fully understanding the sources of bias. This leads to better informed resolution strategies that match both the organization and the application. Having the right tools to investigate sources of bias and understand the impacts of fairness decisions is critical.
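To make this concrete, a minimal sketch of one such quantitative fairness check is shown below. It computes the demographic parity difference, the gap in positive-outcome rates between groups, on hypothetical decision data. The data, group names, and choice of metric are illustrative assumptions only; real fairness assessments use multiple metrics chosen for the specific application and its context.

```python
# Minimal sketch of a quantitative fairness check: demographic parity
# difference for a binary classifier's outcomes across groups.
# All data and group labels below are hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (e.g., 'approved') decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rates between any two groups.
    0.0 means every group receives positive outcomes at the same rate."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = positive outcome) per group
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # selection rate 0.625
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # selection rate 0.25
}

gap = demographic_parity_difference(decisions)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

A single number like this is only a starting point: a large gap flags a potential issue, but deciding whether it reflects genuine unfairness, and which mitigation fits, still requires the cross-domain, application-specific judgment described above.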
Case study: To learn how we helped Allied Irish Bank put Responsible AI technical principles into practice, download our full report.
Articulate the Responsible AI mission and ensure it’s anchored to company values and ethical guardrails.
Practitioner pain points: Without the right organizational, operational, and technical foundations, interviewees reported that organizations are forced into reactive approaches to Responsible AI, which expose them to significant reputational damage.
Recommendations: Leading organizations clearly articulate their Responsible Business mission, anchored in their values. Ongoing measurement and monitoring of key Responsible AI metrics ensures they’re managing risk and communicating with transparency. Internal stakeholders may doubt the value of ethical principles, but successful organizations embrace these sceptics and the fresh perspective they bring, which encourages the core team to pressure-test the principles they’re defining. The result is often a more considered set of principles in which everyone has confidence.
Case study: To learn how we helped a major government agency put Responsible AI reputational principles into practice, download our full report.
The mission-critical journey to Responsible AI
Successful organizations take a systematic approach to Responsible AI from the start. They understand the scale and complexity of change required. And they address challenges in parallel.
In this way, they move from principles to practice with confidence, support the professionalization of AI enterprise-wide, and put in place the structures needed to prove the long-term value of Responsible AI.
Our conversations identified a range of organizational, operational, technical, and reputational challenges that hold well-intentioned organizations back. While the initial focus is often on ethical and legal requirements, success is also a function of an organization's ability to modify its traditional ways of working to support Responsible AI, and AI more broadly. In undertaking this process, organizations also establish the structures needed to demonstrate the long-term value of Responsible AI by scaling it across the organization, enabling the essential move from "practice to proof."
We use a set of 25 questions to help our clients benchmark their motivators and challenges, as well as their maturity across people, process, and technology, against their peers. Where are you on your Responsible AI journey?