
RESEARCH REPORT

From AI compliance to competitive advantage

5-MINUTE READ

June 30, 2022

In brief

  • As companies deploy AI for a growing range of tasks, adhering to laws, regulations and ethical standards will be critical to building a sound AI foundation.
  • 80% of companies plan to increase investment in Responsible AI, and 77% see regulation of AI as a priority.
  • Most companies (69%) have started implementing Responsible AI practices, but only 6% have operationalized their capabilities to be responsible by design.

The rewards of Responsible AI

In a recent report, The Art of AI Maturity, Accenture identified a small group (12%) of high-performing organizations that are using AI to generate 50% more revenue growth while outperforming on customer experience (CX) and Environmental, Social and Governance (ESG) metrics. Among other success factors that have a combinatorial impact on business results, these Achievers are, on average, 53% more likely than others to be responsible by design. That means that they apply a responsible data and AI approach across the complete lifecycle of all their models, helping them engender trust and scale AI with confidence.

Being responsible by design will become more beneficial over time, especially as governments and regulators consider new standards for the development and use of AI. Countries such as the United Kingdom, Brazil and China are already taking action, either by extending existing requirements to cover AI (for example, data-protection regulations such as GDPR) or by developing new regulatory policy.

We surveyed 850 C-suite executives across 17 geographies and 20 industries to understand organizations’ attitudes toward AI regulation and assess their readiness to embrace it. Here’s what we learned.

The role of regulation

Our research shows that awareness of AI regulation is generally widespread and that organizations are well-informed.

  • Nearly all (97%) respondents believe that regulation will impact them to some extent.

  • 95% believe that at least part of their business will be affected by the proposed EU regulations specifically.

Interestingly, many organizations see regulatory compliance as an unexpected source of competitive advantage. The ability to deliver high-quality, trustworthy AI systems that are regulation-ready will give first movers a significant advantage in the short term, enabling them to attract new customers, retain existing ones and build investor confidence.

  • 43% think it will improve their ability to industrialize and scale AI.
  • 36% believe it will create opportunities for competitive advantage/differentiation.
  • 41% believe it can help attract/retain talent.

Our research also reveals that organizations are prioritizing AI compliance and want to invest. Coupled with the belief that Responsible AI can fuel business performance, it’s unsurprising that the majority of respondents plan to increase investment in Responsible AI.

  • 77% indicated that future regulation of AI is a current company-wide priority.
  • More than 80% say that they’ll commit 10% or more of their total AI budget to meeting regulatory requirements by 2024.

Responsible AI readiness

However, most organizations have yet to turn these favorable attitudes and intentions into action.

  • Alarmingly, we found that only 6% of organizations have built their Responsible AI foundation and put their principles into practice. Organizations in this category are prepared to accommodate near-term and ongoing regulatory changes. Because they’re responsible by design, these companies can move past compliance and focus on competitive advantage.
  • A majority of respondents (69%) have some dimensions in place but haven’t operationalized a robust Responsible AI foundation. This group understands the value of Responsible AI, but they have yet to embed it across their entire organization.    
  • Finally, 25% of respondents have yet to establish any meaningful Responsible AI capabilities. This group will have the most work to do to prepare their organizations for regulatory change.

While most companies have begun their Responsible AI journey, the vast majority (94%) are struggling to operationalize all key elements of Responsible AI:

  • 62% - Have developed a governance framework
  • 47% - Have defined an AI risk management framework
  • 39% - Have begun to implement risk mitigation tools and techniques

The question becomes: why? We identified a few primary barriers.

The biggest barrier lies in the complexity of scaling AI responsibly — an undertaking that involves multiple stakeholders and cuts across the entire enterprise and ecosystem. Our survey revealed that nearly 70% of respondents do not have a fully operationalized and integrated Responsible AI Governance Model. As new requirements emerge, they must be baked into product development processes and connected to other regulatory areas, such as privacy, data security and content.

Additionally, organizations may be unsure what to do while they wait for AI regulation to be defined. Uncertainty around rollout process and timing (35%) and the potential for inconsistent standards across regions (34%) were the largest concerns about future AI regulation. This lack of clarity can lead to strategic paralysis as companies adopt a “wait and see” approach. As was the case with GDPR, reactive companies have little choice but to be compliance-focused, prioritizing the specific requirements rather than the underlying risk, which can lead to problems down the road and value left on the table.

Consider these common challenges:

Challenge #1

Responsible AI is cross-functional, but typically lives in a silo.

Most respondents (56%) report that responsibility for AI compliance rests solely with the Chief Data Officer (CDO) or equivalent, and only 4% of organizations say that they have a cross-functional team in place. Having buy-in and support from across the C-suite will establish priorities for the rest of the organization.

Challenge #2

Risk management frameworks are a requirement for all AI, but they aren’t one-size-fits-all.

Only about half (47%) of the surveyed organizations have developed an AI risk management framework. What’s more, we learned that 70% of organizations have yet to implement the ongoing monitoring and controls required to mitigate AI risks. AI integrity cannot be judged at a single point in time; it requires ongoing oversight.
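
The survey doesn’t prescribe tooling, but as an illustration of what ongoing monitoring can look like in practice, here is a minimal Python sketch of one widely used drift statistic, the population stability index (PSI). The function, threshold and synthetic data are our own illustrative assumptions, not part of the research.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               live: np.ndarray,
                               n_bins: int = 10) -> float:
    """Measure how far a live distribution has drifted from its
    training-time baseline (higher = more drift)."""
    # Bin edges come from the baseline distribution's quantiles.
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, n_bins + 1))
    # Clip live values into the baseline's range so every
    # observation lands in some bin.
    live = np.clip(live, edges[0], edges[-1])
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    eps = 1e-6  # avoid log(0) and division by zero in sparse bins
    base_frac = np.clip(base_frac, eps, None)
    live_frac = np.clip(live_frac, eps, None)
    return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))

# Synthetic example: live model scores have shifted since training.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 10_000)
live_scores = rng.normal(0.8, 1.0, 10_000)

psi = population_stability_index(baseline_scores, live_scores)
# A common industry rule of thumb treats PSI above roughly 0.2
# as material drift worth investigating.
status = "flag model for review" if psi > 0.2 else "within tolerance"
print(f"PSI = {psi:.3f}: {status}")
```

Run on a schedule against each production model’s inputs and scores, a check like this turns “ongoing oversight” from a principle into an operational control.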

Challenge #3

There is power in the AI ecosystem, but you’re only as strong as your weakest partner.

AI regulation will require companies to think about their entire AI value chain (with a focus on high-risk systems), not just the elements that are proprietary to them. 39% of respondents cite collaborations with partners as one of their greatest challenges to regulatory compliance, yet only 12% have included Responsible AI competency requirements in supplier agreements with third-party providers.

Challenge #4

Culture is key, but talent is scarce.

Survey respondents reported a lack of talent familiar with the details of AI regulation, with 27% citing this as one of their top three concerns. What’s more, over half (55.4%) do not yet have specific Responsible AI roles embedded across the organization. Organizations must consider how to attract or develop the specialist skills required for Responsible AI roles, keeping in mind that teams responsible for AI systems should also reflect a diversity of geography, backgrounds and ‘lived experience’.

Challenge #5

Measurement is critical, but success is defined by non-traditional KPIs.

The success of AI can’t be measured solely by traditional KPIs such as revenue generation or efficiency gains, yet organizations often fall back on these familiar benchmarks. In 30% of companies, there are no active KPIs for Responsible AI. Without established technical methods to measure and mitigate AI risks, organizations can’t be confident that a system is fair. To our previous point, specialist expertise is required to define and measure the responsible use and algorithmic impact of data, models and outcomes; algorithmic fairness is a case in point.
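
To make “non-traditional KPIs” concrete, the minimal Python sketch below computes the demographic parity difference, one common algorithmic-fairness measure: the gap in positive-outcome rates across groups. It is an illustration under our own assumptions, not a metric the survey prescribes; real Responsible AI programs typically track several complementary fairness measures.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray,
                                  group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between groups.
    0.0 means every group receives positive predictions at the
    same rate; larger values indicate greater disparity."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Illustrative use: loan-approval predictions (1 = approved)
# split by a hypothetical protected attribute.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])
gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.60 - 0.40 = 0.20
```

A metric like this can be reported alongside revenue and efficiency KPIs and given its own threshold and owner, which is what moves fairness from aspiration to measurement.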

While there’s no set way to proceed, it’s important to take a proactive approach to building Responsible AI readiness to overcome or avoid the barriers above.

Becoming responsible by design

Based on our experience helping organizations across the globe scale AI for business value, we’ve defined a simple framework to help companies become responsible by design. This framework consists of four key pillars:

  • PRINCIPLES AND GOVERNANCE - Define and articulate a Responsible AI mission and principles (supported by the C-suite), while establishing a clear governance structure across the organization that builds confidence and trust in AI technologies.
  • RISK, POLICY AND CONTROL - Strengthen compliance with stated principles and current laws and regulations while monitoring future ones; develop policies to mitigate AI risk; and operationalize those policies through a risk management framework with regular reporting and monitoring (a simplified sketch follows this list).
  • TECHNOLOGY AND ENABLERS - Develop tools and techniques to support principles such as fairness, explainability, robustness, accountability and privacy, and build these into AI systems and platforms.
  • CULTURE AND TRAINING - Empower leadership to elevate Responsible AI as a critical business imperative and provide all employees with training to give them a clear understanding of Responsible AI principles and how to translate these into actions.
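
As a hedged sketch of how the “Risk, policy and control” pillar might be made machine-readable, the Python example below encodes a hypothetical risk register and deployment gate. The tier names, required controls and record fields are illustrative assumptions, not part of Accenture’s framework.

```python
from dataclasses import dataclass

# Hypothetical risk tiers, loosely inspired by tiered regulatory
# proposals; real programs would align these with applicable law.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

# Controls each tier must evidence before deployment (illustrative).
REQUIRED_CONTROLS = {
    "minimal": set(),
    "limited": {"transparency_notice"},
    "high": {"transparency_notice", "human_oversight",
             "bias_testing", "ongoing_monitoring"},
}

@dataclass
class AISystemRecord:
    name: str
    risk_tier: str
    controls_in_place: set

def deployment_gate(record: AISystemRecord) -> bool:
    """Allow deployment only if the system's tier is deployable
    and every control required for that tier is evidenced."""
    if record.risk_tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {record.risk_tier}")
    if record.risk_tier == "unacceptable":
        return False
    missing = REQUIRED_CONTROLS[record.risk_tier] - record.controls_in_place
    if missing:
        print(f"{record.name}: blocked, missing {sorted(missing)}")
        return False
    return True

credit_model = AISystemRecord(
    name="credit-scoring-v2",
    risk_tier="high",
    controls_in_place={"transparency_notice", "human_oversight"},
)
print(deployment_gate(credit_model))  # blocked: bias testing and monitoring missing
```

Encoding policy as code in this way means that when a regulation changes, updating one control table re-checks every registered system, rather than relying on manual audits.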

Organizations can use this framework to inform a Responsible AI foundation that allows them to quickly assess the impact of any new regulation and respond to compliance requirements without starting from scratch each time.

All roads lead to responsibility

Scaling AI can deliver high performance for customers, shareholders and employees, but organizations must overcome common hurdles to apply AI responsibly and sustainably. While they’ve historically cited lack of talent and poor data quality/availability as their biggest barriers to AI adoption, “managing data ethics and responsible AI, data privacy and information security” now tops the list.

  • 57% of our survey respondents believe AI is a critical enabler of their strategic priorities.

Being responsible by design can help organizations clear those hurdles and scale AI with confidence. By shifting from a reactive AI compliance strategy to the proactive development of mature Responsible AI capabilities, they’ll have the foundations in place to adapt as new regulations and guidance emerge. That way, businesses can focus more on performance and competitive advantage.

WRITTEN BY

Ray Eitel-Porter

Managing Director – Applied Intelligence, Global Lead for Responsible AI

Ulf Grosskopf

Managing Director – Accenture Strategy, Data for Growth