An AI governance approach to support innovation
February 8, 2019
Artificial intelligence is developing so quickly that standardized responsible practices—set out in legislation, regulation or industry codes—may take many years to adequately govern intelligent technologies. In the meantime, organizations must self-regulate to ensure AI is used responsibly. But what should this self-regulation look like? And can it work effectively without blocking innovation?
People have been playing the strategic board game Go for 2,500 years. In 2016, Google DeepMind’s AI-powered Go player, AlphaGo, defeated the human world champion, Lee Sedol.
At a pivotal moment in the match, AlphaGo made a bizarre move. “It’s not a human move,” a fellow professional Go player said. “I’ve never seen a human play this move.” But it helped AlphaGo win.
AI is full of such surprises, and not all of them are as “beautiful” as AlphaGo’s new strategy. Amazon, for example, discovered that its machine learning recruitment tool had developed a bias against female candidates; when efforts to remove the bias failed, the company abandoned the tool. Events like this might explain why, in our global research survey of business leaders, we found that 88 percent of respondents do not have confidence in AI-based decisions and outputs.
AI is not just less predictable than traditional computer programming; it is also newer (to most organizations) and already transforming many industries. In the same global survey, we found that 85 percent of business leaders expect AI to open up new products, services, business models and markets, while 78 percent expect AI to disrupt their industry in the next 10 years.
Since AI is both powerful and, at times, unpredictable, there has been growing interest in the responsible governance of AI applications. Businesses, wary of the possible unintended consequences of working with AI, have ramped up this interest. But the concern is that the wrong kind of regulation can stifle innovation and hold back the benefits of AI projects. In other words, there is an immediate need for good AI governance that allows innovation to flourish.
There are two main ways to approach AI governance: the police patrol and the fire warden.
In the police patrol model, AI governance rules are applied from the top down: the organization monitors people and detects violations of the rules in order to enforce compliance. This model seems like a straightforward option, but it tends to stifle innovation because teams see governance as a barrier rather than a shared responsibility.
The fire warden model is different. It embeds skills within teams so that members can spot and escalate issues that need attention, much like training fire wardens to raise the alarm and then carry out the necessary safety actions.
In general, we favor the latter approach for AI governance because it supports innovation and the agile development that is crucial to the competitiveness of today’s businesses—and it can evolve more easily alongside fast-changing AI technology.
The fire warden model gives teams a high level of responsibility and agency over outcomes. That means it’s critical to ensure the teams have the right people, processes and training. Getting all this right starts with three key practices.
At the heart of it, the fire warden approach depends on trust. Trust needs to be strong vertically, between leaders and workers, and horizontally, across teams.
Responsible, sustainable success with AI comes down to raising awareness at an individual level and empowering those individuals to act wisely. This gives us the best chance of supporting effective, agile governance and AI innovation, while helping us anticipate risks early enough to snuff out the sparks.