The public debate about AI has tended to focus on alarmist soundbites reflecting concerns about automation’s impact on jobs. The data presents a mixed picture. The OECD predicts an average 14 percent impact on jobs. At the same time, we’re seeing unemployment rates at record lows in many countries, although many of those jobs are ‘low quality’. As we move into the AI era, the principal challenge is how to develop the new, relevant skills and talent that will be needed.
So, whose responsibility is this? Government, business or citizens? In fact, a community response involving all three is the only answer. Interest and investment in AI are soaring. Governments are using the number of AI start-ups they can attract as a benchmark for success. At the same time, a recent Accenture survey across 10 countries showed just 3 percent of companies want to invest more in reskilling their people, and only 25 percent of CXOs think their workforce is AI-ready. Yet employees are growing impatient and want to start working with smart new technologies. So how can countries adapt to AI and capture its value, while helping its benefits flow more equally through society?
At CogX 2018 I discussed these fundamental questions with Stephen Hennigan, from the UK government’s Office for AI, digital and technology policy, and Svenja Falk, Managing Director of Research at Accenture. Stephen explained how the UK government “is focused right now on bringing the various communities together. We’ve dedicated a number of bodies to take responsibility for supporting AI’s growth and development in a collaborative framework that is supporting both the growth of AI and broader questions of trust and responsibility. Once these initiatives are bedded down, we will have a much more coordinated AI landscape in the UK.”
For Stephen, a key issue is that AI must not be seen as a standalone development. Instead, it’s a universal like electricity or the internet that will affect everyone and everything. It’s essential, then, to use events like CogX to extend conversations about AI into new areas. As Stephen put it, “That’s what we are trying to do in government. For example, the Prime Minister has announced a new goal to use AI in health diagnostics that could prevent up to 22,000 deaths a year. It’s a great example of how everyone can get involved to think about how AI can improve activity in their own sector.”
Critically, regulation needs to be built from the bottom up to evolve with the technology. That means government and business working with existing regulators to increase their understanding of AI’s impact and what it means for people. One of the key roles for government that Stephen drew out is the ability to inspire confidence among smaller businesses and the public. It’s essential that they feel empowered to explore AI, put it to work and, in doing so, boost UK growth.
Putting AI to work where it can generate the most value is the stage many businesses are now reaching. The age of experimentation is drawing to a close as businesses develop a more granular understanding of the technologies under the broad banner of AI – and a clearer picture of the value that’s available. Hype is giving way to pragmatism, with more targeted and bolder use cases starting to emerge. And in some industries, like finance, where applications are being pushed further than most, there’s a clear need for companies to work with regulators to help them keep pace with developments.
Use cases are becoming more sophisticated, as is the use of AI to target propositions at consumers. This raises questions of bias (in the data and the algorithms) that need to be addressed to allay society’s concerns and demonstrate responsible deployment of the technologies. It’s where regulators could add value by providing guardrails within which developments can be rolled out.
Multi-party dialogue between government, businesses and citizens will play a vital role in AI’s evolution from now on. It’s the only way to understand deep-rooted concerns and put in place the trusted legal frameworks that will drive responsible AI and, in doing so, ensure the UK is an attractive centre for innovation in this space for years to come.
After all, at a macro level, we have all the right building blocks in place: great schools and colleges, strong skills and vibrant developer communities. What’s needed to complete the equation is investment capital. And that will really start to flow when we’ve established a rock-solid foundation of clarity (and clear boundaries) for how, when and where AI is used.