Accenture Global Data Science Lead Fernando Lucini says there are clear steps for how organisations can ensure they adopt AI in a way that is responsible by default.
The internet is full of photos of disastrous DIY projects where people lacked the skills, or hadn't fully considered the consequences, and ended up failing catastrophically. The outcome would be very different for, say, an architect. An architect would never have built your little gazebo without a foundation, leaving it to sink into the sand when the rain comes.
Unintended consequences are what happen when people, or organisations, are not consistent and professional in how they deal with things. When it comes to powerful technology like artificial intelligence, the potential negative consequences of taking the amateur approach rather than the architect's are massive. Responsible AI means professionalising the field in a way that puts us all on the path towards strengthening what we build, so that we don't create systems that draw on bad data, perpetuate bias, or enable unethical uses.
Responsible AI is all about creating a framework that ensures we use the technology in ways that are ethical, transparent and accountable. Doing that, in turn, helps us and our clients better meet our customers’ expectations, our organisation’s values and the greater needs of society.
As we scale AI, we need to consciously build in ways of making ethics and responsibility a natural part of how we do things. Many of our clients, with our guidance, are putting in place ethics boards as part of a complete lifecycle for AI and data science. That means that at the right times in the process, these boards – which represent a broad set of skills and disciplines – guide the work, examining it from all critical dimensions. In some cases, these teams might include academics, community members, elected officials and so on.
Our advice is clear: go to a diverse and organised panel and explain to them what you want to do. Banks do it for risk. Why wouldn't you do it for ethics?
It’s part of putting up guardrails to ensure there’s a thinking process behind your choices and that you pull in considerations you might not otherwise have thought about. This move towards professionalisation is key – otherwise we're stuck. That’s why at Accenture we’ve developed methods – and follow those methods – to identify and avoid unintended consequences.
For us, it’s about balancing the technological with the human side of things. By combining human-centric design with the social and behavioural sciences, you drive more responsible business practices. The better the blend of skills, the more sustainable the end result.
It’s about having safeguards that force you to ask questions like: what kind of problems am I going to solve? Are there other problems our stakeholders and citizens want us to solve? When we look at the data, is it biased? Is there fairness in what I'm doing with the data and the algorithm? Is the peer review skewed because the reviewers all look the same and think the same?
People need to be told what is expected of them, and by building in that human lens you can ensure innovation is balanced with fairness. That is how organisations can make ethics and responsibility a natural part of how they do things.
Copyright © 2020 Accenture. All rights reserved