Thanks to their beguiling simplicity, the uptake of AI technologies – by consumers, citizens, businesses, and governments – is likely to be fast. Unlike with most previous technological step-changes – electricity, cars or computers, for instance – humans will often not need to adapt significantly or learn how to use AI. For AI designed to be intuitive, for example by allowing users to communicate in natural language, the learning curve will be much shallower and adoption much quicker.

But that simplicity means we need to be all the more sensitive to the risks. Given the pace of technological development, Ray Eitel-Porter, Accenture’s Applied Intelligence Lead in the UK, and Dr. Brent Mittelstadt, Turing Research Fellow and British Academy Postdoctoral Fellow at the University of Oxford, discuss how we can best approach AI in a responsible and ethical way.

As Eitel-Porter explains, there have already been some infamous examples of algorithms getting it badly wrong – with very serious consequences for those affected. These include cases in the US where algorithms used to determine teacher performance and pay, or to guide sentencing decisions, proved to be subject to racial bias or simply produced incorrect determinations.

He believes organisations need to be transparent in their use of AI to maintain trust and to avoid bias, both in the algorithms they create and in the datasets used to train them. They should also offer a right of appeal against decisions taken by an algorithm, account for security concerns, and consider how humans will take back control from an AI where necessary.


Regulation – both imposed by governments and self-imposed by organisations themselves – will be a key part of this equation. Businesses must keep abreast of upcoming changes to the regulatory environment. But they must also – crucially – consider a self-regulation framework to support innovation, especially as the technology will likely outpace the regulators.

Accenture’s overriding approach to these issues is to put humans at the centre of the question and develop a framework for the responsible use of AI around them. For Eitel-Porter, perhaps the most important aspect of this framework is ‘explainability’. That is, a recognition that ultimately people will trust what they can understand.

However, Dr. Mittelstadt states that explaining an algorithm’s decision can be surprisingly difficult. That’s because machines learn in very different ways to humans. And their inner workings might not make obvious sense to a data scientist, let alone a customer or citizen affected by a decision. Moreover, even if a person could in theory understand the decision, they’d need access to the underlying system to do so. Even then, if they don’t know the provenance of the data used to train the system – and potential biases in it – they can’t really understand how the AI came to the decision it did. So this ‘opacity problem’ is multidimensional.

It’s a problem the Alan Turing Institute has been attempting to solve, and ‘counterfactual’ explanations are a highly promising research area. Imagine, for example, a bank customer who is told an algorithm has denied their loan application. A counterfactual explanation would say something like: “You were denied a loan because your income was £30,000 per year. It would have been accepted had your income been £45,000.” Multiple such explanations can be offered for each decision.

In this way, the customer not only understands key parameters that influenced the decision (income) but also has a means of challenging it (if it was based on a factually incorrect or unreasonable assessment) or can see a pathway to achieving a different result (the £45,000 threshold). Moreover, these counterfactual explanations neatly sidestep the need to understand the inner workings of the underlying system.
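
To make this concrete, here is a minimal sketch in Python of how such a counterfactual might be generated. The loan-approval rule, the £30,000 and £45,000 figures, and the search step are all illustrative assumptions rather than any real bank’s model; the point is that the explanation is produced by probing the decision function with alternative inputs, not by opening up its inner workings.

    # A minimal, illustrative sketch of a counterfactual explanation.
    # The decision rule below is a toy assumption, not a real credit model.

    def approve_loan(income: float, debt: float) -> bool:
        """Toy rule: approve if income less half the debt reaches £30,000."""
        return income - 0.5 * debt >= 30_000

    def counterfactual_income(income: float, debt: float, step: float = 500.0):
        """Find the smallest income (holding debt fixed) that would flip
        a rejection into an approval, searching upwards in small steps."""
        if approve_loan(income, debt):
            return None  # already approved; no counterfactual needed
        candidate = income
        while candidate <= income * 3:  # bound the search
            if approve_loan(candidate, debt):
                return candidate
            candidate += step
        return None  # no counterfactual found within the search bound

    income, debt = 30_000, 30_000
    needed = counterfactual_income(income, debt)
    if needed is not None:
        print(f"You were denied a loan because your income was £{income:,}. "
              f"It would have been accepted had your income been £{needed:,.0f}.")

Crucially, the same probing approach works whether the decision function is a simple rule like this or an opaque machine-learning model: generating the explanation only requires the ability to query the model with hypothetical inputs.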

It’s a potentially elegant way to provide meaningful, easily understood, and practical information about algorithmic decision-making – a key part of developing a responsible and ethical approach to AI.

Ray Eitel-Porter

Managing Director – Accenture Applied Intelligence Lead UKI


Dr. Brent Mittelstadt

Turing Research Fellow and British Academy Postdoctoral Fellow – University of Oxford
