
BLOG


Digital Perspectives
New Views. Applied Now.

June 06, 2018
Tackling the challenge of ethics in AI
By: Rumman Chowdhury

During any conversation about AI these days, it’s not long before someone raises the question of ethics. Answers to that question, however, are less easy to find. No surprise. It’s really complex territory to navigate from a regulatory and societal perspective. And the technology itself is evolving so rapidly that the ethical debate is constantly playing catch-up with new developments and applications. But hard as it might be, it’s also one of the most pressing issues society faces today. In short, how can we make sure we deploy AI ethically and responsibly?

Why has this subject recently become so prominent? In a world in which AI influences so many different aspects of our lives, the decisions it makes—and the data it uses to make them—are having real-world impacts. Whether that’s controversies about fake news, the use of personal data, or digital assistants acting beyond their normal parameters, AI ethics-related stories are making headlines.

All organisations pursuing AI initiatives (and that’s pretty much all organisations) have to grapple with the potentially damaging consequences of ethical lapses or failures. With such intense public and regulatory scrutiny, it’s no longer enough simply to tick the right boxes. Doing the right thing, and doing it visibly, are both now imperatives.

Why is this such a massive issue? It’s the nature of AI itself. Unlike traditional technologies, whose outputs are fully traceable to their inputs, self-learning machines play by different rules. With AI, it’s not always possible to know in advance whether outputs will conform to expectations.

AI making potentially life-changing decisions about employment, matchmaking or finance may unintentionally reflect unacknowledged biases in an algorithm (or dataset). As it continues to learn, it may express those biases in profoundly undesirable and ethically unacceptable decisions. Think about the widely covered decisions made by an algorithm used by a US prison service to determine parole eligibility. It displayed a clear bias against particular ethnic groups.

So, how do we go about tackling this potential unfairness as we develop ever more sophisticated AI products and services? Or, to put it another way, how can we help companies to be their best selves and deploy AI so that people will trust it and feel comfortable using it, confident that if something bad happens they’ll have redress?

This is where our new "Fairness Tool" comes into play. We developed it on the back of a Data Study Group carried out by a group of talented early-career researchers at the Alan Turing Institute, and we’re now launching it at CogX. The tool tackles the problem in two ways. The first is to scrutinise the data that goes into an algorithm. That data may include sensitive variables, such as race or gender, that can influence a decision. But it’s not as simple as excluding those discrete variables from the algorithm.

Typically, we’re dealing with multiple variables that influence each other. So what our tool does is identify and remove any coordinated influence that may lead to an unfair outcome. Rather than simply highlighting a problem in your data, it also fixes it.
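
To make the idea concrete, here’s a minimal sketch of one common way this kind of correction can be done: regressing the sensitive attributes out of each remaining feature and keeping only the residuals, so the correlated influence is removed rather than just the sensitive columns themselves. The function name and setup are illustrative assumptions, not the Fairness Tool’s actual implementation.

```python
# Illustrative sketch only -- not the Fairness Tool's implementation.
# Remove the part of each feature that the sensitive attributes can explain,
# so proxies for race, gender, etc. no longer carry coordinated influence.
import numpy as np
from sklearn.linear_model import LinearRegression

def remove_correlated_influence(X: np.ndarray, sensitive: np.ndarray) -> np.ndarray:
    """X: (n_samples, n_features) non-sensitive features.
    sensitive: (n_samples, n_sensitive) encoded sensitive attributes.
    Returns a copy of X with the sensitive-correlated component regressed out."""
    X_fair = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        reg = LinearRegression().fit(sensitive, X[:, j])
        # Keep only the residual: the feature minus whatever the sensitive
        # attributes can predict of it.
        X_fair[:, j] = X[:, j] - reg.predict(sensitive)
    return X_fair
```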

The second area our tool addresses is the actual algorithmic model. For example, it looks at false positives and false negatives. Our tool is able to look at different groups, defined by any particular parameter, and make sure that false positive and false negative rates are fairly distributed. Here again, we’re not just highlighting a problem. We’re also providing the means to resolve it.
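
As a rough illustration of the kind of check described here, the snippet below computes false positive and false negative rates for each group defined by a chosen attribute, assuming binary labels and predictions. The names and toy data are assumptions for the example, not part of the tool.

```python
# Illustrative check: are false positive / false negative rates fairly
# distributed across groups? (Toy data; binary labels and predictions assumed.)
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """Return {group: (false_positive_rate, false_negative_rate)}."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        fp = np.sum((yp == 1) & (yt == 0))
        fn = np.sum((yp == 0) & (yt == 1))
        tn = np.sum((yp == 0) & (yt == 0))
        tp = np.sum((yp == 1) & (yt == 1))
        fpr = fp / (fp + tn) if (fp + tn) else 0.0
        fnr = fn / (fn + tp) if (fn + tp) else 0.0
        rates[g] = (fpr, fnr)
    return rates

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 1])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(error_rates_by_group(y_true, y_pred, groups))
```

A large gap between groups in either rate is the kind of imbalance described above.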

But of course, introducing fairness may also reduce a model’s accuracy. In response, the tool also shows to what extent that may have happened. It puts you in control of deciding how much of a trade-off you’re willing to accept. Overall, our Fairness Tool will help all organisations to root out ethical problems and head them off before they have the chance to inflict any harm.
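
To show what that trade-off can look like in practice, here’s an illustrative comparison: the same classifier trained on the original features and on features with the sensitive-correlated component regressed out, as in the earlier sketch. The synthetic data, names and adjustment are assumptions for the example, not the tool’s own method or numbers.

```python
# Illustrative only: measure the accuracy cost of a fairness adjustment by
# training the same model on original vs. adjusted features (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
sensitive = rng.integers(0, 2, size=(n, 1)).astype(float)
X = rng.normal(size=(n, 3)) + 0.8 * sensitive  # features partly encode the sensitive attribute
y = (X[:, 0] + 0.5 * sensitive[:, 0] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

# Regress the sensitive attribute (plus an intercept) out of each feature.
A = np.hstack([sensitive, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(A, X, rcond=None)
X_adjusted = X - A @ coef

for name, features in [("original", X), ("adjusted", X_adjusted)]:
    Xtr, Xte, ytr, yte = train_test_split(features, y, random_state=0)
    acc = accuracy_score(yte, LogisticRegression().fit(Xtr, ytr).predict(Xte))
    print(f"{name} accuracy: {acc:.3f}")
```

The adjusted model typically gives up some accuracy; seeing the size of that gap is what lets you decide how much of a trade-off you’re willing to accept.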

See it in action in our pavilion at CogX.

Many thanks to those involved in the prototyping of this tool who joined the Data Study Group: Peter Byfield, University of Warwick; Paul-Marie Carfantan, LSE; Omar Costilla-Reyes, University of Manchester; Quang Vinh Dang, INRIA, France; Delia Fuhrmann, University of Cambridge; Jonas Glesaaen, Swansea University; Qi He, UCL; Andreas Kirsch, Newspeak House; Julie Lee, UCL; Mohammad Malekzadeh, Queen Mary University of London; Esben Sorig, Goldsmiths University of London; Emily Turner, University of Manchester.
