September 03, 2018
Can AIs make decisions that are accurate AND fair?
By: Amit Bansal

It’s pretty clear that artificial intelligence systems (AIs) don’t always make good decisions.

Every day in Australia, AIs make hundreds of thousands of decisions that can deeply affect us. For years, marketing algorithms have been spitting out lists of potential customers to target. But now more complex models are deciding not just our welfare benefits, but also our insurance claims, mortgage applications, tax determinations, creditworthiness – and even our job applications.

In the next few years, AIs will make more and more autonomous decisions on behalf of business and government. As they become embedded in the framework of our lives, we must get better at ensuring AIs make fair, unbiased decisions. Because, despite data scientists’ best efforts, advanced algorithms are getting things wrong and unwittingly leading to discrimination and false accusations.

And people are getting angry.

Last year, teachers in America won a lawsuit against their school district over an AI that assessed their performance by comparing students’ test scores against state averages. Those who rated badly faced the sack. Some teachers felt the system marked them down without good reason, but they had no way of checking whether the program was fair because the company that built the software wouldn’t disclose its workings. They said it was a “trade secret”. A federal judge ruled that use of the AI may have violated the teachers’ civil rights. The school district settled the case – and stopped using the software.

But removing bias is harder than you think

Take the task of assessing a job application, where an algorithm is used to pre-screen hundreds of CVs. Most organisations are smart enough to remove gender, age and name to avoid the obvious biases. But they often leave in postcodes to find people living within commuting distance of the business. The problem is the algorithm doesn’t understand that Sydney’s 2000 postcode skews towards the very wealthy and towards foreign visitors who rent. The postcode becomes a proxy for those attributes, so the model already has an inherent bias – a form of leakage.

And the answer isn’t always as simple as removing the postcode data. Because, as you strip out more and more variables to remove bias, the accuracy of your model starts to deteriorate (the sketch below illustrates the trade-off). At that point, you may need to add in new data – certain skills, for example – to get a better outcome.
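
As a rough illustration, here is a small Python sketch of that trade-off. Everything in it is hypothetical – the applications.csv file, the postcode, skills_score and hired columns, and the choice of a simple logistic regression – but the shape of the experiment is the point: train the same screening model with and without the proxy variable and compare the results.

```python
# A rough sketch only: "applications.csv", "postcode", "skills_score" and "hired"
# are hypothetical names, not taken from any real screening system.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("applications.csv")
df["postcode"] = df["postcode"].astype(str)   # treat postcode as a category, not a number
train, test = train_test_split(df, test_size=0.3, random_state=0)

def fit_and_score(feature_cols):
    """Fit a simple screening model on the chosen columns and report test accuracy."""
    X_train = pd.get_dummies(train[feature_cols])
    X_test = pd.get_dummies(test[feature_cols]).reindex(columns=X_train.columns, fill_value=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, train["hired"])
    return accuracy_score(test["hired"], model.predict(X_test))

print("Accuracy with postcode:   ", fit_and_score(["years_experience", "skills_score", "postcode"]))
print("Accuracy without postcode:", fit_and_score(["years_experience", "skills_score"]))
# If the first number is noticeably higher, the postcode is doing predictive work --
# and some of that work may simply be encoding wealth or residency status.
```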

Adjusting AI integrity

During the development process, data scientists should define, measure and increase the fairness of a new AI application by addressing three issues:

  1. Data bias – Look at the data that feeds an AI tool. Hopefully, the obvious bias factors have been removed. But have you left in variables that correlate strongly with the sensitive ones you took out? For example, gender often correlates with profession. Even if a company removes gender from the data set, a residual impact remains if profession is still part of the set.

  2. Algorithmic fairness – Are all people of the same income group or race treated the same, in terms of false positives and false negatives? If not, the model should be adjusted for equalised impact, making sure people are being treated fairly (a simple check is sketched after this list).

  3. Accuracy – Adjusting for bias and fairness inevitably causes an accuracy trade-off. But how bad is it? In the UK, a credit card company removed the attributes causing bias and leakage from its credit assessment tool. Yes, there was some drop in accuracy. But, overall, the outcome was better. The company decided to live with a less accurate model that makes fairer decisions.
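
For points 2 and 3, a minimal sketch of the check might look like the following. The groups, labels and predictions are toy values invented for illustration, not output from any system mentioned above.

```python
# A minimal, hypothetical check: toy labels and predictions, not real system output.
import numpy as np
import pandas as pd

def group_error_rates(y_true, y_pred, group):
    """False positive and false negative rates per group -- a basic equalised-odds style check."""
    frame = pd.DataFrame({"y": y_true, "pred": y_pred, "group": group})
    rows = []
    for g, sub in frame.groupby("group"):
        negatives = (sub["y"] == 0).sum()
        positives = (sub["y"] == 1).sum()
        fpr = ((sub["pred"] == 1) & (sub["y"] == 0)).sum() / max(negatives, 1)
        fnr = ((sub["pred"] == 0) & (sub["y"] == 1)).sum() / max(positives, 1)
        rows.append({"group": g, "false_positive_rate": fpr, "false_negative_rate": fnr})
    return pd.DataFrame(rows)

# Toy example with two groups: a large gap between the rows is the fairness problem in point 2.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(group_error_rates(y_true, y_pred, group))

# Overall accuracy, measured before and after any adjustment, shows the trade-off in point 3.
print("accuracy:", (y_true == y_pred).mean())
```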

Soft skills are critical

Importantly, when fairness issues arise, we need to look beyond the people who build the models to find the solution. Data scientists have their own special breed of brilliance. But you need another type of person to think about why and how organisations will use AI to make decisions. These people will have the core skills of empathy, creativity, strategic thinking and commercial acumen.

To this point, a recent study of Google's top employees showed their most important qualities were soft skills: communication, empathy and the ability to make connections across complex ideas. STEM (science, technology, engineering and mathematics) expertise came last.

Transparency is key

The more AIs make critical decisions, the more organisations will be called on to defend their judgement. Whether in the boardroom or in court, we will have to unravel algorithms for non-technical audiences. In a simple model, like the Centrelink algorithm, that’s easy to do (see the sketch below). But when you have a neural network, with many interdependent layers and weights, it becomes very complex.
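
To make that contrast concrete, here is a small, entirely hypothetical sketch of what “unravelling” a simple model can look like: the weights of a logistic regression read out almost directly as plain-English reasons, which is exactly what a deep neural network does not offer.

```python
# A hypothetical toy model -- the feature names and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["declared_income_k", "number_of_dependants", "reported_deductions_k"]
X = np.array([[55.0, 2, 1.2],
              [82.0, 0, 9.4],
              [43.0, 3, 0.3],
              [91.0, 1, 8.8]])
y = np.array([0, 1, 0, 1])   # 1 = flagged for review (toy labels)

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight {weight:+.3f}")

# Each weight reads out as a one-line reason ("higher reported deductions pushed this case
# towards review"). A deep neural network offers no such direct reading; explaining it
# requires approximation tools layered on top of the model.
```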

The good news is that many organisations are investing in finding solutions to combat AI bias, with first-generation fairness tools already on the market. Over time, human beings will figure this out – and we will gain comfort with and confidence in the AIs making decisions that affect our lives.

In the meantime, we must be vigilant – continually testing for unfairness and bias creeping into the ever-expanding universe of AI tools.

