February 12, 2018
Creating smarter Artificial Intelligence by eliminating bias
By: Accenture UK

Kay Firth-Butterfield


As a barrister and part-time judge, Kay Firth-Butterfield has developed a unique specialty: battling injustice before it happens.

The British human rights lawyer leads the World Economic Forum’s Artificial Intelligence (AI) and Machine Learning programme with a focus on trying to make sure we avoid repeating mistakes of the past — even subconsciously — as we build the technology of the future. One of her key focuses is ensuring we don’t build gender bias into the AI that increasingly makes decisions on everything from creditworthiness to medical diagnoses.

"Where bias is built-in, it starts with the biases of the people who are coding the AI,” she says. “Microsoft has been doing some work to correct such biases. But to do so, you need to have a human being checking the biases, and that human being might have biases themselves.”

Researchers at Boston University and Microsoft combed through the kinds of datasets that are used like dictionaries by AI programmes. They corrected biased correlations, such as the suggestion that men are doctors while women are nurses, to create a gender-bias-free data set. They are now working on a set free of racial bias.
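The datasets in question are word embeddings, the numerical “dictionaries” AI programmes use to represent words. Below is a minimal sketch of the general idea of removing the gender component from vectors for words that should be neutral; the toy vectors, word list and helper function are illustrative assumptions, not the researchers’ actual code or data.

```python
# Toy illustration of "debiasing" word embeddings by removing the
# gender direction from words that should be gender-neutral.
# The vectors below are made up; real systems learn them from large text corpora.
import numpy as np

embeddings = {
    "he":     np.array([ 0.9, 0.1, 0.3]),
    "she":    np.array([-0.9, 0.2, 0.3]),
    "doctor": np.array([ 0.4, 0.8, 0.1]),
    "nurse":  np.array([-0.5, 0.7, 0.2]),
}

# 1. Estimate a "gender direction" from a definitional pair of words.
gender_dir = embeddings["he"] - embeddings["she"]
gender_dir /= np.linalg.norm(gender_dir)

def neutralize(vec, direction):
    """Remove the component of `vec` that lies along `direction`."""
    return vec - np.dot(vec, direction) * direction

# 2. Strip the gender component from occupation words, which should
#    carry no gender information.
for word in ("doctor", "nurse"):
    embeddings[word] = neutralize(embeddings[word], gender_dir)

# After neutralisation, "doctor" and "nurse" have the same inner product
# with "he" as with "she", so the man:doctor :: woman:nurse association
# no longer falls out of the geometry.
for word in ("doctor", "nurse"):
    print(word,
          round(float(np.dot(embeddings[word], embeddings["he"])), 3),
          round(float(np.dot(embeddings[word], embeddings["she"])), 3))
```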

"The other way that bias gets into the machine is, of course, because machine learning depends on data and all data is historic,” Kay says. “That means we are seeing prejudices that we as humans have exhibited coming through into the algorithm."

The rapid developments in AI have raised ethical questions around everything from bias and privacy to its impact on employment. Kay’s current role, which she began in September 2017, is to help ensure that technology doesn’t leave humanity behind. Day-to-day, she looks at how business and government should adopt technology responsibly.

Battling injustice before it happens

“I have always been interested in the intersection of science, global politics and law, and how developments in technology might improve our lives and planet. But education is needed to demonstrate what the technology can actually do,” she says.

“If you’re thinking about gender, for example, a machine that can look at someone’s face and decide whether that person is gay or not, what are the ethics surrounding that?” Kay says. “Should you be using that tool? Should it have been created?”


In her past work as co-founder of the Consortium for Law and Policy of Artificial Intelligence and Robotics at the University of Texas, she taught courses on law and emerging technology in which she explored AI’s potential to promote equality, particularly its ability, if handled right, to correct gender imbalances.

“We need to recruit a diverse group of people,” says Kay, noting AI so far has been developed mostly by men, “but not just women: race, age, persons from the developing world and so on. Diversity, generally, needs to be thought about.”



HOW TO GET INTO THE FIELD OF ARTIFICIAL INTELLIGENCE

Be Determined
When you reach the top of your career path, as I had done as a barrister, it’s actually quite difficult to see where you want to go next and what the right path is for you. Moving from the Bar to academia and then following my passion for the legal and policy space of AI has probably been the most challenging thing. But I was determined to make a difference in a different way.

Want to Make a Difference
There are very few of us working in the artificial intelligence and ethics space. The reason we’re here is because we believe that AI can really help humanity. We need to break down social issues when it comes to technology — technology isn’t a solution to our social issues.

Think Ethically
I think the risk to one’s brand of using unethical artificial intelligence is just starting to dawn on people. It’s about making sure that decision makers, whether government, non-profits, businesses, whoever, understand the technology better so that they can think about putting in ethics advisory panels to consider big topics like bias within the technology. If you can’t say that your technology is inclusive, for example, then you have to think about whether you want to be using it.

Diversity is Key
I was one of a very few female barristers when I qualified for the bar. I am delighted that, now, women outnumber their male colleagues in legal professions and my male colleagues have been wonderfully ready to adapt to our presence. In the work I do presently, I do not feel that I have problems being taken seriously because of my gender, but sadly, our education system has not encouraged women to take up careers in this field (of AI), which has led to a preponderance of men creating AI.

