It can be easy to forget just how prevalent facial recognition algorithms have already become. From tagging friends on social media, to searching Google Images, to unlocking our phones, they’re an ever-more deeply embedded part of everyday living. But as the technology continues to spread, can we be sure it’s working for everyone?
Simple answer: no. In fact, we’re at risk of letting unintended ethnic and gender biases in our computer vision technologies shut out huge swathes of our communities.
Consider that many offices in the US now use facial biometrics to give access to meeting rooms and other secure spaces. When it works, it’s a great innovation, bringing security and convenience to the workplace. But for the women and ethnic minorities whose faces don’t fit the algorithms – and are locked out of their own office spaces – it’s anything but.
Or consider that a Google search for “CEO” returns images that are overwhelmingly of white men. This kind of everyday bias can have profoundly damaging consequences. For younger generations in particular, for whom digital platforms form a core part of their understanding of the world, there’s a serious risk that these subtle messages start influencing, or even dictating, expectations.
Of course, no responsible organisation deliberately creates a biased AI. But it can happen unintentionally. Many of the datasets used to train facial recognition algorithms predate the machine learning era and are significantly skewed towards white male faces. In fact, the most popular computer vision training set is 78% male and 83% white. That bias carries through into the resulting AI, meaning female and non-white faces are often misrecognised or excluded. And because developers keep sharing the same datasets, the problem persists.
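The effect is easy to demonstrate: a model trained on skewed data tends to perform measurably worse on under-represented groups, and the gap only shows up if you break accuracy down by group. A minimal sketch of that kind of per-group audit (the data, labels, and group tags below are entirely made up for illustration):

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute recognition accuracy separately for each demographic group.

    A large gap between groups is exactly the kind of bias a
    skewed training set produces, and an aggregate accuracy
    figure would hide it.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Illustrative (invented) outputs from a model trained on skewed data:
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 0, 0, 1]
groups = ["male", "male", "male", "male",
          "female", "female", "female", "female"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'male': 1.0, 'female': 0.5}
```

An overall accuracy of 75% here would look acceptable; only the per-group breakdown reveals that one group is served twice as well as the other.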
So how can we change the story and eliminate the bias from facial recognition? Inspired by the fantastic work of Joy Buolamwini and the Algorithmic Justice League, we’ve been developing a new crowdsourced concept – called fairvision – which we think can bring about real change. The core idea: through a trusted platform, citizens allow developers to train their algorithms on their photos and metadata, giving the whole developer ecosystem access to a high-quality, diverse training dataset for computer vision.
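To make the idea concrete, a contribution on such a platform might be modelled as a consent record binding a photo to an explicit, revocable purpose. This is purely our own sketch; the field names and structure are assumptions, not a published fairvision schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical record tying a contributed photo to explicit consent."""
    photo_id: str
    contributor_id: str
    allowed_purposes: list  # e.g. ["bias-audit", "model-training"]
    revoked: bool = False
    granted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def permits(self, purpose: str) -> bool:
        # Consent must be both unrevoked and cover the stated purpose.
        return not self.revoked and purpose in self.allowed_purposes

record = ConsentRecord("img-001", "user-42", ["model-training"])
print(record.permits("model-training"))  # True

record.revoked = True  # the contributor withdraws consent
print(record.permits("model-training"))  # False
```

The point of the design is that consent travels with the data: a developer’s pipeline would check `permits()` for its stated purpose before using any photo, and a revocation takes effect without the contributor having to chase down every downstream user.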
In return for their data, citizens would get clear visibility into and control over what developers planned to achieve by using it. And – importantly – they’d get feedback on exactly how their contribution reduced bias in the resulting algorithms. This is the kind of active and engaged role for end users that has been sorely missing to date.
For developers, the platform would offer access to the diverse data essential for training bias-free algorithms – free of charge. That democratised, non-profit element would be essential. Unbiased data mustn’t be the preserve of big corporates or research organisations. The new breed of entrepreneurs building the next Amazon or the next Google must have easy access too.
Regulators could also benefit. There’s a clear need for new forms of digital-ready legislation that can keep pace with accelerating technology. With greater visibility into algorithmic bias from a platform like fairvision, they could craft more agile approaches to recurring problems – and perhaps even experiment with regulatory changes in new sandbox environments.
Ultimately, fairvision is about standardising good practice and responsibility. That means bringing everything out into the open and fostering a sense of trust and collaboration between all parts of the AI ecosystem. In the end, it’s about ensuring we don’t just build AI for good – we build good AI for everyone.
We’ll be explaining more about our plans for fairvision at our CogX pavilion – come and join the conversation and help us beat the bias in facial recognition for good.