Using algorithms for decision-making offers huge benefits. From reducing drug discovery times, to speeding up and improving the accuracy of medical diagnoses, to helping feed the world by boosting agricultural productivity, algorithms contribute to progress in many different fields.

And using an algorithm to make a decision takes emotion out of the equation, giving a fairer, unbiased decision – right? Well, maybe not. In practice, we’ve seen this isn’t always the case.

Résumé-scanning algorithms have overlooked female candidates for technical positions. Medical imaging diagnostic systems have failed for dark-skinned patients. Criminal sentencing algorithms have discriminated against black defendants. The list goes on.

There are many reasons algorithms can make biased decisions. Even a perfectly “fair” algorithm can lead to biased decisions if it’s used incorrectly in a real-world environment; in that case, it’s often a lack of transparency around how the system is used that causes problems. But I want to focus here on bias that’s encoded during development – for example, when the data used to train an algorithm is biased in the first place.

Without concerted effort in the model development process, bias can be encoded into all of the decisions that the resulting algorithm makes. Humans make biased decisions all of the time, of course. But automated algorithmic decision-making can be scaled to impact huge populations at the click of a button. That makes the risk of encoding bias into an algorithm much worse. And with algorithms being used to make more decisions in more aspects of our daily lives, we need to identify, quantify and address potential bias at all stages of the model life cycle. 

I’ve been leading a key Accenture Labs effort in this space in collaboration with our global Responsible AI practice and the Dock, our global innovation center. We’ve developed tools that let data scientists quantitatively assess fairness across the end-to-end model life cycle – including for state-of-the-art black-box AI models. Many of the algorithms developed before checking for bias became the norm fall into this category, and those models are already in use and making critical decisions today.

Accenture’s Responsible AI practice conducted a week-long hackathon with the Alan Turing Institute in 2018, aimed at translating existing qualitative definitions of fairness into quantitative measures. Building on those efforts, we developed a proof of concept for an algorithmic fairness tool – one that presents its results in a way that is understandable to both data scientists and business users. We tested and refined the tool in a pilot with industry experts at a financial services company.
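To make the idea of a “quantitative measure” of fairness concrete, here is a minimal sketch in Python – illustrative only, not the tool we built – of two metrics that commonly come out of this kind of translation exercise: the demographic parity difference (the gap in favourable-prediction rates between groups) and the equal opportunity difference (the gap in true positive rates). Both need only the model’s predictions, the true outcomes and a sensitive attribute, which is also what makes them usable with black-box models. The function names and toy data here are my own, not part of any product.

```python
# Hypothetical sketch: two common quantitative fairness metrics computed
# purely from model outputs, so no access to model internals is required.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in the rate of favourable predictions between two groups (0 and 1)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true positive rates (recall) between two groups (0 and 1)."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)   # actual positives in this group
        tprs.append(y_pred[mask].mean())      # fraction correctly predicted positive
    return abs(tprs[0] - tprs[1])

# Toy example: binary predictions for eight applicants in two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))
```

A value of zero on either metric means the two groups are treated identically on that definition of fairness; in practice the two definitions often conflict, which is exactly why translating them into numbers, and deciding which to prioritise, takes careful thought.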

The algorithmic fairness tool works on real-world problems, taking this academic thinking from a proof of concept into everyday life. Although we initially applied it to financial services, we are now applying it to a number of industry use cases including health, public service and HR. 

To make the tool relevant for real-world problems, we had to reexamine a number of assumptions. Many of those assumptions make sense for small, clean datasets and straightforward scenarios, but don’t hold for complex data in real-life settings. Technical solutions can address many of these, alongside governance and standards to guide the data scientists and business users. Interpreting the outcomes of fairness metrics isn’t always straightforward – we almost always need a deep dive to determine the root cause of potential bias.
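As an illustration of what such a deep dive can look like – again a hypothetical sketch rather than our tool, with an invented “region” column and toy data – breaking a single headline fairness number down by additional features often shows whether a gap is concentrated in one segment of the data, which is where the root cause usually hides.

```python
# Hypothetical root-cause deep dive: instead of stopping at one headline
# fairness number, break the favourable-outcome rate down by the sensitive
# attribute and another feature to see where the gap actually comes from.
import pandas as pd

# Illustrative data only; in practice this would be the scored dataset.
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "F", "M"],
    "region":   ["north", "south", "south", "north", "north", "south", "north", "south"],
    "approved": [1, 0, 0, 1, 1, 1, 1, 0],
})

# Headline gap: approval rate per gender.
print(df.groupby("gender")["approved"].mean())

# Deep dive: the same rate split by region as well, to check whether the gap
# is concentrated in one segment (e.g. where training data was sparse or skewed).
print(df.groupby(["region", "gender"])["approved"].mean().unstack())
```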

Working on the technical side – having the right tools, governance and processes to assess and address fairness – is both important and fascinating to me. But just as important in addressing bias is tackling the lack of diversity in the artificial intelligence field.

Women and ethnic minorities remain underrepresented in computer science, both at university and in the workforce. This lack of diversity leads to one of the most difficult-to-combat types of bias: unconscious bias. Diversity among experts helps ensure that the potential impacts (and in this case, biases) of a technology are considered from multiple perspectives from the start. Without that diversity, the chance of bias increases sharply.

And this kind of bias can lead to life-or-death situations. For example, we’ve seen instances of facial recognition systems failing to recognize black women as humans because the training data came mainly from white men. This would have serious implications for a self-driving car trying to determine whether there is anyone in its path. Unfortunately, this isn’t a new problem in tech: 20 years ago, because crash test dummies were based mainly on the adult male body, women (and children) were much more likely to die in car crashes.

This long-standing problem is much harder to address than the technical one. It takes concerted effort over time from a wide range of stakeholders across society, including industry, government, and educators.  But we can and must address it, and we need efforts on multiple fronts.

Last year I co-sponsored Accenture’s six-week Women in Data Sci Accelerator in Dublin, which is now an annual event being scaled to other countries. The goal of the Accelerator is to equip women who already work with data in their careers with the skills required to become data scientists.

There is also a longer-term effort: working with the younger generation. Only about a quarter of computer science undergraduates are female. Unless we address this imbalance, it will be very difficult to address the continued imbalance in the workforce. So I focus most of my attention on getting more girls and young women interested in the options out there for a career in STEM.

I work with third-level institutions to shape new data science and AI courses. I also help them think about how to make these courses, and the way they are advertised, more attractive to women – sometimes by tackling the unconscious bias that is often already ingrained at this stage.

I’ve developed efforts for second-level students as well. Initiatives I’ve rolled out here include a girls-only work experience week to learn about STEM, and sponsorship of Junior Math Achievement weekend classes for about 100 11- to 15-year-olds, aimed at giving children who are good at mathematics a place to excel and have fun with the subject. The sponsorship is contingent on good representation of both boys and girls, and of both disadvantaged and privileged schools.

And finally, because all the research shows “you can’t be what you can’t see,” I make myself get out there and speak on these topics (stepping outside my comfort zone!) to give younger women a role model.

Fairness in algorithmic decision-making is of growing importance. But making sure it becomes the norm depends on both technical and societal solutions. The technical piece has to be addressed by the people already in place with the expertise; the second part – ensuring diversity in those groups of experts going forward – is something we all have an obligation, and an opportunity, to make happen.

The efforts I’ve been part of to date do make a difference, but we can and must do more. By making sure that younger generations of all backgrounds, ethnicities and genders get the opportunities and feel empowered to pursue careers in STEM, and specifically in AI, we can help this technology reach its full potential for everyone.

To learn more about our efforts in algorithmic fairness or get involved with our efforts to increase diversity in STEM and AI research, contact Medb Corcoran.

Medb Corcoran

Managing Director at Accenture Labs, The Dock
