Fairness you can bank on
Applying algorithmic fairness to the real world of retail banking
Data can be biased by inaccurate information or by human bias introduced when it is collected, and when AI systems learn from such data, they can produce unfair outcomes or decisions.
We’ve already seen many high-profile examples of this, in areas as diverse as criminal justice, the financial sector and facial recognition. This issue could impact any business using AI, and governments are now responding: specific regulation is imminent in several jurisdictions.
Many approaches to algorithmic bias are still nascent: existing quantitative methods for assessing and mitigating bias in predictive models are largely rooted in academia, where they are applied to theoretical problems.
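To make the idea of a quantitative fairness assessment concrete, here is a minimal illustrative sketch of one widely used metric, the demographic parity difference. This is plain Python written for this article, not AIB's or Accenture's actual tooling, and the toy loan-approval data is invented.

```python
def positive_rate(y_pred, group, g):
    """Share of positive decisions (1s) received by members of group g."""
    selected = [p for p, gi in zip(y_pred, group) if gi == g]
    return sum(selected) / len(selected)

def demographic_parity_difference(y_pred, group):
    """Gap in positive-outcome rates between two groups (0 and 1).

    A value near 0 suggests the model approves both groups at similar
    rates; a large value flags potential bias for further investigation.
    """
    return abs(positive_rate(y_pred, group, 0) - positive_rate(y_pred, group, 1))

# Toy example: 0/1 loan decisions for applicants from two groups
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

In this toy data, group 0 is approved 75% of the time and group 1 only 25%, giving a gap of 0.5; a metric like this is one of many a fairness assessment would compute.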
As Ireland’s largest bank, Allied Irish Banks (AIB) wanted to understand how these methods could be applied to real-world retail-banking scenarios.
They wanted to stay ahead of the industry and set benchmarks for how to use data, all while staying true to their brand values of putting their customers first and building trust and appreciation.
Teaming up with Accenture, AIB aimed to get their data-science teams rapidly up to speed on the latest developments in the space, and further enhance the integration of algorithmic-fairness assessment in the models they use to aid their decision-making.
"Working with Accenture has really enabled us to understand and implement cutting-edge methods to measure algorithmic fairness. In today's environment, trust is of ever-growing importance to us, our customers and society at large - these techniques are an important support in ensuring we maintain and enhance this trust."
Drawing on our experts in emerging technology, business application and design, we worked with AIB to understand and overcome the real-world challenges of how to implement AI responsibly, accurately, effectively and at scale.
Our R&D work in applied research labs means we can co-innovate with clients to bring academic research to bear on real-life commercial use cases at speed.
Working together, the Accenture and AIB team set out to apply newly emerging methods for assessing algorithmic fairness to models in the banking industry. Building on research conducted by the Accenture Responsible AI practice in collaboration with the Alan Turing Institute during a week-long hackathon, we took a multidisciplinary approach to build and test a tool that puts these methods into practice.
The algorithmic fairness tool can be used by data-science and business users on real problems – taking this thinking from a proof of concept into the real world. It is integrated with existing in-market data-science tools, with a fairness analysis added as a step in the current workflow.
Analyses can be pushed to a repository for business users, who can then share the results with broader stakeholders to help inform their decision-making.
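As a purely hypothetical sketch of how such a check might slot into a model-development workflow, the step below summarises group outcome rates and flags the model for review when they diverge; the function name, report fields, and threshold are assumptions for illustration, not AIB's actual integration.

```python
import json

def fairness_report(y_pred, group, threshold=0.1):
    """Hypothetical workflow step: compare positive-outcome rates
    across groups and flag the model for human review if the gap
    between any two groups exceeds the threshold."""
    rates = {}
    for g in sorted(set(group)):
        selected = [p for p, gi in zip(y_pred, group) if gi == g]
        rates[g] = sum(selected) / len(selected)
    gap = max(rates.values()) - min(rates.values())
    return {
        "positive_rate_by_group": rates,
        "max_gap": gap,
        # Flagged reports would be surfaced to a multidisciplinary group
        "needs_review": gap > threshold,
    }

# The report could then be pushed to a shared repository, e.g. as JSON
report = fairness_report([1, 1, 0, 1, 0, 0, 1, 0], [0, 0, 0, 0, 1, 1, 1, 1])
print(json.dumps(report, indent=2))
```

Emitting the analysis as a structured report is one simple way to let business users and broader stakeholders consume the results without rerunning the data-science workflow.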
During our work, we validated the tool with AIB and assessed fairness, and the actions needed to mitigate bias, in two new models that were in development. At relevant points in the model-development workflow, analyses from the tool were surfaced to a multidisciplinary group, who then directed further investigation into areas of potential bias. This methodology informed and improved decision-making during the model-build process.
Ultimately, the algorithmic fairness tool gives AIB a deeper understanding of their data and model outcomes from a fairness perspective. This strengthens their ability to mitigate bias during model development and reinforces their confidence in the fairness of their final models.
It took a unique combination of people and skillsets to develop and validate this solution, all driven by the complexity of the challenge and the potential of the new tool.
This involved analytics experts working with leading academic research; designers working hand-in-glove with data scientists; and business and compliance experts collaborating to determine how to practically embed fairness in modelling processes.
We worked in a truly collaborative way with AIB from development to implementation. This involved innovation sessions at The Dock, Accenture’s flagship R&D and global innovation centre, as the tool was developed, along with four months of work and training on-site with the bank. This ensured we fully understood the challenge and real-world complexities and could validate and refine the tool that would integrate seamlessly with their existing systems.
Together, we’ve also assessed the potential intended and unintended consequences of applying the fairness tool across AIB and determined a high-level roadmap to explore these.
This project was driven by the combined efforts of teams at AIB and Accenture who together want to address this challenge for their customers and for society. Getting ahead of the curve shows that AIB has a culture committed to ensuring their customers continue to receive the fairest decisions in the most efficient way possible.
Conscientious innovation is at the heart of Accenture’s evolving mission to improve the way the world works and lives, and the Algorithmic Fairness tool is an example of this in action.
We have given AIB a new approach. AIB’s modelling teams are now self-sufficient on the tool and are able to independently use it in their ongoing work.
With this tool, we enabled AIB to integrate a data-driven assessment of the complex problem of algorithmic fairness into the end-to-end model lifecycle, breaking it down into manageable, understandable ‘chunks’ for data scientists and business executives. The tool also supports the mitigation of risks arising from AI and predictive modelling more generally – one of the most pressing issues facing organisations working with large volumes of data.
In addition, the algorithmic fairness tool has empowered AIB’s data science teams to quickly apply emerging techniques to assess for bias.
This work has given AIB confidence in, and a deeper understanding of, their models, the fairness domain, and the new governance models and tools needed to continue delivering fair, trustworthy banking for their customers. It positions them as an industry leader in this space and helps them anticipate and prepare for upcoming regulatory change.
Enabled AIB to integrate a data-driven assessment of the complex problem of algorithmic fairness into the end-to-end model lifecycle.
Empowered AIB to continue to deliver fair, trustworthy banking for their customers.
Positioned Allied Irish Banks as an industry leader and helped them prepare for upcoming regulatory change.