Drawing on our expertise in emerging technology, business applications and design, we worked with AIB to understand and overcome the real-world challenges of implementing AI responsibly, accurately, effectively and at scale.
Our R&D work in applied research labs means we can co-innovate with clients to bring academic research to bear on real-life commercial use cases at speed.
Working together, the Accenture and AIB team set out to apply newly emerging methods for assessing algorithmic fairness in the banking industry. Building on research conducted by the Accenture Responsible AI practice in collaboration with the Alan Turing Institute during a week-long hackathon, we took a multidisciplinary approach to build and test a tool that puts these methods into practice.
The algorithmic fairness tool can be used by data-science and business users on real problems – taking this thinking from a proof of concept into the real world. It is integrated with existing in-market data-science tools, with fairness analysis added as a step in the current workflow.
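The article does not describe the tool's internals, but a fairness-analysis step of this kind typically compares model outcomes across demographic groups. The sketch below is an illustrative assumption, not AIB's or Accenture's actual implementation: it computes a demographic parity gap (the difference in positive-outcome rates between groups), one common metric such a workflow step might surface to reviewers.

```python
def demographic_parity_gap(preds, groups):
    """Largest difference in positive-outcome rates across groups.

    preds  -- binary model decisions (1 = positive outcome, e.g. approval)
    groups -- group label for each prediction (e.g. a protected attribute)
    """
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical example: approval decisions for applicants in groups "A" and "B"
preds = [1, 1, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # group A 0.75 vs group B 0.50 -> 0.25
```

In a workflow like the one described, a metric such as this would be computed automatically during model development and flagged for review when it exceeds an agreed threshold.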
Analyses can be pushed to a repository for business users, who can then share the results with broader stakeholders to help inform their decision-making.
During our work, we validated the tool with AIB, assessing fairness and identifying the actions needed to mitigate bias in two new models that were in development. At relevant points in the model-development workflow, the tool surfaced analyses to a multidisciplinary group, which then directed further investigation into areas of potential bias. This methodology informed and improved decision-making during the model-build process.
Ultimately, the algorithmic fairness tool gives AIB a deeper understanding of their data and model outcomes from a fairness perspective. This strengthens their ability to mitigate bias during model development and reinforces their confidence in the fairness of their final models.