From underwriting to healthcare, AI-driven automatic decision-making systems are taking over high-stakes tasks. It’s happening for good reason; machine learning and deep learning bring excellent predictive power and can easily tackle tasks that are often arduous to perform manually. But there’s also a challenge to implementing these technologies for real-world use. Their inner workings are both complex and cryptic, leading to the term “black box”—it’s hard to explain how these systems reach their predictions and decisions.

Imagine having a loan application rejected by an algorithm, with no explanation given as to why. Or having an insurance claim automatically denied without receiving any details. Or being denied parole with no understanding of which factors drove the automatic decision.

Being able to understand and interpret the outcomes of AI models is essential to their future use: if people can’t trust the outputs of these models, the models will be rejected. Transparency and explainability are two key ways to build this trust. Explainability is also key for legal compliance—as in finance, where lenders are often required to tell applicants why they have been refused a loan—and is crucial for debugging AI systems.

There are two avenues toward explainable artificial intelligence. In one approach, researchers and engineers design transparent systems that trade some predictive power for interpretability. In the other, they build explanation sub-systems that help interpret the outputs of already-trained "black box" models that are not transparent by design, such as neural networks.

At Accenture Labs, we are working in the latter space with a promising recent approach known as Counterfactual Explanations. Counterfactual Explanations provide the minimal changes to the input data required to obtain a different result. For example, a counterfactual explanation for a rejected loan application would tell the user what changes they would have to make to the “inputs” of the application, like their income, assets, and so on, in order to have the application approved.

This approach ultimately explains why an algorithm has made a particular decision, by showing one or more ways in which the input data could be modified to obtain an alternative decision. Counterfactual Explanations guarantee that any recommended changes to the input data are the smallest possible changes needed to reach a different decision.

A counterfactual explanation of why a black box algorithm denied a loan could look something like this:

You were denied a loan because your annual income is $30,000 and your current balance on hand is $200. If your income had instead been $45,000 and your current balance had been $400 and all other values remained constant, your application would have been approved.
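To make this concrete, below is a minimal sketch in Python of the underlying idea: treat the model as an opaque prediction function and search for the smallest change to the inputs that flips its decision. The `black_box_predict` rule, the two features, the step sizes, and the brute-force search are all illustrative placeholders (chosen so the numbers line up with the example above), not the method described in our paper.

```python
import itertools

def black_box_predict(income, balance):
    """Toy stand-in for an opaque model: approve on income plus balance,
    or on a large balance alone."""
    return ("approved"
            if (income >= 45_000 and balance >= 400) or balance >= 2_000
            else "denied")

def counterfactual(income, balance, income_step=1_000, balance_step=50, max_steps=60):
    """Search outward from the original inputs (increases only, for simplicity)
    for the smallest change, measured in step units, that flips the decision."""
    original = black_box_predict(income, balance)
    best, best_dist = None, float("inf")
    for di, db in itertools.product(range(max_steps + 1), repeat=2):
        cand_income = income + di * income_step
        cand_balance = balance + db * balance_step
        if black_box_predict(cand_income, cand_balance) == original:
            continue  # decision unchanged, so not a counterfactual
        if di + db < best_dist:
            best, best_dist = (cand_income, cand_balance), di + db
    return original, best

decision, (cf_income, cf_balance) = counterfactual(income=30_000, balance=200)
print(f"Decision: {decision}. Counterfactual: income ${cf_income:,}, "
      f"balance ${cf_balance:,}, all other values unchanged.")
```

A real system would work with the full feature set and a far more efficient search, but the contract is the same: query the model only as a black box and return the nearest input that changes the decision.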

A system developed at Accenture Labs offers recommendations to change the outcome of a rejected loan application. It’s interactive, so that users can ask the system to ignore inputs they can’t change (in this example, Annual Income) and give a new recommendation based on other factors.

To guarantee explanations that people can actually act on, our system is interactive: users can exclude specific features from the generation of the explanations, or they can define “validity ranges” that the system must respect when creating them. In the example above, the user may not be able to increase their income as suggested. If so, they can constrain the annual income feature to limit the size of the suggested increase, or exclude it entirely. The system would then create a new closest possible explanation without changing annual income; it might instead give the new, higher minimum balance required in the user’s bank account for the loan to be approved.
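The sketch below extends the toy example with these constraints: a hypothetical `frozen` parameter excludes features from changing, and `valid_ranges` bounds the values the search may propose. Again, the parameter names and the toy model are illustrative, not the interface of our actual system.

```python
import itertools

def black_box_predict(income, balance):
    """Same toy stand-in for an opaque model as in the previous sketch."""
    return ("approved"
            if (income >= 45_000 and balance >= 400) or balance >= 2_000
            else "denied")

def constrained_counterfactual(income, balance, frozen=(), valid_ranges=None,
                               income_step=1_000, balance_step=50, max_steps=60):
    """Smallest decision-flipping change that respects user constraints:
    features listed in `frozen` may not change, and candidates must stay
    inside any (low, high) bounds given in `valid_ranges`."""
    valid_ranges = valid_ranges or {}
    original = black_box_predict(income, balance)
    best, best_dist = None, float("inf")
    for di, db in itertools.product(range(max_steps + 1), repeat=2):
        if ("income" in frozen and di) or ("balance" in frozen and db):
            continue  # the user has excluded this feature from the explanation
        cand = {"income": income + di * income_step,
                "balance": balance + db * balance_step}
        if any(not lo <= cand[f] <= hi for f, (lo, hi) in valid_ranges.items()):
            continue  # outside a user-supplied validity range
        if black_box_predict(**cand) == original:
            continue  # decision unchanged, so not a counterfactual
        if di + db < best_dist:
            best, best_dist = cand, di + db
    return best

# Income excluded: the search falls back to the balance-only route and
# suggests the higher balance ($2,000) that would flip the decision.
print(constrained_counterfactual(30_000, 200, frozen=("income",)))

# Balance additionally capped at $1,500: no counterfactual satisfies the
# constraints, so the system would report that none exists.
print(constrained_counterfactual(30_000, 200, frozen=("income",),
                                 valid_ranges={"balance": (0, 1_500)}))
```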

Counterfactual Explanations provide direct, easy-to-understand, and actionable explanations of the decisions made by algorithms, without requiring an understanding of the internal logic of the AI system that made them. They can help developers build robust models, and they can be deployed as a drop-in enhancement to legacy machine learning pipelines. It’s one key way Accenture Labs is working to deliver explainability in AI.

The application of Counterfactual Explanations has been investigated by Rory McGrath and Luca Costabello at Accenture Labs at The Dock. See our paper on arxiv.org, and contact Luca Costabello or Medb Corcoran for more information.

Luca Costabello

Research Scientist


Rory McGrath

Research Engineer
