RESEARCH REPORT

In brief

  • Many artificial intelligence applications today are effectively “black boxes” lacking the ability to “explain” the reasoning behind their decisions.
  • As AI expands into areas with large impact on people, such as health care, it will be critical to subject the technology to greater human scrutiny.
  • Explainable AI won’t replace human workers; rather, it will complement and support people, so they can make better, faster, more accurate decisions.
  • Use cases for Explainable AI include detecting abnormal travel expenses and managing project risk, based on Accenture Labs research.


The AI stakes are getting higher

Some AI-based services and tasks today are relatively trivial – such as a song recommendation on a streaming music platform.

However, AI is playing an expanding role in other areas with far greater human impact. Imagine you’re a doctor using AI-enabled sensors to examine a patient, and the system comes up with a diagnosis demanding urgent invasive treatment.

In situations such as this, an AI-driven decision on its own is not enough. We also need to know the reasons and rationale behind it. In other words, the AI has to “explain” itself, by opening up its reasoning to human scrutiny.


Accenture Labs, in a new report, details how we can meet the need for more information by giving AI applications the ability to explain to humans not just what decisions they made, but also why they made them.

Explainable AI systems will:

  • Explain their rationale and reasoning, whenever needed
  • Characterize their strengths and weaknesses
  • Compare themselves with other AI systems
  • Convey an understanding of how they will behave in the future
  • Make decisions smarter by augmenting humans with machines, helping the enterprise scale through intelligent decisions

As humans and machines work together even more, effective explanations will be at the very heart of this collaboration.

Explainable AI, ready for takeoff

The transition to Explainable AI is already underway, and within three years, we expect it to dominate the AI landscape for businesses. It will empower humans to take corrective actions, if needed, based on the explanations machines give them. But how will it do this?

There are three ways of manifesting and conveying the reasoning behind AI decisions (a minimal sketch of the second approach follows this list):

1. Using the data behind the machine learning, justifying decisions through comparisons with other examples.

2. Using the model itself, with explanations that mimic the learning model by abstracting it into rules or combining it with semantics.

3. A hybrid approach combining both data and model, offering metadata and feature-level explanations.
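
The report stops at this taxonomy. As a concrete illustration of the second, model-based approach, here is a minimal sketch (assuming Python and scikit-learn, which the report does not prescribe) in which a small decision tree is trained to mimic a black-box classifier, so that its extracted rules can serve as the human-readable explanation:

    # A minimal, illustrative sketch (not Accenture's implementation):
    # train an interpretable surrogate tree to mimic a black-box model,
    # then read off its rules as the explanation.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    X, y = data.data, data.target

    # The "black box": accurate, but opaque to a human reader.
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # The surrogate learns to reproduce the black box's predictions,
    # abstracting its behavior into a small, legible set of rules.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how often the surrogate agrees with the black box.
    fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
    print(f"surrogate agrees with the black box on {fidelity:.1%} of cases")

    # The extracted rules are what a human is shown.
    print(export_text(surrogate, feature_names=list(data.feature_names)))

The fidelity score matters here: a surrogate's rules are only a trustworthy explanation to the extent that the surrogate actually reproduces the black box's behavior.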

"The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust and understanding."

– FREDDY LECUE, Explainable AI Research Lead, Accenture Labs

Two use cases for Explainable AI

No. 1 – Detecting abnormal travel expenses
Most existing systems for reporting travel expenses apply pre-defined views, such as time period, service or employee group. While these systems aim to detect abnormal expenses systematically, they usually fail to explain why the claims singled out are judged to be abnormal.

To address this lack of visibility into the context of abnormal travel expense claims, Accenture Labs designed and built a travel expenses system incorporating Explainable AI. By combining knowledge graph and machine learning technologies, the system delivers the insight needed to explain abnormal claims in real time.
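
The report does not disclose how that system is built. Purely as an illustration of attaching a reason to each flagged claim, the sketch below (assuming Python, pandas and scikit-learn; all claim fields and values are invented) detects outlying claims and reports which attribute deviates most:

    # Illustrative sketch only (not the Accenture Labs system, which also
    # uses a knowledge graph): flag abnormal expense claims and attach a
    # human-readable reason. All claim fields and values are invented.
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    claims = pd.DataFrame({
        "amount": [120.0, 95.0, 110.0, 2400.0, 105.0],
        "nights": [2, 1, 2, 1, 2],
        "meals":  [3, 2, 4, 1, 3],
    })

    # Detect abnormal claims: -1 = abnormal, 1 = normal.
    flags = IsolationForest(contamination=0.2, random_state=0).fit_predict(claims)

    # Explain: per-feature z-scores against all claims, so a reviewer sees
    # not just *that* a claim is abnormal but *which attribute* makes it so.
    z = (claims - claims.mean()) / claims.std(ddof=0)
    for idx in claims.index[flags == -1]:
        worst = z.loc[idx].abs().idxmax()
        print(f"claim {idx} flagged: '{worst}' is {z.loc[idx, worst]:+.1f} "
              f"standard deviations from typical claims")

In the system described above, the knowledge graph supplies the surrounding context (projects, people, locations) that turns a statistical signal like this into a full explanation; the sketch covers only the feature-level part.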

No. 2 – Project risk management
Most large companies manage hundreds, if not thousands, of projects every year across multiple vendors, clients and partners. Outcomes are often out of line with the original estimates because of the complexity and risks inherent in these critical contracts.

This means decision-makers need systems that not only predict the risk tier of each contract or project, but also give them an actionable explanation of those predictions. To address these challenges, Accenture Labs applied Explainable AI, developing a five-stage process to explain the risk tier of projects and contracts.
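
The report describes the five-stage process only at a high level; the sketch below is not that process but a hedged illustration of the underlying idea (assuming Python and scikit-learn, with invented contract features and data): an interpretable model predicts a risk tier, and each prediction is decomposed into ranked factor contributions a decision-maker can act on:

    # Illustrative sketch only (not the report's five-stage process):
    # predict a contract's risk tier with an interpretable linear model
    # and decompose the prediction into ranked factor contributions.
    # Feature names, data and tiers are invented for the example.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    features = ["contract_value_musd", "vendor_count",
                "schedule_slip_pct", "scope_changes"]
    X = np.array([
        [1.2, 1,  0.0, 0],
        [5.0, 3,  5.0, 2],
        [9.5, 6, 20.0, 7],
        [2.0, 2,  2.0, 1],
        [8.0, 5, 15.0, 6],
        [0.8, 1,  1.0, 0],
    ])
    y = np.array([0, 1, 2, 0, 2, 0])  # risk tier: 0 = low, 1 = medium, 2 = high

    scaler = StandardScaler().fit(X)
    model = LogisticRegression(max_iter=1000).fit(scaler.transform(X), y)

    def explain(contract):
        """Predict a risk tier and rank each factor's contribution to it."""
        xs = scaler.transform([contract])
        tier = model.predict(xs)[0]
        row = list(model.classes_).index(tier)
        # For a linear model, coefficient * (scaled) feature value is that
        # feature's additive contribution to the tier's decision score.
        contributions = model.coef_[row] * xs[0]
        return tier, sorted(zip(features, contributions), key=lambda p: -abs(p[1]))

    tier, reasons = explain([7.5, 4, 18.0, 5])
    print(f"predicted risk tier: {tier}")
    for name, value in reasons:
        print(f"  {name}: {value:+.2f}")

A linear model keeps the explanation faithful by construction; with a more powerful black-box predictor, a post-hoc technique such as the surrogate tree sketched earlier would play the same role.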



Measuring effectiveness

Several measures can be applied to assess an explanation's value and effectiveness. These measures capture the elements that people need in an explanation, though they cannot necessarily all be achieved at once. While Explainable AI will use and expose techniques that address these questions, we, as humans, should still expect a trade-off between value and effectiveness.

  • Comprehensibility: How much effort is needed for a human to interpret it?
  • Succinctness: How concise is it?
  • Actionability: How actionable is the explanation? What can we do with it?
  • Reusability: Could it be interpreted or reused by another AI system?
  • Accuracy: How accurate is the explanation?
  • Completeness: Does the explanation explain the decision completely, or only partially?


A technology revolution with people at its heart

Explanation is fundamental to human reasoning, guiding our actions, influencing our interactions with others and driving efforts to expand our knowledge. AI promises to help us identify dangerous industrial sites, warn us of impending machine failures, recommend medical treatments, and inform countless other decisions.

The promise of these systems won’t be realized unless we understand, trust and act on the recommendations they make. To make this possible, high-quality explanations are essential. Read the full report for more on how Explainable AI will help define the future of work.
