The AI stakes are getting higher
Some AI-based services and tasks today are relatively low-stakes, such as a song recommendation on a streaming music platform.
However, AI is playing an expanding role in other areas with far greater human impact. Imagine you’re a doctor using AI-enabled sensors to examine a patient, and the system comes up with a diagnosis demanding urgent invasive treatment.
In situations such as this, an AI-driven decision on its own is not enough. We also need to know the reasons and rationale behind it. In other words, the AI has to “explain” itself, by opening up its reasoning to human scrutiny.
Accenture Labs, in a new report, details how we can meet this need by giving AI applications the ability to explain to humans not just what decisions they made, but also why they made them.
Explainable AI, ready for takeoff
The transition to Explainable AI is already underway, and within three years, we expect it to dominate the AI landscape for businesses. It will empower humans to take corrective actions, if needed, based on the explanations machines give them. But how will it do this?
There are three ways of manifesting and conveying the reasoning behind AI decisions made by machines; the full report explores each in detail.
Two use cases for Explainable AI
No. 1 – Detecting abnormal travel expenses
Most existing systems for reporting travel expenses apply pre-defined views, such as time period, service or employee group. While these systems aim to detect abnormal expenses systematically, they usually fail to explain why the claims singled out are judged to be abnormal.
To address this lack of visibility into the context of abnormal travel expense claims, Accenture Labs designed and built a travel expenses system incorporating Explainable AI. By combining knowledge graph and machine learning technologies, the system delivers the insight needed to explain any abnormal claim in real time.
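The report does not publish the system's internals, but the underlying idea of pairing a detection with its context can be illustrated. The Python sketch below is a deliberately simplified stand-in: the names are hypothetical, and a basic peer-group statistic takes the place of the knowledge graph and machine learning models the real system uses. The point is the shape of the output: a flagged claim comes back with a plain-language reason, not just a flag.

```python
# Hypothetical sketch: flag an abnormal travel expense claim and attach
# a human-readable reason. The "context" here (employee role, trip city)
# stands in for what a knowledge graph would supply in a real system.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Claim:
    employee: str
    role: str      # context attribute, e.g. resolved via a knowledge graph
    city: str
    amount: float

def explain_abnormal(history: list[Claim], new_claim: Claim,
                     z_threshold: float = 3.0) -> str | None:
    """Compare a claim against its peer group (same role and city).

    Returns an explanation string if the claim is abnormal, else None.
    """
    peers = [c.amount for c in history
             if c.role == new_claim.role and c.city == new_claim.city]
    if len(peers) < 2:
        return None  # not enough context to judge this claim
    mu, sigma = mean(peers), stdev(peers)
    if sigma == 0:
        return None
    z = (new_claim.amount - mu) / sigma
    if abs(z) < z_threshold:
        return None
    direction = "above" if z > 0 else "below"
    return (f"Claim of ${new_claim.amount:.2f} by {new_claim.employee} is "
            f"{abs(z):.1f} standard deviations {direction} the ${mu:.2f} "
            f"peer average for {new_claim.role}s in {new_claim.city}.")
```

Instead of returning a bare anomaly score, the function returns the comparison that justified the decision, which is the kind of explanation a reviewer can act on directly.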
No. 2 – Project risk management
Most large companies manage hundreds, if not thousands, of projects every year across multiple vendors, clients and partners. Because of the complexity and risks inherent in these critical contracts, actual outcomes often fall out of line with the original estimates.
This means decision-makers need systems that not only predict the risk tier of each contract or project, but also give them an actionable explanation of these predictions. To address these challenges, Accenture Labs applied Explainable AI and developed a five-stage process to explain the risk tier of projects and contracts.
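The report describes the five stages at a high level rather than as code. As a rough illustration of the end result, the Python sketch below (invented feature names, toy data and a two-tier simplification, none of which come from the report) pairs each risk prediction with the per-feature contributions behind it, using a linear model so the score decomposes cleanly.

```python
# Hypothetical sketch: predict a contract's risk tier and surface the
# per-feature contributions behind that prediction. Features and data
# are illustrative, not Accenture's actual process.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

FEATURES = ["contract_value_musd", "duration_months",
            "num_subcontractors", "scope_changes"]

# Toy training data: rows are past contracts, label 1 = high risk.
X = np.array([[1.2, 6, 1, 0], [15.0, 36, 8, 5], [3.5, 12, 2, 1],
              [22.0, 48, 10, 7], [0.8, 4, 0, 0], [9.0, 24, 5, 3]])
y = np.array([0, 1, 0, 1, 0, 1])

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_risk(contract: np.ndarray) -> None:
    """Print the predicted tier plus each feature's signed contribution."""
    z = scaler.transform(contract.reshape(1, -1))[0]
    # For a linear model, the logit is the intercept plus the sum of
    # these per-feature terms, so the breakdown is exact.
    contributions = model.coef_[0] * z
    tier = "HIGH" if model.predict(z.reshape(1, -1))[0] == 1 else "LOW"
    print(f"Predicted risk tier: {tier}")
    for name, c in sorted(zip(FEATURES, contributions),
                          key=lambda t: -abs(t[1])):
        sign = "raises" if c > 0 else "lowers"
        print(f"  {name}: {sign} risk (contribution {c:+.2f})")

explain_risk(np.array([18.0, 40, 9, 6]))
```

A linear model is used here purely because its score decomposes additively into per-feature terms; more expressive models would need a dedicated explanation technique, such as permutation importance, to produce a comparable breakdown.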
Eight measures can be applied to assess an explanation's value and effectiveness. These measures capture the elements that people need in an explanation, but they cannot necessarily all be achieved at once. While Explainable AI will use and expose techniques that address these measures, we, as humans, should still expect a trade-off between value and effectiveness.
A technology revolution with people at its heart
Explanation is fundamental to human reasoning, guiding our actions, influencing our interactions with others and driving efforts to expand our knowledge. AI promises to help us identify dangerous industrial sites, warn us of impending machine failures, recommend medical treatments, and take countless other decisions.
The promise of these systems won’t be realized unless we understand, trust and act on the recommendations they make. To make this possible, high-quality explanations are essential. Read the full report for more on how Explainable AI will help define the future of work.