Explainable AI
September 21, 2018
As with all innovation, new opportunities with AI don’t come without risk.
It helps to begin any AI journey with a clear view of the possible risks in four key areas:
To find the answers, a business must take a responsible approach to AI by placing humans at the center of its thinking. It can mitigate the risks that AI brings by adopting four imperatives:
Accenture's video discusses responsible AI and how it addresses the gap in making AI systems more transparent, reliable, and interpretable.
What’s the one thing that will, above all, help build public trust in AI? Explainability. In other words, being ready to explain how AI came to a decision or solution. Explainability is already a compliance challenge in regulated industries like financial services. But there’s a broader consideration too: humans are inherently more likely to trust something they understand.
It’s not always easy to achieve. By their very nature, machine learning algorithms and deep learning neural networks can be “black boxes,” making it very hard to explain how they come to their decisions. But explainable AI solutions are now emerging. And they’re going to be a vital part of using the technology in the years to come.
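One widely used family of explainability techniques treats the trained model as a black box and probes it from the outside. A minimal sketch, assuming scikit-learn is available, is permutation feature importance: shuffle one feature at a time and measure how much the model's score degrades, revealing which inputs the model actually relies on. The dataset and model below are purely illustrative.

```python
# Hedged sketch: permutation feature importance on a black-box model.
# Assumes scikit-learn is installed; data is synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic dataset: 5 features, only 2 of which carry signal.
X, y = make_classification(
    n_samples=500, n_features=5, n_informative=2,
    n_redundant=0, random_state=0,
)

# Train an opaque ensemble model (the "black box").
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record the drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

Model-agnostic probes like this do not open the black box itself, but they give a defensible, human-readable account of which inputs drove a decision, which is often what regulators and customers are actually asking for.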