What is Explainable AI?
Explainable AI (XAI) refers to AI systems that can explain their rationale for decisions, characterize the strengths and weaknesses of their decision-making process, and convey an understanding of how they will behave in the future.
NVIDIA is one company tackling the black-box issue head-on. Drive PX, NVIDIA's AI-infused self-driving car platform, was designed to "teach" itself to drive, but how it did so was not entirely clear. To improve the system, NVIDIA engineers prioritized opening the AI black box and developed a way to get a Drive PX vehicle to explain its driving style visually. The platform displays a video of a recently driven streetscape and highlights the areas it gave the most weight during navigation.
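One common technique for producing this kind of visual explanation is occlusion sensitivity: mask regions of the input and measure how much the model's output drops, then render the drops as a heatmap over the input. The sketch below illustrates the general idea on a toy model; it is not NVIDIA's implementation, and the function names and toy model are hypothetical.

```python
def occlusion_saliency(model, image, patch=2, baseline=0.0):
    """Return a per-pixel importance map: how much the model's score
    drops when each patch of the image is masked out (illustrative)."""
    h, w = len(image), len(image[0])
    base_score = model(image)
    saliency = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            # Copy the image and blank out one patch.
            masked = [row[:] for row in image]
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    masked[di][dj] = baseline
            drop = base_score - model(masked)
            # Every pixel in the patch gets the patch's importance.
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    saliency[di][dj] = drop
    return saliency

# Toy "model": responds only to brightness in the top-left 2x2 corner.
def toy_model(image):
    return sum(image[i][j] for i in range(2) for j in range(2))

# A 4x4 "image" that is bright only in the top-left corner.
image = [[1.0 if i < 2 and j < 2 else 0.0 for j in range(4)]
         for i in range(4)]
saliency = occlusion_saliency(toy_model, image, patch=2)
```

Here the saliency map is high exactly where the toy model is looking (the top-left corner) and zero elsewhere, which is the same intuition behind highlighting the streetscape regions a driving model weighted most.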
An open issue for XAI is who the explanations are for: advanced mathematicians and engineers, or employees and customers? Also, much of the AI deployed today automates more traditional statistical methods, which are more easily explained than the neural-network decisions behind image recognition or self-driving cars. Accenture forecasts the emergence of new "explainer" roles to help fill this void, but work remains to make AI models more readily interpretable.
There are two notable efforts to create XAI: DARPA's XAI program and LIME (Local Interpretable Model-Agnostic Explanations). The US Department of Defense's Defense Advanced Research Projects Agency (DARPA) launched the Explainable Artificial Intelligence (XAI) program to identify approaches that give AI systems "…the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future." LIME is a technique developed at the University of Washington that helps explain individual predictions in an "interpretable and faithful manner."
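LIME's core idea can be sketched compactly: sample perturbations around the instance being explained, query the black-box model on each, weight the samples by proximity to the instance, and fit a simple linear surrogate whose coefficients serve as the explanation. The pure-Python sketch below illustrates that idea on a two-feature toy model; it is not the actual `lime` library, and the black-box function and all names are illustrative.

```python
import math
import random

def black_box(x1, x2):
    # Hypothetical opaque model: a nonlinear decision score.
    return 1.0 / (1.0 + math.exp(-(x1 * x1 + 0.5 * x2 - 1.0)))

def lime_style_explanation(f, x, n_samples=2000, width=0.75, seed=0):
    """Fit a proximity-weighted linear surrogate around instance x.

    Returns (w1, w2): each feature's local influence on f near x.
    """
    rng = random.Random(seed)
    Z, y, w = [], [], []
    for _ in range(n_samples):
        # Perturb the instance and query the black-box model.
        z = (x[0] + rng.gauss(0, 0.5), x[1] + rng.gauss(0, 0.5))
        d2 = (z[0] - x[0]) ** 2 + (z[1] - x[1]) ** 2
        Z.append(z)
        y.append(f(*z))
        # Exponential proximity kernel: nearby samples count more.
        w.append(math.exp(-d2 / width ** 2))
    # Weighted least squares on centered data (2x2 normal equations).
    W = sum(w)
    m0 = sum(wi * zi[0] for wi, zi in zip(w, Z)) / W
    m1 = sum(wi * zi[1] for wi, zi in zip(w, Z)) / W
    my = sum(wi * yi for wi, yi in zip(w, y)) / W
    a11 = sum(wi * (zi[0] - m0) ** 2 for wi, zi in zip(w, Z))
    a22 = sum(wi * (zi[1] - m1) ** 2 for wi, zi in zip(w, Z))
    a12 = sum(wi * (zi[0] - m0) * (zi[1] - m1) for wi, zi in zip(w, Z))
    b1 = sum(wi * (zi[0] - m0) * (yi - my) for wi, zi, yi in zip(w, Z, y))
    b2 = sum(wi * (zi[1] - m1) * (yi - my) for wi, zi, yi in zip(w, Z, y))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)

# Explain the model's prediction at one instance.
w1, w2 = lime_style_explanation(black_box, (1.0, 0.0))
```

At the instance (1.0, 0.0) the first feature dominates the local behavior of `black_box`, so the surrogate assigns it the larger coefficient; that ranking, rather than the opaque model's internals, is what gets reported to the user as the explanation.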