Imagine that you are buying your very first home. You have a good stable job and carefully submit all your paperwork for a mortgage. Unfortunately, the bank rejects your mortgage application. You ask the banker why you were rejected and, with a blank stare, he simply shrugs, indicating he does not know why you were rejected – only that you have been rejected. Would you find this response satisfactory?
What if the bank consistently rejects applications from people of color, women, or people with disabilities without any explanation? Would you return as a customer?
One of the things that horrifies people about Artificial Intelligence is the idea that our world will be filled with AI “black boxes” that make important decisions without accountability or recourse. A popular book on this topic is Weapons of Math Destruction, in which Cathy O’Neil argues that these AI black boxes are scary when they:
make important decisions in areas that impact people personally – like in employment, lending, education, housing, etc.
are secretive, thus lacking in accountability, and
operate at scale, affecting large numbers of people.
As a result, there has been a series of calls lately to regulate algorithms or AI (e.g., Elon Musk’s remarks at the National Governors Association meeting and Cathy O’Neil’s TED Talk). At the same time, given that some forms of AI involve systems teaching and reprogramming themselves, AI can be inscrutable by its very nature.
But will humans accept a future in which AI is not held accountable to explain itself?
Explainable AI as the foundation for establishing trust
Recently DARPA awarded a grant of $6.5M to a group of academics exploring the area of Explainable AI, which they define as systems with the ability to:
Explain their rationale
Characterize the strengths and weaknesses of their decision-making process, and
Convey an understanding of how they will behave in the future.
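To make the first of these abilities concrete, consider a minimal, purely illustrative sketch: a hand-built linear credit-scoring model whose output can be decomposed into per-feature contributions. The feature names, weights, and threshold here are hypothetical and not drawn from any real underwriting model; the point is only that some model families can explain their rationale directly.

```python
# Illustrative sketch only: a toy linear scoring model that can "explain its
# rationale" by reporting how much each feature pushed the decision up or down.
# Feature names and weights are hypothetical, chosen for demonstration.
import math

WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}
BIAS = -0.2

def score(applicant):
    """Return an approval probability plus a per-feature explanation."""
    # Each feature's contribution to the decision is its weight times its value.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))  # logistic link
    return probability, contributions

prob, why = score({"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5})
# `why` tells the applicant which factors helped or hurt their application,
# instead of an unexplained yes/no.
```

A rejected applicant in this sketch could be told, for example, that their debt ratio was the dominant negative factor, which is exactly the kind of rationale an opaque model cannot offer.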
At Accenture, we believe that a Responsible AI deployment should incorporate the concept of “Explainable AI” if a company aims for its AI to be honest, fair, transparent, accountable and human-centric. Ongoing investment from the public and private sectors is essential to make Explainable AI a reality now. The principles guiding Explainable AI efforts enable AI to flourish, and we anticipate increased regulatory and legislative activity in this space in the months to come.
The General Data Protection Regulation (GDPR) and the Right to Explanation
One area that signals regulatory thinking and should be on any organization’s radar is the GDPR. Requiring compliance by May 2018, the GDPR is a new data-privacy regulation affecting organizations that handle the data of EU individuals, whether or not those organizations are physically located in the EU. For this reason, and because of the staggering fines that can be assessed against non-compliant organizations, the GDPR is considered one of the most significant regulations impacting companies today.
There is some debate as to whether the GDPR effectively creates a “right to explanation.” Just as it sounds, under this possible “right to explanation,” a consumer could request an explanation of how the algorithm made the decision that it did (if no human was involved in the decision-making process).
Say no to the indifferent AI shrug
Whether or not the GDPR confers a right to explanation, we believe that companies will be at a competitive advantage if they embrace Explainable AI to future-proof their AI systems from a regulatory point of view. The reality is that consumers do not want to be faced with an indifferent “AI shrug”; they will demand explanations, seek recourse or vote with their feet. Going back to the mortgage rejection example: if we would never accept a shrug as a form of explanation from a human banker, why would we accept it from AI?
AI bias often merely reflects the biases that humans already have, which is why it is key to establish strong governance that embeds accountability into the system. To incorporate Explainable AI into an overall AI strategy, we recommend that executives take the following initial steps:
While these are just the initial steps, companies should take stock now to begin their Explainable AI journey. Please join us for future posts, where we will tackle why design and understandable AI matter, compare AI and natural intelligence (and explain why this matters), examine why existing models won’t achieve explainability, and more.