July 28, 2017
Why explainable AI must be central to responsible AI
By: Deborah Santiago & Teresa Escrig

Imagine that you are buying your very first home. You have a good, stable job and carefully submit all your paperwork for a mortgage. Unfortunately, the bank rejects your application. You ask the banker why and, with a blank stare, he simply shrugs: he does not know why you were rejected, only that you have been. Would you find this response satisfactory?

What if the bank consistently rejects applications from people of color, women or people with disabilities, without any explanation?[1] Would you return as a customer?

Inscrutable Inscrutability

One of the things that horrifies people about Artificial Intelligence is the idea that our world will be filled with AI “black boxes” that make important decisions without accountability or recourse. A popular book on this topic is Weapons of Math Destruction, in which Cathy O’Neil argues that these AI black boxes are scary when they:

  1. make important decisions in areas that impact people personally – like in employment, lending, education, housing, etc.

  2. are secretive, thus lacking in accountability; and

  3. are destructive.

As a result, there has been a series of calls lately to regulate algorithms or AI (e.g., Elon Musk at the National Governors Association meeting and Cathy O’Neil’s TED Talk). At the same time, because some forms of AI involve systems that teach and reprogram themselves, AI can be inscrutable by its very nature.

But will humans accept a future in which AI is not held accountable to explain itself?

Explainable AI as the foundation for establishing trust

Recently, DARPA awarded a $6.5M grant to a group of academics exploring Explainable AI, which they define as systems with the ability to:

  • Explain their rationale (illustrated in the sketch after this list)

  • Characterize the strengths and weaknesses of their decision-making process, and

  • Convey an understanding of how they will behave in the future.
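
To make the first of these properties concrete, here is a minimal sketch of a loan-scoring model whose rationale is inspectable by construction. This is our illustration, not DARPA’s or any bank’s actual system: in a linear model, each feature’s contribution to the decision is simply its weight times its value, so the system can rank the factors behind any individual outcome. All feature names, weights and thresholds below are hypothetical.

    import math

    # Hypothetical linear scoring model: each weight encodes how strongly a
    # feature pushes the decision toward approval (positive) or rejection
    # (negative). Illustrative values only.
    WEIGHTS = {
        "income_to_debt_ratio": 2.0,
        "years_employed": 0.4,
        "missed_payments_last_2y": -1.5,
    }
    BIAS = -1.0
    APPROVAL_THRESHOLD = 0.5

    def score(applicant):
        """Approval score in (0, 1): a logistic squashing of the linear model."""
        z = BIAS + sum(WEIGHTS[name] * applicant[name] for name in WEIGHTS)
        return 1.0 / (1.0 + math.exp(-z))

    def explain(applicant):
        """Per-feature contributions to the score, largest magnitude first.

        For a linear model, each contribution is just weight * value, which
        is what makes the rationale directly inspectable.
        """
        contributions = [(name, WEIGHTS[name] * applicant[name]) for name in WEIGHTS]
        return sorted(contributions, key=lambda c: abs(c[1]), reverse=True)

    applicant = {
        "income_to_debt_ratio": 0.8,
        "years_employed": 5,
        "missed_payments_last_2y": 2,
    }
    p = score(applicant)
    decision = "approved" if p >= APPROVAL_THRESHOLD else "rejected"
    print(f"Application {decision} (score {p:.2f}). Main factors:")
    for name, contribution in explain(applicant):
        print(f"  {contribution:+.2f}  {name}")

Unlike the indifferent shrug, this system can tell the applicant that two missed payments outweighed an otherwise solid employment history. Real production models are rarely this simple, and much Explainable AI research targets precisely the models whose rationales are not linear and inspectable; the point here is only to show what “explain their rationale” looks like at its most basic.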

At Accenture, we believe that a Responsible AI deployment should incorporate the concept of “Explainable AI” if a company aims for its AI to be honest, fair, transparent, accountable and human-centric. Ongoing investments from the public and private sectors are essential to make Explainable AI a reality now. The principles guiding Explainable AI efforts enable AI to flourish, and we anticipate increased regulatory and legislative activity in this space in the months to come.

The General Data Protection Regulation (GDPR) and the Right to Explanation

For example, one area that signals current thinking in regulatory spheres, and that should be on any organization’s radar, is the GDPR. Requiring compliance by May 2018, the GDPR is a new data privacy regulation affecting organizations that handle the data of EU individuals, whether or not those organizations are physically located in the EU. For this reason, and because of the staggering fines that could be assessed against a non-compliant organization, the GDPR is considered one of the most significant regulations impacting companies today.

There is some debate as to whether the GDPR effectively creates a “right to explanation”. Just as it sounds, under this possible “right to explanation” a consumer could request an explanation of how an algorithm reached its decision (if no human was involved in the decision-making process).

Say No to the Indifferent AI Shrug

Whether or not the GDPR contains a right to explanation, we believe that companies that embrace Explainable AI will be at a competitive advantage, because doing so future-proofs their AI systems from a regulatory point of view. The reality is that consumers do not want to be faced with an indifferent “AI shrug”: they will demand explanations, seek recourse or vote with their feet. Going back to the mortgage rejection example: if we would never accept a shrug as an explanation from a human banker, why would we accept one from AI?

AI bias merely reflects the same biases that humans have, so it is key to establish strong governance that embeds accountability into the system. To incorporate Explainable AI into an overall AI strategy, we recommend that executives take the following initial steps:

  1. Inventory. What types of decisions do we undertake, or propose to have undertaken by AI, that would require an explanation or create an expectation of one? Are any of them in the areas of employment, recruitment, lending, education, healthcare, housing, safety, etc.? Will any of them indirectly (or in aggregate with other decisions) impact these areas?
  2. Assess. What existing quantitative and qualitative models are used today to provide explanations, and what is their efficacy vis-à-vis the intended users?
  3. Design.  What design principles should we incorporate in order to make our AI understandable and thus more human-centered?
  4. Audit. What process do we have to ensure that the data sets we use are not static but reflect the evolving nature of our workplace? (A minimal sketch of one such check follows this list.)
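
As one concrete illustration of the audit step, the sketch below checks whether the population a model now scores still resembles the population it was trained on, using the population stability index (PSI), a drift measure commonly used in credit modeling. The bin values are hypothetical, and the 0.25 threshold is a common rule of thumb rather than a prescribed standard.

    import math

    def psi(expected_fractions, actual_fractions, eps=1e-6):
        """Population stability index between two binned distributions.

        A rule of thumb often cited in credit modeling: a PSI above 0.25
        signals a shift large enough that the model should be revalidated
        or retrained.
        """
        total = 0.0
        for e, a in zip(expected_fractions, actual_fractions):
            e, a = max(e, eps), max(a, eps)  # guard against log(0)
            total += (a - e) * math.log(a / e)
        return total

    # Share of applicants per income bracket: at training time vs. last
    # quarter (hypothetical numbers for illustration).
    training_distribution = [0.30, 0.40, 0.20, 0.10]
    recent_distribution = [0.10, 0.30, 0.35, 0.25]

    drift = psi(training_distribution, recent_distribution)
    print(f"PSI = {drift:.3f}")
    if drift > 0.25:
        print("Significant drift: the training data no longer reflects current applicants.")

Running such a check on a schedule, per feature and per protected group, turns “audit” from a one-time exercise into an ongoing governance control; the specific metric matters less than the discipline of recomputing it as the data evolves.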

While these are just the initial steps, companies should take stock now to begin their Explainable AI journey. Please join us for future posts, where we will tackle why design matters for understandable AI, compare AI and natural intelligence (and explain why the difference matters), examine the challenges around existing models and why they won’t achieve explainability, and more.



[1] In the United States, we have an unfortunate history of “redlining”, in which some banks rejected mortgage and business loan applications on the basis that they came from “high risk” communities, which also happened to be communities of color. Without turning this footnote into a US history lesson: a series of legislative and regulatory actions in the 1960s, 70s and 80s was passed to combat overt redlining practices, with the aim of holding banks accountable and reestablishing trust. Ongoing studies rightly continue to assess whether less overt forms of redlining remain today.
