Although artificial intelligence (AI) brings financial services many new possibilities, it also introduces issues and difficulties. One such issue is so-called 'black box AI', where service providers lose the ability to explain an algorithm’s outputs and insights. To remain competitive, providers face the challenge of meeting transparency requirements while innovating continuously to serve changing customer needs.


This article is the second in a series of two, where we discuss the outcomes of the roundtable on the topic 'Black boxes and transparency within AI-based financial services'.

Transparency and explainable AI

In the first article of this series of two, we discussed the outcomes of the roundtable on applying AI to financial services to the benefit of the customer. As mentioned in that article, legal frameworks require financial services providers to act in their customers’ best interests and to act responsibly in line with their duty-of-care obligations.



One of the key components of these requirements is ensuring transparency and being able to explain AI-based decisions. In a services landscape where AI-based services and automated advice are gaining territory, the need for transparency becomes increasingly important, especially within financial services.

But how do we determine whether a service is transparent? And what can we do to create transparency in the black boxes of AI? In this article, we will explain how AI-based financial services and transparency go hand-in-hand and create value for customers, businesses, and society.


Do we want to know it all?

You have just bought a new car. To be allowed on the road, you open the mobile application of your insurer InsureCar to buy an instant car insurance policy. The application shows you a premium of €70 and you accept it without hesitation. Proudly, you show your car to your neighbor and tell him about the high-quality service of InsureCar.

Two months later, you see your neighbor enter his driveway with that exact same car. He tells you how much he liked your car and that buying one was a no-brainer once he discovered he could get it for a good price at his father-in-law’s car dealership. Casually, your neighbor also thanks you for referring him to InsureCar and mentions that he took out an instant car insurance policy with a premium of €50 via InsureCar’s application. Thunderstruck, you go inside, call InsureCar, and ask for an explanation of the difference in premium.



As it turns out, InsureCar has just implemented an algorithm that automatically determines car insurance premiums and gives an instant result without explaining how the premium was calculated. Frustrated by this lack of transparency, you decide to file a complaint. And while you're at it, you copy in the Netherlands Authority for the Financial Markets and the Privacy Authority as well, explaining that InsureCar discriminates in its acceptance policies and is not transparent about the automated decisions made toward you as a customer.

This example shows that while AI-based financial services can offer excellent value, compliance with transparency requirements remains a challenge. During the roundtable, we hosted a debate on the value of transparency in AI-based financial services, and many interesting questions popped up.

Is transparency something that customers really want, and is it therefore a unique selling point for financial services providers? Or is it a must from a duty-of-care and compliance perspective? And from a business value perspective, how can we ensure transparency toward different stakeholders as algorithms and decisions become increasingly complex?


Black box AI: a myth or a problem?

Algorithms and AI-based services are often perceived as complex and are generally associated with so-called black boxes. A black box is an AI system or algorithm whose users are unable to track how it derived its output and, hence, don't know how certain decisions and insights are produced.

The roundtable’s participants agreed that AI has great potential to improve the lives of customers by offering more personalized and more readily available financial services. Looking into industry research on the benefits for banks, analysts expect AI to realize $1 trillion in cost savings in the financial services industry by 2030, making it extremely relevant for you to start using the potential that AI can bring to your business.

An example of how AI already contributes to financial services, for both customers and service providers, is the application of machine learning to analyze transactions to detect fraud and avoid money laundering. From a customer’s point of view, cumbersome procedures are handled more rapidly. From a service provider’s point of view, costs are saved by freeing up analyst capacity and redeploying analysts to follow up on the raised red flags. Even though this shows the great potential of AI for the banking industry, the concerns and discussion around black box AI remain.
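To make the transaction-screening idea concrete, here is a minimal sketch, not any particular bank's method, that flags a transaction whose amount deviates strongly from an account's history using a simple z-score rule. The account history, amounts, and threshold are all hypothetical; production systems use far richer models.

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount is a statistical outlier
    relative to the account's history (simple z-score rule)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # no variation in history, nothing to compare against
    z = abs(new_amount - mu) / sigma
    return z > z_threshold

# Hypothetical account history, amounts in euros
history = [20, 35, 18, 42, 25, 30, 22, 38]
print(flag_suspicious(history, 30))    # typical amount → False
print(flag_suspicious(history, 5000))  # large outlier → True
```

A rule this simple is obviously explainable; the tension discussed in this article arises precisely when such rules are replaced by models whose decision logic is harder to trace.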

BY 2030, $1 TRILLION in cost-savings can be achieved in the financial services industry [Source: The Financial Brand]

To exemplify this discussion, think about a bank using machine learning to analyze the approval of mortgage applications. When, for instance, applicants’ ethnicity feeds into the algorithm, cases of discrimination are likely to occur. Even if the bank argues that this is not possible because ethnicity is not included as a variable, other variables could directly or indirectly signify someone’s ethnicity. When ethnicity comes into play in assessing mortgage applications, the consequences and the societal and legal drawbacks would be significant.
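The proxy-variable risk described above can be checked empirically. The sketch below is a simplified illustration with made-up data: it measures how well a candidate feature (say, a postcode) predicts a sensitive attribute. If knowing the feature makes that prediction much better than the overall baseline, the feature is likely acting as a proxy and deserves review.

```python
from collections import Counter, defaultdict

def proxy_strength(feature_values, sensitive_values):
    """How well does a candidate feature predict a sensitive attribute?
    Returns (per-feature majority-class accuracy, overall baseline);
    accuracy far above the baseline suggests a proxy variable."""
    by_feature = defaultdict(Counter)
    for f, s in zip(feature_values, sensitive_values):
        by_feature[f][s] += 1
    # Predict the majority sensitive class within each feature value
    correct = sum(c.most_common(1)[0][1] for c in by_feature.values())
    acc = correct / len(sensitive_values)
    # Baseline: always predict the overall majority class
    baseline = Counter(sensitive_values).most_common(1)[0][1] / len(sensitive_values)
    return acc, baseline

# Hypothetical applicant records: postcode vs. illustrative group label
postcodes = ["1011", "1011", "1011", "2022", "2022", "2022"]
groups    = ["A",    "A",    "A",    "B",    "B",    "A"]
acc, baseline = proxy_strength(postcodes, groups)
print(round(acc, 3), round(baseline, 3))
```

Here the postcode predicts the group label noticeably better than the baseline, which is exactly the pattern that would warrant a closer statistical and legal review before using the feature.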

Opening the black box: data and algorithms

Financial services providers should, no matter what, be able to explain the decisions they make, automated or not. AI-based decisions can be traced back to the two components that produce them: data and algorithms.

First, data can be biased. Without the right mechanisms to tackle sample bias (the sample does not represent the population) and stereotype bias (unsuitable differentiation by gender or race, as exemplified in the paragraph above), the AI will make decisions on biased data, which can lead to discriminatory outcomes. Mechanisms to prevent these types of bias could be, from a data quality perspective, matching and aligning data with the target segment, being open about the data used, and setting up review cycles with statistical and legal experts.
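As a minimal illustration of the sample-bias check mentioned above (matching data with the target segment), the sketch below compares each segment's share in a training sample with its share in the target population and flags deviations beyond a tolerance. The segments, shares, and tolerance are invented for illustration.

```python
def sample_bias_report(sample_shares, population_shares, tolerance=0.05):
    """Flag segments whose share in the training sample deviates from
    their share in the target population by more than `tolerance`."""
    return {
        segment: round(sample_shares.get(segment, 0.0) - pop_share, 3)
        for segment, pop_share in population_shares.items()
        if abs(sample_shares.get(segment, 0.0) - pop_share) > tolerance
    }

# Hypothetical shares of age segments: training sample vs. target population
sample     = {"18-30": 0.55, "31-50": 0.38, "51+": 0.07}
population = {"18-30": 0.30, "31-50": 0.40, "51+": 0.30}
print(sample_bias_report(sample, population))
# → {'18-30': 0.25, '51+': -0.23}
```

In this made-up example, younger applicants are heavily over-represented and older applicants under-represented, which is exactly the kind of signal a data-quality review cycle should surface before model training.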



Second, an algorithm should allow stakeholders to track its decision-making mechanism to ensure the elimination of biases and other questionable elements. Such tracking mechanisms are needed because you can't assume that every algorithm has incorporated ethical considerations and design principles.
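One simple way to make a decision mechanism trackable is to have the model return a per-feature breakdown alongside its output, so every automated decision can be traced back to the inputs that drove it. The sketch below shows the idea for a linear premium score; the weights and applicant features are hypothetical, not an actual insurer's model.

```python
def explainable_score(weights, features):
    """Linear score with a per-feature breakdown, so every automated
    decision can be explained in terms of its inputs."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

# Hypothetical premium model: base amount plus weighted risk factors (euros)
weights   = {"base": 1.0, "car_value_k": 0.8, "claims_last_5y": 10.0}
applicant = {"base": 40.0, "car_value_k": 25.0, "claims_last_5y": 1.0}
premium, breakdown = explainable_score(weights, applicant)
print(premium)    # → 70.0
print(breakdown)  # → {'base': 40.0, 'car_value_k': 20.0, 'claims_last_5y': 10.0}
```

With such a breakdown, an insurer like the fictional InsureCar could tell a customer exactly which factors drove a €70 versus a €50 premium; non-linear models need dedicated explanation techniques to deliver the same kind of answer.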

When banks are unable to explain AI-based decisions to their stakeholders, their AI-based services can be considered a black box. But, do black boxes mean that AI and algorithms are the problem? Or is the problem merely a matter of absence of sufficient ethical guidelines for decision-makers and standards around data and algorithms, such as the 2019 Ethics Guidelines for Trustworthy AI from the European Commission? Moreover, how do customers perceive black boxes and transparency?


Is transparency a fearful word?

Besides following guidelines and being compliant with regulations, the roundtable participants stated that economic value can accrue from transparency. Contrasting points of view were voiced, though, on how to leverage transparency to the benefit of stakeholders.

One line of thought, despite acknowledging the need for transparency in financial services, holds that proactively bringing transparency to customers is largely overvalued. Too much attention to transparency will stimulate people’s and society’s fear of AI, which in turn will hinder the adoption of AI-based services.

Will more emphasis on transparency increase the understanding and acceptance of AI-based services, or will it lead to more anxiety?

Another stream of thought seemed less skeptical about the link between transparency and the general fear of AI, stating that the emphasis on transparency will only increase the understanding and acceptance of AI-based services. Training and hiring good data scientists and establishing a proper internal knowledge level on AI will both remove the black box of AI and increase the understanding of AI-based services.

Also, the focus on transparency will help firms to increasingly apply and evangelize AI, which will help to create a new standard on how decisions are being made and explained.

Balance for the better

While black-box models are not desired by customers, regulators, or service providers, customers paradoxically don't desire full transparency all the time either. It's therefore essential for financial services providers to offer their customers the essential insights into how algorithms guide decisions.



This article has centered on the importance of transparency and has shown several ways in which firms can cope with the challenges around the black box of AI.

Seemingly, financial services providers recognize the need for transparency. The discussion during the roundtable, however, has shown that there are many different aspects to be considered while continuously balancing both the advantages and disadvantages of the proposed measures.

Financial services providers remain innovative in how they manage transparency requirements. While acknowledging and supporting the need for transparency, they have, on the one hand, found different balances in meeting transparency and compliance requirements and, on the other hand, maintained an innovative focus to meet changing customer needs.


As decisions become increasingly automated, it's important to strike a balance between eliminating black boxes by meeting transparency requirements and understanding how customers perceive transparency.

As a financial services provider, you need to have a clear understanding of how, where, and when to serve the needs of your stakeholders in the best possible manner when it comes to transparency in AI-based financial services.

By finding the right balance between transparency and customer needs in automated service portfolios, financial services remain innovative and of high quality, while black boxes will once and for all be nothing more than a modern myth.

Axel Haenen

Technology Consultant – Accenture Strategy & Consulting, Financial Services


Julia Jessen

Management Consulting Senior Manager – Financial Services, specialized in Compliance and Regtech
