
Model behavior. Nothing artificial.

How can financial services firms trust complex machine learning and AI models?

NEW TECH CAN HELP FIRMS MODEL THEIR DATA

To tap into the value of data, companies are increasingly turning to machine learning and artificial intelligence (AI).

As hardware, processing power and storage capacities have rocketed into the stratosphere, so has companies’ ability to solve complex, real-world business problems through the application of machine learning/AI techniques and algorithms.

However, in the financial services industry, one of the most data-rich industries in the world, companies have so far made only initial forays into machine learning and AI.

This needs to change, according to a new report from Accenture, “Emerging Trends in the Validation of Machine Learning and Artificial Intelligence Models.”

New approaches and techniques offer much richer applications and use cases for machine learning and AI, including:

  • Risk modeling
  • Portfolio management
  • Algorithmic trading
  • Fraud and misconduct detection
  • Loan/insurance underwriting

Machine learning and artificial intelligence are poised to make a big impact in financial services.



DOWNLOAD THE FULL REPORT [PDF]

WHO DO YOU TRUST?

A key impediment to the adoption of machine learning and AI is the question of how to trust a particular model or algorithm, a point consistently raised with banks by their regulators as well as by their own internal control functions.



Explaining the conceptual soundness and accuracy of such techniques is a significant challenge, not only because the tools are so new, but also because of the inevitable “black box” nature of some of the more powerful machine learning and AI approaches.

Two related but distinct challenges are involved here:


  • Challenges in model validation:
    Banks will remain subject to scrutiny of how statistical and analytical tools are developed and executed, whether under industry guidelines or their own standards and protocols.

  • Challenges in model risk management:
    The challenges of using machine learning and AI models extend to the larger discourse on model risk management. Management of this risk can be viewed as a matrix crossing the sources of model risk (data, estimation and usage) with the measurement activities applied to each (identification, quantification and mitigation).
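The sources-by-measurements framing above can be sketched as a simple grid. This is an illustrative structure only, assuming nothing beyond the two axis names given in the text; how each cell is populated would be up to the firm.

```python
# Illustrative sketch (not from the report): model risk viewed as a grid
# crossing each source of risk with each measurement activity.
SOURCES = ("data", "estimation", "usage")
MEASUREMENTS = ("identification", "quantification", "mitigation")

# Each cell could hold the controls a firm applies for that combination,
# e.g. risk_matrix[("data", "identification")] -> data-quality checks.
risk_matrix = {(s, m): [] for s in SOURCES for m in MEASUREMENTS}

print(len(risk_matrix))  # 3 sources x 3 measurement activities = 9 cells
```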

WHAT’S DIFFERENT?

Compared to traditional model validation, machine learning and artificial intelligence will differ in the areas of data, processes, methodology and governance:


Data


  • Greater volume and less structure to data imply greater computational needs for integrity testing

Processes


  • More complexity and computational overhead in model execution
  • More complex algorithms and model development processes are harder to document

Methodology


  • Measures of fit or discrimination may have different interpretations than in traditional techniques
  • Greater emphasis on out-of-sample performance and stability metrics

Governance


  • Challenging to design policies and knowledgeable governance of more complex development, monitoring and use

Addressing these challenges with new validation techniques can help raise the level of confidence in model risk management. It can also raise the confidence of regulators in the accuracy and appropriateness of emerging machine learning and AI tools in areas such as credit risk and regulatory capital management, stress testing and trade surveillance.
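To make the methodology point concrete, here is a minimal sketch of two metrics commonly used when emphasizing out-of-sample performance and stability: a rank-based AUC for discrimination and the Population Stability Index (PSI) for drift. The functions, toy data and the 0.25 PSI rule of thumb are illustrative assumptions, not taken from the report.

```python
import math

def auc(labels, scores):
    """Rank-based AUC: probability that a random positive case outscores
    a random negative one (tie handling omitted for brevity)."""
    ranked = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(ranked) if y == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

def psi(expected_scores, actual_scores, edges):
    """Population Stability Index across score bins; values above roughly
    0.25 are often read as a sign of population drift (rule of thumb)."""
    def shares(xs):
        counts = [sum(lo <= x < hi for x in xs) for lo, hi in zip(edges, edges[1:])]
        total = sum(counts)
        return [max(c / total, 1e-6) for c in counts]  # floor avoids log(0)
    e, a = shares(expected_scores), shares(actual_scores)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative out-of-sample check on toy holdout data.
holdout_labels = [0, 0, 1, 0, 1, 1]
holdout_scores = [0.2, 0.3, 0.6, 0.4, 0.8, 0.7]
print(round(auc(holdout_labels, holdout_scores), 3))  # 1.0: perfect ranking here

dev_scores = [0.1, 0.2, 0.3, 0.6, 0.7, 0.8]
print(round(psi(dev_scores, dev_scores, edges=[0.0, 0.5, 1.0]), 6))  # 0.0: no drift
```

A validation team would typically track these metrics on a holdout sample over time, alongside the fit measures used in development.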

For more, read our report, “Emerging Trends in the Validation of Machine Learning and Artificial Intelligence Models.”



CONTACT THE AUTHORS

Samantha Regan
Global Lead, Regulatory
Remediation & Compliance Transformation


Luther Klein
North America Lead,
Finance & Risk Analytics


Michael Jacobs Jr.
Principal Director
Finance & Risk


Sitwat Kazmi
Senior Manager
Finance & Risk

