Global regulators set new standards for managing AI model risk in finance

Wednesday, October 22, 2025
Andy Frepp, Interim President | Moody's Analytics

The financial services industry is experiencing significant changes as advanced artificial intelligence (AI) and machine learning (ML) models become integral to key operations such as credit underwriting and fraud detection. This increased reliance on AI/ML has heightened model risk, prompting global regulators to strengthen their oversight of model risk management (MRM).

Recent guidance from organizations like the Financial Stability Institute (FSI) of the Bank for International Settlements and Canada's Office of the Superintendent of Financial Institutions (OSFI) signals a move toward more explicit and comprehensive regulatory frameworks.

OSFI's finalized Guideline E-23 expands the definition of a "model" to include any quantitative tool or algorithm that uses data to generate an output. This now explicitly covers AI-powered systems, including black-box models previously subject to less stringent controls. The guideline, effective May 2027, requires financial institutions to manage model risk across their entire organization using a principles-based approach that is proportional to risk level. It also addresses specific risks associated with AI/ML models, such as explainability, fairness, and bias.
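To make the proportionality idea concrete, here is a minimal sketch of what a risk-tiered entry in an enterprise-wide model inventory might look like. It is an illustration only, not a structure prescribed by E-23: the field names, tiers, and revalidation cadences are assumptions chosen for the example.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Proportionality: control intensity scales with assessed risk.
# Here, higher-tier models are revalidated more often (hypothetical cadence).
REVALIDATION_MONTHS = {RiskTier.LOW: 36, RiskTier.MEDIUM: 24, RiskTier.HIGH: 12}

@dataclass
class ModelRecord:
    """One entry in an enterprise-wide model inventory."""
    model_id: str
    purpose: str          # e.g., "retail credit underwriting"
    uses_ml: bool         # flags AI/ML models for added scrutiny
    risk_tier: RiskTier
    last_validated: date

    def revalidation_due(self, today: date) -> bool:
        months_elapsed = ((today.year - self.last_validated.year) * 12
                          + today.month - self.last_validated.month)
        return months_elapsed >= REVALIDATION_MONTHS[self.risk_tier]

inventory = [
    ModelRecord("scorecard-v4", "retail credit underwriting", True,
                RiskTier.HIGH, date(2024, 6, 1)),
    ModelRecord("branch-staffing", "workforce planning", False,
                RiskTier.LOW, date(2023, 1, 15)),
]
overdue = [m.model_id for m in inventory if m.revalidation_due(date(2025, 10, 22))]
print(overdue)  # ['scorecard-v4']
```

Note that under the expanded definition, even the simple non-ML planning tool belongs in the inventory; the tiering, not the technology, determines how heavy the controls are.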

A recent FSI paper discusses the challenge posed by the lack of transparency in many advanced AI models. The paper notes: "Limited explainability severely hinders the management of core model risks," making it difficult for firms to identify bias in training data, monitor for performance degradation over time ("model drift"), and ensure accountability through transparent governance structures.
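As an illustration of what monitoring for model drift can look like in practice, the sketch below computes a Population Stability Index (PSI), a metric widely used in credit modeling to compare a model's score distribution at validation time against what it sees in production. The data, thresholds, and rule of thumb here are illustrative assumptions, not figures from the FSI paper.

```python
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference score distribution
    (e.g., at validation) and the distribution seen in production."""
    # Bin edges are fixed from the reference distribution's quantiles.
    edges = np.percentile(reference, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range scores
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    cur_frac = np.histogram(current, edges)[0] / len(current)
    # Floor the fractions so empty bins do not blow up the logarithm.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)    # scores when the model was validated
recent = rng.normal(585, 55, 10_000)      # scores observed in production
# A common rule of thumb treats PSI above 0.25 as material drift.
print(f"PSI = {psi(baseline, recent):.3f}")
```

Drift metrics like this address only the monitoring problem; the explainability and accountability gaps the paper describes require governance, not just statistics.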

The FSI paper further warns of systemic risks tied to the growing use of third-party AI service providers: overreliance on a small number of vendors could create concentration risk within the financial system. The report also emphasizes maintaining human oversight, through so-called "Human-in-Control" frameworks, to mitigate potential automated harm.
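The FSI paper does not prescribe a specific design, but a "Human-in-Control" safeguard is often implemented as routing logic in front of the model's output. The sketch below shows one hypothetical pattern, with invented thresholds: only confident, favorable cases are automated end to end, and adverse outcomes always go to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str      # "approve" or "refer_to_human"
    rationale: str

def route(score: float, confidence: float, min_confidence: float = 0.90) -> Decision:
    """Gate the model's output so a person stays in control:
    the model may auto-approve, but it may never auto-decline."""
    if confidence < min_confidence:
        return Decision("refer_to_human", "model confidence below threshold")
    if score >= 0.5:
        return Decision("approve", f"score {score:.2f} cleared the cutoff")
    # In this design, adverse outcomes are never fully automated.
    return Decision("refer_to_human", "adverse outcome requires human review")

print(route(0.72, 0.95).outcome)  # approve
print(route(0.31, 0.95).outcome)  # refer_to_human
```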

Other regulatory bodies are taking similar steps. For example, the Prudential Regulation Authority in the UK has introduced its own MRM principles, while the Reserve Bank of India has proposed new guidelines on model risk in credit. In the United States, voluntary frameworks published by the National Institute of Standards and Technology have become widely adopted industry benchmarks for responsible AI practices.

Regulators stress that this shift is not just about compliance but represents a fundamental change in how financial institutions operate amid rapid technological advancement. As noted in current guidance: "Success will hinge not on model prohibition, but on a controlled approach that integrates regulatory guidance." Firms are encouraged to balance high-performing AI with interpretability safeguards and adopt transparent governance mechanisms that allow consumers recourse when affected by automated decisions.
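One simple way to give consumers recourse is to attach reason codes to every automated decision. For an interpretable model such as a logistic scorecard, these can be derived directly from feature contributions. The sketch below illustrates the idea; the coefficients, feature names, and population averages are all invented for the example.

```python
import numpy as np

# Hypothetical fitted logistic scorecard: one coefficient per feature.
FEATURES = ["utilization", "delinquencies", "age_of_file", "inquiries"]
COEF = np.array([-2.1, -0.8, 0.6, -0.4])
MEANS = np.array([0.30, 0.2, 7.0, 1.0])   # assumed population averages

def reason_codes(applicant: np.ndarray, top_k: int = 2) -> list[str]:
    """Rank the features that pulled this applicant's score furthest
    below the population average; report the worst offenders."""
    contributions = COEF * (applicant - MEANS)
    order = np.argsort(contributions)      # most negative first
    return [FEATURES[i] for i in order[:top_k] if contributions[i] < 0]

applicant = np.array([0.85, 1.0, 2.0, 4.0])
print(reason_codes(applicant))  # ['age_of_file', 'inquiries']
```

For genuinely opaque models, contribution analysis of this kind requires post-hoc explanation tools, which is precisely where the interpretability safeguards regulators describe come into play.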

By adopting these measures, including those outlined in OSFI's Guideline E-23 and the insights from recent FSI papers, financial institutions can better manage risks while supporting innovation and maintaining public trust.

###
