The rapid adoption of artificial intelligence (AI) technologies poses a growing threat to financial stability — policymakers need to do more to ensure that these risks are properly captured in regulatory frameworks, says the Financial Stability Board (FSB).
In a new report, the global policy group revisited the implications of AI adoption for financial stability, citing the growing use of AI since the FSB first examined the issue in 2017 — and the expanding capabilities of generative AI (GenAI).
“While many financial institutions appear to be taking a cautious approach to using GenAI, interest remains high and the technology’s accessibility could facilitate more rapid integration in financial services,” the FSB said, adding that use by regulators has grown too.
“The fast pace of innovation and AI integration in financial services, along with limited data on AI usage, poses challenges for monitoring vulnerabilities and potential financial stability implications,” it said.
The report warned that, while the technology may offer benefits, it could also amplify existing financial sector vulnerabilities, including growing reliance on external service providers, market correlations, cyber risk, and risks around models, data quality and governance, all of which could heighten systemic risk.
For instance, the report said, “the widespread use of common AI models and data sources could lead to increased correlations in trading, lending and pricing. This could amplify market stress, exacerbate liquidity crunches and increase asset price vulnerabilities.”
The report also pointed to the sector's reliance on a small group of suppliers for specialized hardware, cloud services and models, which could expose both financial firms and regulators to operational vulnerabilities, including intensified cyber threats. And the opacity of AI models and their data poses the risk of damaging, inaccurate outputs, commonly known as "hallucinations."
Additionally, GenAI could be deployed to commit financial fraud and to spread disinformation in financial markets, the FSB said.
“Misaligned AI systems that are not calibrated to operate within legal, regulatory and ethical boundaries can also engage in behaviour that harms financial stability,” it said.
“And from a longer-term perspective, AI uptake could drive changes in market structure, macroeconomic conditions and energy use that may have implications for financial markets and institutions,” it added.
While the financial sector’s existing regulatory frameworks already aim to address these kinds of threats, the FSB report said that more work is needed to ensure those frameworks are robust enough to cope with the intensification of these risks that may accompany wider AI adoption.
As a result, the report called for regulators and standard setters to assess the adequacy of existing frameworks for addressing AI-related vulnerabilities at both the domestic and international level. It also called for both groups to facilitate oversight of AI adoption by closing data and information gaps in this area, and to enhance their supervisory capabilities through cross-border cooperation, information sharing and the use of AI-powered tools.
“Financial authorities face two key challenges for effective vulnerabilities surveillance: the speed of AI change and the lack of data on AI usage in the financial sector,” the report said.
“These developments are not taking place in isolation but rather reinforce existing trends towards greater automaticity and speed in the financial system. They underscore the necessity for authorities to monitor AI developments and related innovations closely and holistically,” it concluded.