Speaking at the Financial Stability Oversight Council Artificial Intelligence Series Roundtable on Cybersecurity and Risk Management in Washington, D.C., Vice Chair for Supervision Michelle W. Bowman said that AI will be a transformative technology for the financial sector.
“AI has become an integrated part of our daily experience. Financial institutions are developing their own applications and implementing vendor-assisted tools. Banks of all sizes benefit from its greater efficiency, speed, and content generation,” she said.
“Whether used in targeted modeling or enterprise-wide tools, AI will become a force multiplier for the financial system, and in the broader U.S. economy.”
However, she also discussed the “dynamic nature” of AI, with models like Anthropic’s Claude presenting both risks and benefits.
“The improved ability to identify cyber vulnerabilities comes with the potential to address these weaknesses to enhance cybersecurity. And of course, we have already seen that AI has the potential to improve efficiency and effectiveness, particularly within the financial system.”
Bowman shifted to discussing the actions of the Federal Reserve, which has reportedly been working with banks and monitoring the use of AI as the technology continues to develop and financial institutions look to implement it.
“Over that time, our approach has evolved to increase and enhance our understanding of its application and potential. An important part of our job as supervisors is to ensure that banks are aware of and attentive to the risks and challenges inherent in its use, so it can be deployed responsibly and effectively. And we need to ensure that there is a path for innovation, which includes the use of AI.”
“To mitigate and manage risk, we must understand the specifics regarding the use case for its deployment. Will it be used for material tasks? Is it broadly accessible to employees or limited? And does its use directly affect consumers and customers, as with credit determinations?”
The key focus for supervisors now is to identify and mitigate cases in which AI could create financial risk.
“The rapid adoption and evolution of its capability reinforces the need for adaptable supervisory guidance and expectations. How should we consider third-party risk-management expectations for vendor-provided AI tools or partnerships? What aspects of model risk management should apply to AI? AI presents clear risks but also has the potential to offer tremendous benefits for cybersecurity. How should regulators think about this balance of risks?” Bowman said.
“Our approach should support banks in implementing AI tools safely, effectively, and efficiently. Today, banks are relying on existing risk-management frameworks to guide their use of AI. While these supervisory tools are intended to support banks in applying sound governance and risk management, we should assess whether our supervisory guidance is fit for the future.”
Bowman also commented on international engagement and how the developing technology can impact financial institutions globally.
“One aspect of our regulatory work is ensuring consistency and a level playing field for our internationally active institutions. In this regard, in my role as the chair of the Financial Stability Board’s Standing Committee on Supervisory and Regulatory Cooperation, we are working together to address financial stability issues related to supervisory and regulatory policies,” Bowman added.