Global regulators trail banks in AI as Mythos raises oversight concerns, report finds

Published by Global Banking & Finance Review

Posted on April 28, 2026


By Phoebe Seers

Regulators lag behind banks in AI adoption and oversight

LONDON, April 28 (Reuters) - The ability of central banks and financial regulators to monitor and combat the risks posed by powerful artificial intelligence models such as Anthropic’s Mythos has been called into question after a survey found authorities significantly lag financial firms in AI adoption and lack data on emerging harms.

Survey findings highlight adoption gap

Financial institutions are adopting AI at more than twice the rate of their supervisors, with just two in 10 regulators reporting "advanced AI adoption", research published on Tuesday by the Cambridge Centre for Alternative Finance showed. Only 24% of authorities surveyed collect data on industry AI adoption, while 43% have no plans to start within the next two years, the report found.

Data blind spots and oversight challenges

“This empirical blind spot may undermine the prevailing optimism [on AI]. Authorities cannot successfully harness or oversee AI if they are navigating its adoption and risks without hard data,” the report said.

Scope and methodology of the research

The research, prepared alongside the Bank for International Settlements, the International Monetary Fund and other multilateral institutions, involved surveying 350 traditional financial institutions and fintechs, more than 140 AI vendors, and 130 central banks and financial authorities spanning 151 countries. 

AI risks and regulatory response

Regulators and global standard‑setting bodies have stepped up warnings about the risks posed by the rollout of AI across the financial sector. Earlier in April, Anthropic released Mythos, viewed by cybersecurity experts as posing significant challenges to the banking industry and its legacy technology systems.

Legacy systems and emerging threats

Regulators across the globe have engaged with banks over how prepared their legacy systems are for emerging frontier AI models. 

Mythos and next-generation AI concerns

The report highlights Mythos as an example of next‑generation systems that could soon be capable of exploiting software vulnerabilities at scale, potentially limiting the effectiveness of existing human governance and oversight mechanisms.

“Regulators generally maintain the principle that financial firms should remain accountable for harms, including cyberattacks, whether AI is built in-house or supplied by third parties, but that position becomes harder to apply in the context of more autonomous systems that are provided and managed by third-party vendors,” the authors wrote.

Call for agentic AI adoption by regulators

The report says regulators must themselves adopt agentic AI, systems capable of taking actions without human oversight, to match the capabilities of the firms they supervise.

Harish Natarajan, practice manager for competitiveness and innovation at the World Bank, said at an event to launch the report that authorities in emerging market economies often lack the data and skills needed to embed AI.

Concentration risk in AI providers

The report also flagged concerns about the financial sector's growing dependence on a handful of powerful AI providers. 

Reliance on major AI vendors

It found that 69% of all respondents rely on OpenAI, rising to 76% among industry respondents, creating what it described as a “notable critical third-party risk consideration” that could expose the global financial system to resilience vulnerabilities, pricing shocks or supply disruptions.

Market share of leading AI models

At the time of the survey, conducted between October 2025 and January 2026, just over half of respondents used Google’s models and a little more than a third used Anthropic’s.

(Reporting by Phoebe Seers; Editing by Tommy Reggiori Wilkes/Keith Weir)

Key Takeaways

  • Financial firms are adopting AI at more than twice the rate of regulators; only 20% of regulators report advanced AI use, and 43% have no plans to collect adoption data in the next two years.
  • Anthropic’s Mythos is viewed by cybersecurity experts as an example of next-generation AI that could soon exploit software vulnerabilities at scale, alarming banks and prompting regulatory engagement over legacy systems.
  • Regulators need to modernize: traditional oversight methods fall short, and the report calls for regulators themselves to deploy agentic AI to match the capabilities of models like Mythos.

Frequently Asked Questions

How far behind are regulators in AI adoption compared to banks?
Financial institutions are adopting AI at more than twice the rate of their supervisors, with only 20% of regulators reporting advanced AI adoption.
What concerns does the Mythos AI model raise for financial regulators?
Mythos is seen as posing significant challenges to banking systems, potentially exploiting software vulnerabilities and testing the limits of current oversight.
Why is regulator data collection on AI adoption a concern?
Only 24% of authorities collect data on industry AI use, and 43% have no plans to start within two years, creating a knowledge gap about emerging risks.
How are traditional oversight methods challenged by AI advances?
More autonomous, third-party AI systems may make traditional governance less effective, requiring regulators to adopt agentic AI tools to keep up.
What is the current regulatory principle regarding financial firm accountability?
Regulators maintain that financial firms are accountable for AI-related harms, but enforcing this is harder with highly autonomous, third-party AI systems.
