Artificial Intelligence May Change How Financial Crises Emerge, ECB Study Finds

A new ECB-led study finds that artificial intelligence could reshape how financial crises emerge, with different AI systems behaving in dramatically different ways during market stress. While some AI models tend to panic and trigger excessive withdrawals, others struggle to coordinate, making financial markets less predictable.


CoE-EDP, VisionRI | Updated: 10-05-2026 10:00 IST | Created: 10-05-2026 10:00 IST

Artificial intelligence is rapidly becoming part of modern finance, from hedge funds and trading firms to retail investment apps used by ordinary investors. But a new study by researchers from the European Central Bank, Stanford University, Deutsche Bundesbank, Stanford HAI, the Centre for Economic Policy Research, and the University of Naples Federico II warns that AI could also create new threats to financial stability.

The study examines how AI systems behave during moments of financial stress. The researchers argue that the future stability of markets may depend not only on economic conditions, but also on the specific type of AI making investment decisions.

Their central finding is simple but striking: different AI systems react to financial uncertainty in completely different ways, and some may actually increase the risk of panic-driven market behaviour.

Two Types of AI, Two Very Different Behaviours

The researchers tested two major kinds of AI investors in a simulated financial market.

The first type was Q-learning algorithms, a form of reinforcement learning already used widely in algorithmic trading. These systems learn through trial and error, adjusting their behaviour based on rewards and losses over time.
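As a rough illustration of that trial-and-error loop, the sketch below implements a minimal Q-learning rule for a single stay-or-redeem choice. The action labels, learning rate, and exploration rate are illustrative assumptions, not the study's actual configuration.

```python
import random

# Minimal Q-learning sketch (illustrative, not the study's implementation).
# The agent keeps one value estimate per action and nudges the chosen
# action's estimate toward each realized payoff.
ALPHA = 0.1      # learning rate: how far a new outcome moves the estimate
EPSILON = 0.05   # exploration rate: how often a random action is tried

q_values = {"stay": 0.0, "redeem": 0.0}

def choose_action():
    # Mostly exploit the current best estimate, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)

def update(action, realized_payoff):
    # Trial-and-error update: shift the estimate toward what just happened.
    q_values[action] += ALPHA * (realized_payoff - q_values[action])
```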

The second type was large language models, or LLMs, similar to the AI systems powering advanced chatbots and reasoning tools today. Unlike Q-learning systems, LLMs do not slowly learn through repeated experience. Instead, they analyse situations and make decisions using reasoning and context.

The researchers placed both AI systems inside a classic “mutual fund redemption” model. In this setup, investors must decide whether to keep their money invested or withdraw it early. Their decisions depend not only on economic fundamentals, but also on what they believe other investors will do.

The framework mirrors the logic behind bank runs, market panics, and even modern stablecoin collapses.
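That coordination logic can be captured in a few lines. In the hypothetical sketch below, the payoff from staying rises with the economic fundamental and falls as more of the other investors redeem; every parameter name and value is an illustrative assumption, not the paper's calibration.

```python
def stay_payoff(fundamental, redemption_share,
                base_return=1.1, run_cost=0.6):
    # Hypothetical payoff from staying invested: better fundamentals help,
    # but heavy redemptions by others erode the value of remaining.
    return base_return * fundamental - run_cost * redemption_share

def redeem_payoff(early_exit_value=1.0):
    # Redeeming early locks in a fixed value regardless of the crowd.
    return early_exit_value

# Strong fundamentals and a calm crowd favour staying (1.04 vs 1.0);
# weak fundamentals plus a rush for the exit favour redeeming (0.34 vs 1.0).
print(stay_payoff(1.0, 0.1), redeem_payoff())
print(stay_payoff(0.8, 0.9), redeem_payoff())
```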

Why Some AI Systems Become Overly Fearful

The experiments revealed that Q-learning systems were especially vulnerable to panic behaviour.

When the economy looked weak, the AI investors redeemed their money, which matched standard economic theory. But once default risk was introduced, the algorithms began overreacting to bad experiences.

Even rare losses pushed the systems toward excessive caution. Over time, they learned to redeem investments too often, even when the underlying economic conditions remained healthy.

The researchers describe this as the “hot stove effect.” Just as someone avoids touching any stove after being burned once, the algorithms became overly fearful after experiencing losses. Because they focused heavily on actual outcomes rather than long-term expected returns, isolated bad events had a lasting impact on future decisions.
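A small simulation, again with made-up numbers, shows how this plays out: staying invested has the higher expected payoff, yet an agent that learns only from realized outcomes repeatedly swings to redemption after each rare loss.

```python
import random

# Hot-stove illustration with hypothetical numbers: staying pays 1.1
# except for a rare total loss (5% chance), so its expected payoff
# (0.95 * 1.1 = 1.045) beats the safe redemption payoff of 1.0.
random.seed(0)
ALPHA, EPSILON = 0.1, 0.05
q = {"stay": 1.0, "redeem": 1.0}
stays = 0

for _ in range(10_000):
    if random.random() < EPSILON:
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    payoff = (0.0 if random.random() < 0.05 else 1.1) if action == "stay" else 1.0
    q[action] += ALPHA * (payoff - q[action])
    stays += action == "stay"

# One bad draw drags the "stay" estimate below the safe value, and the
# agent then redeems almost exclusively until rare exploration slowly
# rehabilitates the estimate, so it stays far less often than it should.
print(q, stays / 10_000)
```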

This created a dangerous feedback loop. As more AI investors redeemed early, staying invested became less attractive for others, leading to even more withdrawals and greater instability.

Large Language Models Avoid Panic but Create Another Problem

The large language models behaved differently. Because they reasoned about expected outcomes rather than learning from repeated punishment, they were much less affected by default risk.

However, the LLMs struggled with coordination.

When the market situation allowed for multiple possible outcomes, the AI investors formed different beliefs about what other investors would do. Some expected calm behaviour and stayed invested. Others anticipated panic and redeemed early.

As a result, the market became less predictable.
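Reusing the illustrative payoffs from the earlier sketch, a simple belief-driven decision rule shows how agents that reason about the crowd rather than learn from punishment can split: where both calm and panic are self-fulfilling, the choice hinges entirely on what each agent expects the others to do.

```python
def best_response(fundamental, believed_redemption_share):
    # Stand-in for an LLM-style agent reasoning about the crowd, reusing
    # the hypothetical payoff numbers from above (not the study's).
    stay_value = 1.1 * fundamental - 0.6 * believed_redemption_share
    return "stay" if stay_value > 1.0 else "redeem"

# Same information, different beliefs about others, opposite decisions:
print(best_response(1.0, believed_redemption_share=0.05))  # stay
print(best_response(1.0, believed_redemption_share=0.80))  # redeem
```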

One of the study’s most innovative features was its analysis of the LLMs’ internal reasoning. The researchers examined the chain-of-thought explanations generated by the AI systems and found that the models were genuinely reasoning through the problem rather than simply repeating memorized answers.

Some focused on payoff comparisons, others relied on worst-case thinking, and some applied formal strategic reasoning. These differences in reasoning led to different decisions, even when the AI systems received the same information.

Regulators May Need New Rules for the AI Era

The researchers argue that these findings carry major implications for regulators and policymakers.

Current financial supervision mainly focuses on institutions and market products, not on the architecture of the AI systems operating inside them. But the study suggests that AI design itself may become a source of systemic risk.

The authors call for regulators to collect more information about the types of AI being used across financial markets and to develop safeguards capable of handling machine-driven behaviour at high speed.

They also warn about the rise of what they describe as “vibe investing,” where retail investors rely on AI tools without understanding how those systems make decisions.

Ultimately, the study challenges the assumption that artificial intelligence will automatically make financial markets more rational and stable. Some AI systems may panic too easily, while others may fail to coordinate altogether.

As AI becomes more deeply embedded in global finance, understanding how machines think could become just as important as understanding the markets themselves.

First published in: Devdiscourse