AI agents can transform financial crime reporting and oversight

Financial crime compliance has become one of the most expensive operational burdens in global finance. Annual spending now exceeds 200 billion US dollars, yet enforcement outcomes remain weak. Less than one percent of illicit flows are seized worldwide, and suspicious transaction reports often fail to trigger meaningful follow-up.


CO-EDP, VisionRI | Updated: 18-09-2025 23:43 IST | Created: 18-09-2025 23:43 IST

A team of researchers from Copenhagen Business School and the University of Copenhagen has unveiled a working model of agentic artificial intelligence that could redefine how financial institutions combat money laundering and financial crime. They argue that their approach slashes costs, improves speed, and maintains transparency in a field long plagued by inefficiency.

Their study, Agentic AI for Financial Crime Compliance, published on arXiv in 2025, examines how an agent-based system can automate onboarding, monitoring, and suspicious transaction reporting while staying aligned with strict regulatory requirements. The authors apply an Action Design Research methodology in partnership with a fintech company seeking licensing under the European Union’s MiCA framework and in dialogue with supervisory stakeholders.

Why traditional compliance systems are failing

Legacy systems depend on rigid rule engines or narrow machine learning models. These approaches generate massive volumes of false positives, add manual workload, and rarely provide the explainability regulators require. The outcome is a compliance function that is slow, costly, and difficult to defend under supervisory scrutiny.

This context sets the stage for what the authors describe as a fundamental redesign. They propose a multi-agent architecture that integrates large language model reasoning, traditional rule checks, and built-in guardrails. Instead of producing endless alerts for human review, the system assembles structured, auditable case files that regulators can inspect directly.

How the agentic AI system works

The core of the design is a set of autonomous agents assigned to specific compliance tasks. Separate agents handle onboarding checks, ongoing monitoring, alert triage, investigations, and the final assembly of suspicious transaction reports. These agents communicate with each other through structured handovers and with humans when escalation is needed.
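The authors' own implementation is not reproduced in the article, but the handover pattern they describe can be sketched in a few lines of Python. Everything below (agent names, the escalation rule, the `Case` fields) is illustrative, not the paper's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """A case file passed between agents via structured handovers."""
    wallet: str
    findings: list = field(default_factory=list)
    escalated: bool = False

def onboarding_agent(case: Case) -> Case:
    # Hypothetical onboarding check; a real agent would query KYC records.
    case.findings.append(("onboarding", "identity verified"))
    return case

def monitoring_agent(case: Case) -> Case:
    # Hypothetical monitoring signal attached to the shared case file.
    case.findings.append(("monitoring", "3 alerts in last 24h"))
    return case

def triage_agent(case: Case) -> Case:
    # Escalate to investigation (or a human) once enough signals accumulate.
    if len(case.findings) >= 2:
        case.escalated = True
    return case

PIPELINE = [onboarding_agent, monitoring_agent, triage_agent]

def run(case: Case) -> Case:
    for agent in PIPELINE:
        case = agent(case)  # each agent enriches the case, then hands it on
    return case
```

The key design idea the sketch captures is that agents exchange a single structured case object rather than free-form messages, which is what makes the final report assemblable and auditable.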

The system is built using OpenAI's Agents SDK in combination with the n8n workflow platform. It relies on modular orchestration so that predictive models can be added later without disrupting explainability. Risk scores at both transaction and wallet levels guide how agents prioritize their work.
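The article does not specify how the two risk levels are combined, so the snippet below is only a plausible sketch of score-driven prioritization, using a max-heap and taking the worst of the two signals as the combined score (an assumption, not the paper's formula):

```python
import heapq

def prioritize(alerts):
    """Yield (alert_id, score) pairs, highest combined risk first.

    `alerts` is an iterable of (alert_id, tx_risk, wallet_risk) tuples
    with risks in [0, 1]. The combination rule is a placeholder.
    """
    heap = []
    for alert_id, tx_risk, wallet_risk in alerts:
        score = max(tx_risk, wallet_risk)  # assumption: worst signal wins
        heapq.heappush(heap, (-score, alert_id))  # negate for a max-heap
    while heap:
        neg_score, alert_id = heapq.heappop(heap)
        yield alert_id, -neg_score
```

In a setup like the one described, triage agents would consume this queue so that the riskiest wallets are investigated before low-risk noise.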

Two additional innovations are central to performance. First, the system keeps complete audit logs, ensuring traceability and accountability. Second, it uses a semantic cache and reinforcement cache to fine-tune thresholds over time, aiming to reduce both false positives and false negatives. The authors emphasize that every output is explainable, aligning the design with model risk management requirements.
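The article does not detail how the audit log is implemented; one common way to get the traceability it describes is a hash-chained, append-only log, sketched below as an assumption rather than the authors' design:

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry hashes its predecessor,
    so any after-the-fact edit breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, agent: str, action: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"agent": agent, "action": action, "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("agent", "action", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```

A log of this shape is what lets a supervisor replay exactly which agent did what, in what order, when reviewing a filed report.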

For evaluation, the team worked with a domain dataset from the OpenSea non-fungible token marketplace. They analyzed more than 816,000 gaming-related transactions across 859 collections, generating over three million alerts. This high-velocity environment provided a rigorous test of scalability and precision. Four design cycles were completed over eight weeks, each involving rapid prototyping, intervention, and assessment.

What the results mean for compliance and regulation

The outcomes suggest dramatic efficiency gains. The agentic AI system can generate complete suspicious transaction reports in under one minute per case. By comparison, industry benchmarks indicate that case assembly and reporting often take nearly two hours of staff time. The authors estimate that, subject to regulatory validation, their model could reduce reporting effort by more than 98 percent.
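The claimed reduction is easy to sanity-check: under one minute per report against roughly two hours of manual case assembly comfortably clears the 98 percent figure.

```python
# Sanity check on the reported speed-up. The manual baseline of
# ~120 minutes is taken from the "nearly two hours" benchmark above.
manual_minutes = 120
agentic_minutes = 1

reduction = 1 - agentic_minutes / manual_minutes
print(f"{reduction:.1%}")  # → 99.2%
```

Even if the agentic run took two minutes rather than one, the reduction would still exceed 98 percent, so the authors' estimate is conservative relative to the stated timings.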

Beyond speed, the system produces case files that are auditable and structured to meet supervisory expectations. Each report integrates rule-based signals, behavior patterns, and contextual checks in a standardized format. This makes them more actionable for financial intelligence units and more defensible during regulatory reviews.
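The article names the three ingredients of each report but not its schema, so the layout below is purely illustrative (field names are assumptions, not the authors' format):

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class SuspiciousTransactionReport:
    """Illustrative case-file layout combining the three signal types
    described in the study: rules, behavior, and context."""
    case_id: str
    rule_signals: list = field(default_factory=list)
    behavior_patterns: list = field(default_factory=list)
    contextual_checks: list = field(default_factory=list)

    def to_json(self) -> str:
        # A standardized serialization is what makes the file directly
        # machine-readable by a financial intelligence unit.
        return json.dumps(asdict(self), indent=2)
```

The point of a fixed structure like this is that every report arrives in the same shape, which is what makes batches of filings comparable and defensible under review.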

The study also distills three design principles for broader application. Compliance systems must embed guardrails and escalation paths to ensure accountability, preserve auditability and explainability at every step, and remain extensible so new models or risk rules can be integrated without compromising oversight.

The authors acknowledge important limitations. The prototype relies mainly on descriptive analytics and rule-based methods rather than advanced predictive modeling. Results are formative, drawn from pilot testing rather than full-scale deployment. Tight coupling in future iterations could also create coordination bottlenecks. Still, the researchers see a clear path for expansion: integrating predictive models under strict governance, testing across multiple jurisdictions, and studying how workforce roles shift when AI handles more of the investigative load.

  • FIRST PUBLISHED IN:
  • Devdiscourse