Beyond the Hype: How Supervised Autonomy is Reshaping the Fight Against Financial Crime
Public narratives increasingly frame "Agentic AI" as a shift toward fully autonomous systems capable of making independent, high-stakes decisions. However, according to practitioners such as Feedzai Chief Product Officer Pedro Barata, this framing oversimplifies the operational realities of fraud prevention in regulated financial environments. The industry is instead moving toward a model of "supervised autonomy": a hybrid in which AI accelerates workflows but human experts retain ultimate responsibility for outcomes.
Background and Context
Financial crime has evolved into an industrialised ecosystem. Organised fraud networks now leverage automation, synthetic identities, and AI-assisted manipulation to scale deception across channels. As real-time payment rails and digital-first customer journeys compress response windows, fraud prevention has moved from a back-office function to a real-time operational necessity.
The central challenge for institutions is no longer merely improving model accuracy, but building resilience. Fraud actors exploit the temporal mismatch between the speed of instant money transfers and the latency of traditional, fragmented risk intelligence. Funds move instantly, but the insight required to stop them often arrives later, assembled incrementally across separate monitoring engines and historical data stores.
Key Figures and Entities
Pedro Barata, Chief Product Officer at Feedzai, argues that the practical application of AI in banking is not about unrestricted autonomy, but about solving coordination problems. He suggests that the industry is shifting toward "supervised autonomy," where AI agents function as integrators rather than replacements for human judgment. According to Barata’s analysis, the goal is to use AI to bridge gaps between fragmented systems that were never designed for seamless integration.
Operational and Technical Mechanisms
The operational reality of anti-fraud teams involves navigating fragmented information: transaction monitoring outputs, behavioural signals, and device intelligence that reside in separate systems. Agentic AI delivers value by embedding into these infrastructures to support continuous context assembly. Instead of exposing analysts to raw event streams, these agents organise signals dynamically into structured summaries, as the sketch below illustrates.
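To make "continuous context assembly" concrete, the following sketch shows one way such an aggregation layer could be structured. It is a minimal illustration, not Feedzai's implementation: the Signal and CaseSummary structures, the field names, and the capped-sum scoring are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical signal record emitted by one of several monitoring systems.
@dataclass
class Signal:
    source: str          # e.g. "transaction_monitoring", "device_intel"
    subject_id: str      # customer or account the signal refers to
    risk_weight: float   # normalised 0..1 contribution to overall risk
    detail: dict[str, Any] = field(default_factory=dict)

@dataclass
class CaseSummary:
    subject_id: str
    signals: list[Signal]

    @property
    def composite_risk(self) -> float:
        # Naive capped sum for illustration; a production system would
        # use calibrated models rather than simple addition.
        return min(1.0, sum(s.risk_weight for s in self.signals))

def assemble_context(subject_id: str, streams: list[list[Signal]]) -> CaseSummary:
    """Pull one subject's signals out of several raw event streams and
    organise them into a single structured summary for an analyst."""
    relevant = [s for stream in streams for s in stream if s.subject_id == subject_id]
    relevant.sort(key=lambda s: s.risk_weight, reverse=True)
    return CaseSummary(subject_id=subject_id, signals=relevant)
```

The point of the sketch is the shape of the output: the analyst receives one ordered, subject-level summary rather than three unrelated event streams.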
In this architecture, "Explainable AI" is a functional prerequisite rather than a theoretical concept. When risk signals are surfaced in real time, the rationale behind them must be accessible, reproducible, and audit-ready; without this transparency, acceleration merely amplifies opacity. The approach is described as "additive, not disruptive": it augments mature environments by layering intelligence over existing case management workflows without altering core infrastructure.
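One minimal way to make a real-time risk rationale "accessible, reproducible, and audit-ready" is to attach a structured, self-describing record to every decision at the moment it is made. The sketch below is an assumption about what such a record could contain (reason codes, model version, timestamp, content hash); it illustrates the shape of the requirement, not any vendor's schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_decision_record(subject_id: str, score: float,
                          reason_codes: list[str], model_version: str) -> dict:
    """Create an audit-ready rationale for a risk decision: every field an
    auditor would need to reproduce and review the outcome later."""
    record = {
        "subject_id": subject_id,
        "score": score,
        "reason_codes": reason_codes,    # human-readable risk drivers
        "model_version": model_version,  # pins the exact model used
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash lets reviewers detect any after-the-fact tampering.
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    return record
```

Pinning the model version and hashing the record are what make the rationale reproducible and tamper-evident later, which is the practical meaning of "audit-ready" here.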
Governance, Oversight, and Customer Trust
The expansion of digital payments and embedded finance has increased the exposure surface for fraud, requiring institutions to balance speed with oversight. In this adversarial environment, governance is treated as a performance discipline. Supervised autonomy depends on an end-to-end risk lifecycle where every action is logged and reviewable, directly contributing to operational durability.
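The requirement that every action be "logged and reviewable" can be sketched as an append-only trail in which each entry chains to its predecessor by hash, so a reviewer can replay the full lifecycle and detect gaps. The class below is illustrative only; the actor labels, event names, and chaining scheme are assumptions, not a description of any production system.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log of agent and human actions; each entry chains to
    the previous one by hash, making the sequence tamper-evident."""

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def log(self, actor: str, action: str, detail: dict) -> dict:
        prev_hash = self._entries[-1]["hash"] if self._entries else "genesis"
        entry = {
            "actor": actor,       # "agent" or a named human reviewer
            "action": action,     # e.g. "alert_raised", "case_closed"
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        body = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(body).hexdigest()
        self._entries.append(entry)
        return entry

    def entries(self) -> list[dict]:
        return list(self._entries)  # defensive copy for reviewers
```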
A critical aspect of this model is the reduction of false positives. In digital financial services, trust erodes not only when fraud succeeds but also when legitimate customers are incorrectly flagged. By clarifying risk drivers and helping prioritise alerts, supervised autonomy aims to reduce unnecessary customer friction while maintaining detection standards, ensuring that operational models match the velocity of digital money.
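As a toy illustration of clarifying risk drivers while prioritising alerts, the snippet below ranks alerts by a composite score and surfaces each alert's strongest drivers alongside it, so an analyst sees why an alert ranks where it does. The driver names, scoring scheme, and threshold are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    drivers: dict[str, float]  # risk driver -> contribution, e.g. {"new_device": 0.4}

    @property
    def score(self) -> float:
        return sum(self.drivers.values())

    def top_drivers(self, n: int = 3) -> list[str]:
        # Surface the strongest drivers so the analyst sees *why*.
        return sorted(self.drivers, key=self.drivers.get, reverse=True)[:n]

def triage(alerts: list[Alert], review_threshold: float = 0.5) -> list[Alert]:
    """Queue only alerts above an assumed threshold, highest risk first,
    so low-signal alerts stop reaching analysts at all."""
    queue = [a for a in alerts if a.score >= review_threshold]
    return sorted(queue, key=lambda a: a.score, reverse=True)
```

In this framing, alerts below the threshold never enter a review queue, which is where the reduction in unnecessary customer friction would come from.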
Sources
This report draws on industry analysis and insights provided by Pedro Barata, Chief Product Officer at Feedzai, regarding the application of Agentic AI and supervised autonomy in financial crime prevention.