AI deepfake scams industrialise fraud, cost UK billions

by CBIA Team

CBIA thanks Alessia Lorenzi for the photo

Artificial intelligence is transforming online fraud into an industrial-scale operation, with deepfake technology enabling criminals to impersonate trusted figures at unprecedented speed and low cost. The warning comes from Monica Eaton, Founder and CEO of Chargebacks911, as UK consumers lose an estimated £9.4 billion annually to scams, losses that expose vulnerabilities in digital payment systems and verification processes.

Background and Context

Deepfake technology has evolved beyond celebrity hoaxes and political misinformation into a sophisticated tool for financial fraud. Synthetic media now powers impersonation attacks targeting consumers, merchants, and financial institutions through convincing audio and video forgeries. These scams typically begin with messages or calls appearing to originate from trusted sources, often featuring fabricated endorsements or counterfeit customer service interactions designed to manipulate victims into transferring funds or revealing sensitive information.

The threat landscape has shifted dramatically as attackers leverage AI to generate thousands of personalised fraudulent messages in minutes. According to industry assessments, this automation of deception has fundamentally altered the economics of fraud, making large-scale targeted campaigns viable even for relatively small criminal operations. The result is a flood of sophisticated scams that traditional detection systems struggle to identify and intercept.

Key Figures and Entities

Monica Eaton, Founder and CEO of Chargebacks911, has emerged as a prominent voice warning about the industrialisation of AI-driven fraud. Her company specialises in chargeback remediation and first-party fraud prevention, working with merchants across the payments sector to manage disputes and reduce losses. Through its subsidiary Fi911, the firm also provides back-office automation tools for financial institutions, including DisputeLab, a product designed to manage chargeback workflows for acquirers.

Eaton argues that current fraud controls are fundamentally mismatched to the speed and scale of AI-driven impersonation attacks. "When criminals can clone a CEO's voice, fabricate a doctor's endorsement, or generate thousands of personalised investment pitches in minutes, traditional fraud controls cannot keep pace," she said, highlighting how automated trust abuse has become increasingly difficult to detect and prevent.

The payments ecosystem faces mounting pressure as deepfake-driven fraud exploits existing friction-reduction measures. Fast account opening, stored credentials, and streamlined dispute processes—designed to enhance user experience—now create vulnerabilities when attackers present convincing identity signals or manipulate victims in real time. Current verification methods often rely on static information or one-time security steps, including knowledge-based questions, passwords, or cautionary warnings that sophisticated AI systems can bypass or manipulate.

Fraud groups increasingly combine multiple attack vectors in single campaigns. A deepfake call might trigger an authorised push payment scam, lead to account takeover, or result in card-not-present fraud. These interconnected threats then generate disputes, chargebacks, and operational costs across the entire payments chain. The complexity of these attacks requires integrated approaches that combine identity data, device signals, behavioural patterns, and real-time analytics to detect anomalies in user interaction, transaction context, and account activity.

International Implications and Policy Response

The industrialisation of deepfake fraud represents a systemic threat to digital commerce globally, with implications extending beyond immediate financial losses. As AI technology becomes more accessible and sophisticated, the fundamental trust framework underpinning online transactions faces unprecedented challenges. Financial institutions and merchants worldwide are reassessing verification systems, investing in biometric checks, document verification, and cross-channel monitoring to address evolving threats.

Industry responses include enhanced staff training for scam escalation, clearer customer communications about fraud risks, and improved information sharing between internal teams and external partners. However, experts warn that technology alone cannot solve the problem. "The fight against deepfake fraud is not just about better technology. It is about redesigning the entire trust framework of digital commerce," Eaton emphasised, calling for a fundamental rebalancing between customer experience and security resilience in payment systems worldwide.

Sources

This report draws on industry analysis from Chargebacks911, financial sector assessments of UK scam losses, and expert testimony on payment security vulnerabilities. Information includes public statements from fraud prevention specialists and payment industry reports on evolving threat patterns in digital commerce.
