CBIA thanks cottonbro studio for the photo

Synthetic Identities: Inside the 2,000% Surge in Deepfake Financial Fraud

by CBIA Team

Fraud attempts across the financial technology sector have risen by 80% between 2023 and 2025, but a far more alarming trend has emerged beneath the surface. According to research by Signicat, incidents involving deepfake technology have surged by 2,137% over the same two-year period. This exponential rise represents a fundamental shift in criminal methodology: attackers are no longer merely stealing identities, but manufacturing them. As synthetic media becomes indistinguishable from reality, the digital banking infrastructure faces a crisis of verification that threatens the integrity of global financial systems.

Background and Context

Historically, the primary threat to online banking was credential theft—fraudsters stealing passwords, credit card numbers, or other sensitive data to exploit a real person’s account. That dynamic is changing rapidly. As generative AI tools become more accessible, criminals can now generate convincing synthetic faces and biometric data specifically designed to bypass remote onboarding checks. This shift has forced a major Australian bank to issue public warnings regarding scammers using celebrity deepfakes to deceive customers. Researchers estimate that the volume of deepfake videos and images online expanded from roughly 500,000 in 2023 to more than 8 million last year, illustrating how quickly the tools for synthetic identity manipulation are proliferating.

Key Figures and Entities

Industry reports indicate that nearly 50% of companies have experienced fraud involving deepfake audio or video, according to the Regula 2025 Identity Fraud by Numbers report. Alexey Astakhov, Vice President of Engineering at Instinctools, notes that the implications for financial institutions relying on remote verification are severe. “Instead of stealing credentials, they can now generate convincing synthetic faces,” Astakhov says, highlighting that traditional verification models are increasingly obsolete. The Regula report further emphasizes the consumer perspective, revealing that 85% of US consumers believe AI makes scam detection more difficult, while 62% report having either experienced an AI-driven scam or knowing someone who has.

Detection Mechanisms and Countermeasures

This fraud succeeds by bypassing legacy security systems. Traditional verification tools rely on rule-based checks or simple biometric comparisons, which often miss the subtle pixel inconsistencies and irregular lighting patterns hidden within high-quality deepfakes. In response, some financial institutions are deploying adaptive AI countermeasures. A case study involving a digital bank in the Netherlands, which processes close to five million applications annually, demonstrated the efficacy of this approach. By implementing a YOLO-based computer vision pipeline, the bank detected 30% more AI-generated fraud attempts while reducing false positives by 60% and cutting manual reviews by 40%. The system analyzes key regions of a submitted document to assign a fraud risk score, filtering synthetic identities without impeding legitimate users.
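The region-scoring and triage logic described above can be sketched in simplified form. Everything below is a hypothetical illustration: the region names, weights, and thresholds are assumptions chosen for demonstration, not details of the bank's actual pipeline, which the source describes only as YOLO-based.

```python
from dataclasses import dataclass

# Hypothetical sketch: region names, weights, and thresholds are
# illustrative assumptions, not the bank's real configuration.

@dataclass
class RegionScore:
    """Anomaly score (0.0-1.0) for one document region flagged by a detector."""
    region: str   # e.g. "portrait", "signature", "hologram"
    score: float  # higher = more likely synthetic

# Assumed per-region weights: the portrait is weighted most heavily
# because face swaps are the most common deepfake manipulation.
WEIGHTS = {"portrait": 0.5, "hologram": 0.3, "signature": 0.2}

def fraud_risk_score(regions: list[RegionScore]) -> float:
    """Combine per-region anomaly scores into one weighted risk score."""
    total = sum(WEIGHTS.get(r.region, 0.1) * r.score for r in regions)
    weight = sum(WEIGHTS.get(r.region, 0.1) for r in regions)
    return total / weight if weight else 0.0

def triage(score: float, review_threshold: float = 0.4,
           reject_threshold: float = 0.8) -> str:
    """Route an application: auto-approve, manual review, or auto-reject."""
    if score >= reject_threshold:
        return "reject"
    if score >= review_threshold:
        return "manual_review"
    return "approve"

# A clean application passes automatically; a suspect portrait is
# routed to manual review instead of being rejected outright, which is
# how such a pipeline can cut false positives and manual workload.
clean = [RegionScore("portrait", 0.05), RegionScore("hologram", 0.10)]
suspect = [RegionScore("portrait", 0.72), RegionScore("hologram", 0.35)]
print(triage(fraud_risk_score(clean)))    # approve
print(triage(fraud_risk_score(suspect)))  # manual_review
```

The design point is the middle tier: rather than a single pass/fail rule, a graded score lets the system reserve human review for ambiguous cases, which is consistent with the reported drop in both false positives and manual reviews.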

International Implications and Policy Response

The rapid escalation of deepfake fraud underscores a growing gap between regulatory frameworks and technological capability. As fraud techniques evolve faster than rigid legal systems can adapt, identity security can no longer rely on static verification methods. The US Senate Federal Credit Union has outlined the mechanics of these AI scams and the difficulty in discerning deepfakes from reality. With 97% of consumers citing fraud prevention as a key factor in choosing a bank, the pressure is mounting on fintech leaders to balance speed, security, and trust. The failure to address these synthetic identity risks threatens not only individual financial stability but also the foundational trust required for a global digital economy.

Sources

This report draws on findings from the Signicat 2025 fraud research, the Regula 2025 Identity Fraud by Numbers report, and expert analysis provided by Instinctools. Additional context is provided by public warnings from the US Senate Federal Credit Union and independent reporting on banking security.
