Celebrity Deepfake Scams Multiply as AI Generation Surges Past 8 Million Files

by CBIA Team

CBIA thanks Willian Justen de Vasconcellos for the photo.

Artificial intelligence-generated content is fueling a dramatic rise in celebrity impersonation scams, with victims facing escalating financial and emotional harm. As deepfake videos and cloned voices grow increasingly realistic and accessible, fraudsters are exploiting the trust that audiences place in familiar public figures to execute sophisticated deceptions.

New data reveals the problem is accelerating faster than consumers and platforms can adapt. Manipulated content now circulates across social media feeds, private messages, and online advertisements, making it increasingly difficult for people to distinguish authentic content from synthetic creations.

Background and Context

According to DeepStrike's Deepfake Statistics 2025, global deepfake production was projected to exceed 8 million files in 2025, representing a sixteenfold increase since 2023. What began as niche technology has evolved into a mass-scale tool capable of generating convincing audio, video, and images within minutes.

Europol has separately warned that by 2026, as much as 90 per cent of online content could be synthetically generated. This volume makes traditional verification methods increasingly unreliable, particularly when content appears in trusted contexts such as celebrity interviews, endorsements, or private messages.

Researchers note that the abundance of public footage of celebrities makes them ideal targets. High-quality source material allows scammers to train AI models that closely replicate real voices and facial movements, blurring the line between authentic media and manipulation.

Key Figures and Entities

Security experts and specialized detection firms are tracking the evolution of these scams. Olga Scryaba, AI Detection Specialist and Head of Product at isFake.ai, observes that modern operations use coordinated AI systems that adapt based on victim responses. "We're seeing scams shift from isolated impersonations to coordinated AI systems that learn and adapt," Scryaba explains.

The McAfee Labs 2025 Most Dangerous Celebrity: Deepfake Deception List identifies the most frequently exploited public figures, with Taylor Swift topping the rankings, followed by Scarlett Johansson, Jenna Ortega, and Sydney Sweeney. The firm's first Influencer Deepfake Deception List indicates similar abuse spreading across social platforms, suggesting the threat extends beyond Hollywood.

Documented cases include an AI-generated impersonation of Steve Burton, known for his role on General Hospital. Scammers used synthetic video and cloned voice messages to convince a fan she was in a private relationship with the actor, ultimately resulting in transfers exceeding £63,000 ($80,000) before the fraud was discovered.

Modern celebrity scams typically employ multiple AI systems working in concert. One tool identifies potential victims, another generates deepfake video or audio, while a third refines messages based on responses. Fraudsters utilize "persona kits" that bundle cloned voices, synthetic faces, and fabricated backstories, reducing technical barriers and enabling scams at scale.

Financial transactions in these schemes often bypass consumer protections through unconventional methods. Gift cards, cryptocurrency, and bank-linked transfers feature prominently across documented cases. Scammers typically establish relationships with victims over weeks or months to build trust and reduce suspicion.

The fraudulent content increasingly appears in environments designed for rapid consumption. "AI content is published and consumed in spaces built for speed and emotional engagement," Scryaba notes. "People scroll without stopping to fact-check, and over time they stop questioning authenticity altogether."

International Implications and Policy Response

The rapid expansion of synthetic media presents significant challenges for regulators and technology platforms. Current verification systems struggle to keep pace with the volume and sophistication of deepfake content, leaving gaps that fraudsters readily exploit.

The cross-border nature of these scams complicates enforcement efforts. As content generation becomes increasingly decentralized and automated, traditional jurisdictional approaches prove inadequate for addressing the scale of the problem.

Experts suggest that solutions will require coordinated action across technology companies, financial institutions, and regulatory bodies. "As synthetic content becomes more common, verification has to become a habit," Scryaba warns. "The cost of assuming something is real is simply too high."

Sources

This report draws on DeepStrike's Deepfake Statistics 2025, Europol's synthetic content warnings, McAfee Labs research, and case documentation from cybersecurity analysts tracking AI-enabled fraud between 2023 and 2025.
