Deepfake Fraud Surge Exposes Gaps in India's Financial Defences
Nearly half of Indian adults have fallen victim to AI voice-cloning or deepfake scams, according to a 2025 analysis that put India's victimisation rate at 47 percent, nearly double the global average of 25 percent. The financial toll has been severe: 83 percent of Indian victims suffered monetary loss, and almost half lost more than INR 50,000. As synthetic media technology grows more sophisticated, India's financial sector faces an unprecedented challenge from deepfake-enabled fraud that threatens everything from individual bank accounts to market stability.
Background and Context
Deepfakes—synthetic media created using deep learning algorithms—have evolved from experimental technology to a weapon of choice for financial criminals. The term combines 'deep learning' with 'fake', encompassing manipulated or entirely fabricated audio, video, and images that can fool even trained observers. In the financial sector, these capabilities enable sophisticated social engineering attacks, identity theft, and large-scale fraud schemes that traditional security measures struggle to detect. The technology's accessibility and diminishing cost have democratized what was once the domain of state actors, putting powerful deception tools in the hands of ordinary criminals.
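To make the mechanics concrete, the sketch below illustrates the shared-encoder, per-identity-decoder autoencoder design behind early face-swap deepfakes. It is a minimal, hypothetical example in PyTorch: the layer sizes and names are our own assumptions and are not drawn from any particular tool.

```python
# Minimal, hypothetical sketch of the shared-encoder / per-identity-decoder
# autoencoder behind early face-swap deepfakes. Layer sizes are arbitrary
# assumptions; real pipelines add face alignment, adversarial losses, and
# far larger networks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into an identity-agnostic latent code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Renders a face from the latent code; one decoder is trained per identity."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One shared encoder learns pose and expression; two decoders learn the two
# identities. Encoding a frame of person A and decoding it with person B's
# decoder renders B's face wearing A's expression: the face swap.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real aligned face crop
swapped_to_b = decoder_b(encoder(frame_of_a))
print(swapped_to_b.shape)  # torch.Size([1, 3, 64, 64])
```

Scaled-up variants of this recipe, paired with voice cloning, are what the accessible, low-cost tools described above package for non-expert users.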
The threat materialised dramatically in 2024, when staff at a Hong Kong-based engineering firm transferred approximately US$25 million after a video conference with what appeared to be their CFO and colleagues, all of whom were in fact deepfake creations. Earlier, in May 2023, an AI-generated image showing an explosion near the Pentagon briefly caused the Dow Jones Industrial Average to drop about 85 points within minutes, demonstrating how synthetic media can move markets. These incidents illustrate how deepfakes have moved from theoretical risk to tangible threat to financial systems worldwide.
Key Figures and Entities
India's response to deepfake threats involves multiple government agencies and legal frameworks. The Ministry of Electronics and Information Technology (MeitY) has issued advisories compelling digital platforms to combat AI-generated misinformation. The Indian Cyber Crime Coordination Centre (I4C) and the Indian Computer Emergency Response Team (CERT-In) coordinate technical responses and incident reporting. The National Cyber Crime Reporting Portal provides citizens with a mechanism to report deepfake-related fraud, while Grievance Appellate Committees handle disputes over platform compliance.
The legal infrastructure rests primarily on three pillars: the Information Technology Act 2000, which criminalizes identity theft and cheating by personation; the Digital Personal Data Protection Act 2023, which addresses privacy violations; and the Bharatiya Nyaya Sanhita 2023, which modernizes provisions against organized cybercrime. Together, these laws create a framework for prosecuting deepfake-enabled financial fraud, though critics note they were not designed specifically with synthetic media in mind.
Legal and Financial Mechanisms
India's approach to deepfake regulation operates through existing cyber and data protection laws rather than dedicated AI legislation. The IT Act 2000 provides the foundation, with Section 66C specifically addressing identity theft and Section 66D covering cheating by personation using computer resources. Section 66E deals with privacy violations arising from capturing, publishing, or transmitting images of private areas without consent—all relevant to deepfake creation and distribution.
The IT Intermediary Rules 2021, amended in 2022 and 2023, impose 'due diligence' obligations on digital platforms to prevent the hosting and sharing of unlawful content, including deepfakes. Rule 3(1)(b) requires platforms to notify users about prohibited content in their preferred language, while subsequent amendments mandate the removal of deepfakes and misinformation within 36 hours of complaints. Non-compliance triggers Rule 7 of the IT Rules 2021, stripping platforms of safe harbour immunity under Section 79 of the IT Act and exposing them to civil or criminal liability.
Financial institutions face particular vulnerabilities as deepfakes undermine biometric authentication systems that many banks have adopted for secure remote access. Research by UK-based tech firm iProov found that 49 percent of respondents reported lower trust in digital services after learning about deepfakes, with 74 percent expressing concern about their broader societal impact. This erosion of trust threatens the digital transformation of India's financial sector, particularly affecting remote advisory services and online onboarding processes.
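One of the layered controls banks can use against replayed or pre-rendered deepfakes is a challenge-response liveness check, sketched below. This is a hypothetical illustration, not any vendor's API: the function names, word list, and 30-second window are all assumptions, and the biometric verification itself is stubbed out.

```python
# Hypothetical sketch of a challenge-response liveness check, one layer of
# defence against replayed or pre-rendered deepfakes during remote onboarding.
# Names, word list, and timing threshold are illustrative assumptions.
import hashlib
import hmac
import os
import secrets
import time

SERVER_KEY = os.urandom(32)     # per-deployment secret for signing challenges
CHALLENGE_TTL_SECONDS = 30      # a live user responds quickly; offline synthesis is slow

WORDS = ["river", "copper", "lantern", "orbit", "velvet", "summit"]

def issue_challenge() -> dict:
    """Ask the user to speak a random phrase an attacker cannot pre-render."""
    phrase = " ".join(secrets.choice(WORDS) for _ in range(4))
    issued_at = int(time.time())
    payload = f"{phrase}|{issued_at}".encode()
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return {"phrase": phrase, "issued_at": issued_at, "tag": tag}

def verify_response(challenge: dict, transcribed_phrase: str) -> bool:
    """Check integrity, freshness, and content of the user's response."""
    payload = f"{challenge['phrase']}|{challenge['issued_at']}".encode()
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, challenge["tag"]):
        return False  # challenge was tampered with
    if time.time() - challenge["issued_at"] > CHALLENGE_TTL_SECONDS:
        return False  # too slow: consistent with offline deepfake synthesis
    # A production system would also run speaker verification and video
    # liveness models here; this sketch only checks the spoken words.
    return transcribed_phrase.strip().lower() == challenge["phrase"]

challenge = issue_challenge()
print(challenge["phrase"])                              # e.g. "orbit velvet river copper"
print(verify_response(challenge, challenge["phrase"]))  # True
```

Because real-time deepfake tools can increasingly pass naive checks of this kind, such measures are best treated as one layer among several, alongside out-of-band confirmation for large transfers.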
International Implications and Policy Response
The global nature of digital financial services means deepfake threats transcend national boundaries. India's horizontal regulatory approach, spanning cyber law, data protection, and platform accountability, differs from strategies being implemented elsewhere. The European Union's AI Act establishes a risk-based framework and imposes transparency obligations on deepfakes, requiring AI-generated or manipulated media to be disclosed as such, while Singapore has introduced deepfake detection mandates for certain sectors. These international developments offer potential models for India as it refines its approach to synthetic media threats.
The financial implications extend beyond direct fraud to market stability and investor confidence. Empirical evidence shows hostile content can reach large audiences at minimal cost, reportedly as low as US$0.07 per view; at that rate, putting a fraudulent advertisement in front of 100,000 viewers costs roughly US$7,000, enabling deepfake-driven narratives to achieve mass-scale proliferation rapidly. In India, this has manifested in deepfake investment advertisements featuring prominent figures such as Sudha Murty, with victims reporting substantial losses. Such incidents highlight how synthetic media can weaponize trust in public figures to facilitate financial crimes on an industrial scale.
Sources
This report draws on the 2025 analysis of AI scams in India, Indian legislation including the IT Act 2000 and IT Intermediary Rules 2021, government advisories from the Ministry of Electronics and Information Technology, research by iProov, and documented incidents including the 2024 Hong Kong deepfake scam and the 2023 Pentagon disinformation case. Additional context comes from India's Digital Personal Data Protection Act 2023 and the Bharatiya Nyaya Sanhita 2023.