Audit Industry Confronts Rising AI Fraud Threat as Awareness Gaps Persist
More than one-third of senior internal audit leaders cannot determine whether their organizations have been targeted by AI-enabled fraud, according to a new poll from the Institute of Internal Auditors, revealing a critical awareness gap as artificial intelligence-powered criminal schemes surge globally. The survey found that while 48% of auditors believe their organizations haven't been attacked, 34% remain uncertain and 18% confirm at least one AI fraud incident, raising concerns about undetected vulnerabilities in corporate defenses.
Background and Context
The findings emerge amid a sharp rise in AI-driven criminal activity. According to Ironscales, a cybersecurity firm, 88% of organizations experienced at least one AI-powered security incident in the past year, with nearly 12% facing six or more such attacks. The company's research identified finance professionals as primary targets, with 50% of respondents flagging them as high-risk victims, followed by IT personnel (46.9%) and HR employees (38.3%).
The scope of these threats continues expanding. Cybernews analysis of 346 recorded AI incidents found that 179 involved deepfakes—whether voice, video, or image manipulation—with deepfake technology driving 81% of fraud-specific cases. Meanwhile, the World Economic Forum's 2026 Global Cybersecurity Outlook, based on surveying 873 executives and cybersecurity leaders, revealed that 73% of respondents had been affected by cyber-enabled fraud in the past year.
Familiarity and Risk Perception
The Institute of Internal Auditors' survey highlights how familiarity with artificial intelligence directly correlates with risk perception. Auditors reporting little to no AI familiarity rated their organizational risk at just 2.8-2.9 on a five-point scale. Those somewhat familiar placed it at 3.2, while very knowledgeable respondents scored it at 3.3-3.4. Despite these varying risk assessments, 51% of auditors report at least some familiarity with AI-enabled fraud concepts, with 34% describing themselves as very or extremely familiar with the threat.
When identifying specific risks, auditors overwhelmingly focus on traditional threats: AI-powered phishing attempts (88%), fabricated invoices or financial documents (65%), automated social engineering (58%), deepfake impersonations (45%), and AI-generated malicious code (41%). However, this prioritization may leave organizations exposed to emerging risks, as synthetic identity fraud—cited by only 27% of respondents as a major concern—represents the fastest-growing financial crime in the United States, costing an estimated $5 billion annually.
Preparedness Gaps and Resource Barriers
The auditors' responses reveal significant preparedness challenges. Only 2% feel very prepared to handle AI-enabled fraud, while 34% describe themselves as moderately prepared. The majority, 46%, admit to being minimally prepared, and 16% say they are not prepared at all. This inadequate readiness stems from several barriers: lack of appropriate technology or tools (57%), insufficient staff with relevant AI skills (55%), budgeting constraints (46%), competing priorities (43%), and insufficient time for AI-specific risk management (43%).
Paradoxically, many auditors view artificial intelligence as both threat and solution. The survey found extensive AI adoption in reporting and audit planning (35%), risk assessment (25%), and fieldwork (19%). Looking forward, 83% of internal auditors plan to increase their AI usage over the next year, with only 12% expecting current levels to remain unchanged.
International Implications and Policy Response
The WEF's Global Cybersecurity Outlook places AI-enabled fraud within a broader crisis, with 62% of leaders reporting phishing-related attacks and 37% experiencing payment or invoice fraud. Identity theft (32%), insider fraud (20%), impersonation scams (17%), and investment fraud (17%) round out the threat landscape. These statistics underscore how artificial intelligence amplifies existing vulnerabilities while creating novel attack vectors that traditional security frameworks struggle to address.
The Institute of Internal Auditors' report emphasizes the need for "future-focused knowledge" to strengthen internal audit capabilities. As organizations increasingly deploy AI across operations, the audit function must develop nuanced understanding of the technology's potential misuse—both within internal audit and across the enterprise. This dual awareness of risk and opportunity represents the critical frontier for governance professionals navigating AI's expanding influence on financial crime and corporate security.
Sources
This report draws on survey data from the Institute of Internal Auditors, cybersecurity research from Ironscales, AI incident analysis by Cybernews, and the World Economic Forum's 2026 Global Cybersecurity Outlook. The Institute of Internal Auditors' poll examined AI fraud awareness and preparedness among senior audit leaders across multiple industries and geographic regions.