Feature image: photo by Sora Shimazaki

How EU Rules Challenge Big Tech's Role in Online Financial Scams

by CBIA Team

New European Union regulations now hold major technology platforms accountable for financial scams proliferating on their services, fundamentally challenging the business models that have allowed fraudulent content to generate substantial advertising revenue. The rules target Google, Meta, and TikTok, requiring them to actively police deceptive content rather than claiming neutral intermediary status.

The legislation marks a shift from treating platforms as passive conduits—similar to telecommunications providers—to recognizing their ability to identify and prevent criminal behavior through sophisticated content analysis and advertising controls.

Background and Context

For years, major tech platforms operated under the protection of intermediary liability shields, arguing they were neutral carriers of user-generated content. This legal framework, established in the early days of the commercial internet, assumed platforms lacked the technical capability or knowledge to monitor content at scale. However, the development of advanced content moderation systems and targeted advertising algorithms has since given these companies unprecedented visibility into what appears on their platforms.

The EU's Digital Services Act, which became fully applicable in February 2024, explicitly requires very large online platforms to assess systemic risks and implement measures to prevent illegal content, including financial scams. The regulation represents the most comprehensive attempt to date to align platform responsibility with platforms' technical capabilities.

Key Figures and Entities

Google's advertising platform has faced particular scrutiny regarding fraudulent services. In Spain and other European markets, thousands of consumer complaints documented scams promoted through Google search ads, particularly in the locksmith sector. According to consumer protection agencies, these advertisements often led to excessive charges, unnecessary services, or outright fraudulent business practices. Following extensive media coverage and regulatory pressure, Google suspended locksmith advertising in multiple countries in 2024.

TikTok has emerged as another platform where financial scams proliferate rapidly. A 2024 investigation by Bitdefender in partnership with the Better Business Bureau identified what researchers termed "FraudOnTok"—a range of investment scams promising unrealistic returns through cryptocurrency and other speculative ventures. The report documented how scammers use cloned applications, deepfake technology, and influencer impersonation to lend legitimacy to fraudulent schemes.

Meta's platforms, particularly Facebook and Instagram, continue to host various forms of financial deception despite years of content moderation investments. The company's transparency reports indicate removal of millions of fraudulent ads, yet consumer advocacy groups suggest these actions represent only a fraction of the problem.

The new EU regulations require platforms to meet "duty of care" obligations, including systematic risk assessments for illegal content and the establishment of accessible complaint mechanisms. Under these rules, platforms must demonstrate that they have taken reasonable measures to prevent known types of scams from appearing in advertisements or user content. Failure to comply can result in fines of up to 6% of global annual turnover.

Financial incentives have historically complicated platform responses to fraud. Advertising revenue from deceptive content directly benefits platform operators, creating a conflict between profit motives and user protection. The EU framework attempts to address this by making illegal content prevention a legal requirement rather than an optional feature. Regulators can now audit platform systems and enforce specific technical measures when voluntary efforts prove insufficient.

International Implications and Policy Response

The European approach has influenced regulatory discussions worldwide. The United Kingdom's Online Safety Act incorporates similar provisions requiring platforms to prevent fraudulent content, while U.S. lawmakers have debated updating the liability protections of Section 230 of the Communications Decency Act. The global nature of digital platforms means regulatory changes in major markets often trigger policy adjustments elsewhere.

Consumer protection agencies across Europe have coordinated enforcement actions against tech companies regarding scam content. The European Consumer Protection Cooperation Network has conducted numerous investigations into misleading online advertisements, resulting in coordinated actions against platforms and advertisers alike.

Sources

This report draws on EU regulatory documents, platform transparency reports, consumer protection agency investigations, and independent research including the Bitdefender and Better Business Bureau study on TikTok scams. Additional information comes from company announcements regarding policy changes and parliamentary proceedings on digital platform regulation.

by CBIA Team
