Google used AI to block over 39 million ad accounts suspected of fraud
In 2024, Google blocked more than 39 million advertiser accounts suspected of fraud, triple the number in 2023, as part of an intensified fight against ad fraud. The company relied heavily on artificial intelligence, in particular large language models (LLMs), to detect suspicious activity such as fake payment details or impersonated businesses. Most of the accounts were blocked before their ads were ever shown, TechCrunch reports.
Google has implemented more than 50 LLM-based improvements to strengthen the security of its platforms. Human reviewers remained central to the process: more than 100 experts from various teams, including Ads Safety, Trust and Safety, and DeepMind, took part. One focus was combating deepfake ads that exploit the likenesses of public figures; that effort led to the blocking of more than 700,000 accounts involved in creating such ads and a 90% drop in related complaints.
The United States saw the most account suspensions, at 39.2 million. The top violations included ad network abuse, trademark misuse, false medical claims, and breaches of the personalized advertising and gambling policies. Google also removed 1.8 billion ads in the United States.
Election advertising received special attention: in 2024, Google verified more than 8,900 new election advertisers and removed 10.7 million election ads. In total, the company blocked 5.1 billion ads, removed 1.3 billion pages, and restricted the display of another 9.1 billion ads.
The company acknowledges that blocking at this scale raises questions about the fairness of its decisions, so it has introduced more transparent communication with advertisers and an appeals mechanism that includes human review. Google points to the decline in harmful ads as evidence that its new approaches to detecting and blocking violations are working.