This scraping attack exhibited a distinct three-phase pattern, as seen in the graph below (Figure 1). It began on March 3rd and built to an initial spike on March 5th, transitioned into a sustained high-volume assault from March 5th to 9th that peaked at 1.35 million blocked requests per two-hour window, then gradually declined from March 9th to 15th before stopping abruptly.

Figure 1: Number of malicious requests blocked per 2-hour window
This pattern suggests the attackers probed detection thresholds early, scaled to maximum capacity mid-campaign, and sustained that pressure for days before winding the operation down.
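For readers who want to build a similar view from their own logs, here is a minimal sketch that buckets blocked requests into the same 2-hour windows as Figure 1. The file name and `timestamp` column are hypothetical placeholders, not DataDome's actual log schema:

```python
# Minimal sketch: reproduce the Figure 1 view from blocked-request logs.
# Assumes a hypothetical CSV with one row per blocked request and a
# "timestamp" column in ISO 8601 format.
import pandas as pd

logs = pd.read_csv("blocked_requests.csv", parse_dates=["timestamp"])

# Bucket blocked requests into 2-hour windows, matching the graph's granularity.
per_window = (
    logs.set_index("timestamp")
        .resample("2h")
        .size()
)

# Report the peak window and its volume.
print(per_window.idxmax(), per_window.max())
```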
At this scale, even partial success would have yielded millions of scraped business listings, user reviews, and ratings. That data carries significant commercial value on secondary markets and to the review platform’s competitors.
The attack leveraged geographically diverse proxy infrastructure spanning commercial hosting and residential broadband networks.
Five autonomous systems accounted for all malicious traffic:
While the top two ASNs are registered in the United Kingdom and Brazil, the vast majority of their active network infrastructure and IP allocations are physically located in the United States.
This infrastructure mix is deliberate. Hosting providers offer speed and scale, while residential ISPs provide IP addresses that appear legitimate and are harder to block without risking false positives against real users.
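To make that tradeoff concrete, here is a simplified sketch of ASN-based triage. The lookup table uses reserved documentation ASNs as stand-ins for real networks, and the rules are an illustration of the blocking dilemma, not DataDome's detection logic:

```python
# Simplified illustration of why ASN-level blocking is a blunt instrument.
# The table below is hypothetical; a real system would query an IP-to-ASN
# database. ASNs 64496-64511 are reserved for documentation (RFC 5398).

ASN_TYPE = {
    64500: "hosting",      # stands in for a cloud/hosting provider
    64501: "residential",  # stands in for a consumer broadband ISP
}

def block_decision(asn: int) -> str:
    kind = ASN_TYPE.get(asn, "unknown")
    if kind == "hosting":
        # Datacenter ranges rarely carry real customers, so blocking is low-risk.
        return "block"
    # A residential ISP serves both bots and legitimate users: blocking the
    # ASN outright trades bot traffic for false positives, so each request
    # needs individual intent analysis instead.
    return "needs per-request intent analysis"

print(block_decision(64500))  # -> block
print(block_decision(64501))  # -> needs per-request intent analysis
```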
DataDome’s intent-based detection engine blocked all 80 million malicious requests throughout the 13-day campaign by identifying multiple threat markers that indicated malicious automated scraping activity rather than legitimate user traffic.
Three primary detection signals provided the strongest evidence of malicious automated activity:
Multiple secondary indicators reinforced the automated nature of the campaign:
Overall, the attack demonstrated intermediate-to-advanced characteristics, including distributed infrastructure, multiple evasion techniques, and challenge-solving capabilities.
However, the prevalence of consistency failures across browser, device, and session attributes suggested the attackers prioritized volume over stealth, likely relying on legitimate-looking but poorly implemented automation tools.
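As a rough illustration of what such a consistency failure looks like, the sketch below cross-checks a claimed User-Agent against client-reported attributes. The field names are hypothetical and the rules are far simpler than anything a production detection engine would use:

```python
# Toy consistency check: does the claimed browser match what the client
# actually reports? Field names and rules are illustrative only.

def is_consistent(signals: dict) -> bool:
    ua = signals.get("user_agent", "")

    # A Chrome-on-Windows User-Agent should not arrive with a Linux platform
    # reported by JavaScript (navigator.platform) -- a classic spoofing slip.
    if "Windows" in ua and signals.get("js_platform", "").startswith("Linux"):
        return False

    # Headless automation often leaves navigator.webdriver set to true.
    if signals.get("webdriver", False):
        return False

    # Real browsers report a plausible screen size; many bots report none.
    if signals.get("screen_width", 0) <= 0:
        return False

    return True

bot = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "js_platform": "Linux x86_64",   # contradicts the claimed OS
    "webdriver": True,
    "screen_width": 0,
}
print(is_consistent(bot))  # -> False
```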
This attack demonstrates what modern scraping operations look like: 855,000 IPs across hosting and residential networks, intermediate evasion techniques, and 13 days of sustained pressure. Traditional defenses like IP blocking and rate limiting can’t keep up.
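The arithmetic makes the point. Spreading the campaign's 80 million requests across 855,000 IPs leaves each address well under any typical per-IP rate limit, as this back-of-the-envelope sketch shows:

```python
# Back-of-the-envelope: why per-IP rate limiting misses this attack.
# Figures come from the campaign described above.
total_requests = 80_000_000
unique_ips = 855_000
days = 13

per_ip = total_requests / unique_ips          # ~94 requests per IP, total
per_ip_per_hour = per_ip / (days * 24)        # ~0.3 requests per IP per hour

print(f"{per_ip:.0f} requests per IP over the campaign")
print(f"{per_ip_per_hour:.2f} requests per IP per hour")
```

At roughly 0.3 requests per IP per hour, every address stays far below any sane rate-limit threshold, which is exactly why the attack's distribution defeats per-IP controls.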
For example, before DataDome, Coop, a major Swiss e-commerce brand, was facing a heavy load on its servers due to scraping bots that significantly slowed page loading times:
“Our IT teams were burdened with the manual task of analyzing traffic to identify and block bad IP addresses, which was time-consuming and inefficient, as blocking an IP only provided temporary relief before bots would reappear using new addresses,” said Tobias Schläpfer, Web Applications Developer & Manager of Bot Protection at Coop.
After adding DataDome to its tech stack, Coop saw immediate improvements: the 25% of traffic attributable to bad bot activity disappeared, web pages loaded faster, and the site’s SEO performance improved.
DataDome’s detection engine analyzes 5 trillion signals daily across thousands of AI models, stopping malicious bot and AI agent traffic at the edge to prevent attackers from causing damage to your business.
If your platform faces similar scraping threats, book a demo to see how DataDome can protect your websites, apps, and APIs without adding friction for legitimate users.