Four percent of web crawlers observed in a recent monitoring study were impostors using fake user-agent strings to pass as Googlebots, a new report said.
Googlebots are crawlers that scour websites for content to be indexed for search. Their access is generally limited, since some parts of some websites contain sensitive information they should not reach.
Spoofed Googlebots are used by attackers to carry out malicious activities, said researchers at security firm Incapsula.
Some 34.3 percent of the fake crawlers were explicitly malicious; most of these (23.5 percent of the total) were used for application-layer DDoS attacks, while the rest engaged in scraping, spamming and hacking, the report said.
The research also found that the remaining 65.7 percent were not outright malicious but were still worrying, as they collected information for marketing purposes.
Mitigating an application-layer DDoS attack is not easy: the requests target the application interface and mimic legitimate behavior, which makes filtering out the bad traffic more difficult.
Website administrators who lack tools that can distinguish fake web crawlers from legitimate ones face a dilemma: blocking the bots risks losing search traffic, while allowing them risks downtime.
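One way to tell a genuine Googlebot from an impostor is the reverse-then-forward DNS check that Google itself documents: a real Googlebot's IP resolves to a hostname under googlebot.com or google.com, and that hostname resolves back to the same IP. A minimal Python sketch follows; the lookup functions are injectable so the logic can be exercised without live DNS, and the sample addresses are illustrative only.

```python
import socket

def is_real_googlebot(ip, reverse_lookup=None, forward_lookup=None):
    """Verify a claimed Googlebot IP via reverse-then-forward DNS.

    The user-agent string alone proves nothing, since it is trivially
    spoofed. Instead:
      1. Resolve the IP to a hostname (PTR record).
      2. Check the hostname ends in .googlebot.com or .google.com.
      3. Resolve that hostname back and confirm it matches the IP.

    reverse_lookup / forward_lookup default to the socket module's
    resolvers but can be stubbed out for offline testing.
    """
    reverse_lookup = reverse_lookup or (lambda a: socket.gethostbyaddr(a)[0])
    forward_lookup = forward_lookup or socket.gethostbyname
    try:
        host = reverse_lookup(ip)
    except OSError:
        return False  # no PTR record at all: not a real Googlebot
    if not host.endswith((".googlebot.com", ".google.com")):
        return False  # hostname is outside Google's crawler domains
    try:
        return forward_lookup(host) == ip  # forward-confirm the PTR
    except OSError:
        return False

# Offline illustration with stubbed resolvers (hypothetical addresses):
genuine = is_real_googlebot(
    "66.249.66.1",
    reverse_lookup=lambda ip: "crawl-66-249-66-1.googlebot.com",
    forward_lookup=lambda host: "66.249.66.1",
)
fake = is_real_googlebot(
    "203.0.113.7",
    reverse_lookup=lambda ip: "bot.evil.example.com",
    forward_lookup=lambda host: "203.0.113.7",
)
```

The forward-confirmation step matters because an attacker controls the PTR record for their own IP range and could set it to a Google-looking name; they cannot, however, make Google's DNS resolve that name back to their address.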
“Fake Googlebot visits originate from botnets – clusters of compromised connected devices (e.g., Trojan infected personal computers) exploited for malicious purposes,” the researchers said.
The researchers identified the top four sources of such botnet traffic as the U.S. (25.16 percent), China (15.61 percent), Turkey (14.7 percent) and Brazil (13.49 percent).
During the study, the researchers observed 400 million search engine visits to 10,000 sites over a period of 30 days, identifying 4 percent of all the bots as fakes.