From DDoS attacks to ad fraud: Smarter bots are copying human behavior
Major websites and internet services have been caught up in an epidemic of fake traffic, an expensive problem for digital operators that also threatens to undermine trust in legitimate areas of the web.
Scammers around the world are constantly developing new ways to falsify web traffic, directing unwitting users’ internet connections to ads that may or may not actually exist. In these ad fraud campaigns, thieves traditionally used automated bots to artificially inflate website traffic, allowing website operators to profit from higher advertising revenue.
Security teams once could easily detect bot traffic by identifying visitors engaged in anomalous behavior, such as opening and closing windows millions of times. Now, ad-fraud scammers are using more advanced technology that more closely resembles actual human activity, making the fraud far more difficult for digital crime-fighters to stop. Gone are the days when scammers relied solely on networks of hacked computers to knock websites offline with distributed denial-of-service attacks.
“What’s new is that they’re doing this in a way where the bots are moving the mouse,” said Jerome Segura, lead malware intelligence analyst at Malwarebytes. “These are now well-planned efforts that build on known techniques. As a user, you might be on your computer and not even know [attackers] had hidden a window on your computer to do this stuff.”
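The contrast is easy to see in code. The sketch below shows the kind of naive rate check that once sufficed, flagging any client whose event volume no human could produce. The event format, field names, and threshold are illustrative assumptions, not drawn from any real detection product.

```python
# A minimal sketch of old-style bot detection: count events per client
# per minute and flag anything beyond a plausible human ceiling.
# All names and thresholds here are hypothetical.
from collections import defaultdict

MAX_HUMAN_EVENTS_PER_MINUTE = 120  # assumed ceiling for a real user

def flag_simple_bots(events):
    """events: iterable of (client_id, unix_timestamp) pairs."""
    per_minute = defaultdict(int)
    for client_id, ts in events:
        per_minute[(client_id, int(ts // 60))] += 1
    return {client for (client, _), count in per_minute.items()
            if count > MAX_HUMAN_EVENTS_PER_MINUTE}

# A crude bot firing ten events a second stands out immediately.
burst = [("bot-1", i * 0.1) for i in range(1000)]
print(flag_simple_bots(burst))  # {'bot-1'}
```

A bot that moves the mouse along human-looking paths and paces its clicks like a person never trips a threshold like this, which is exactly the shift Segura describes.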
The problem’s scope remains difficult to quantify, but it’s clear the issue is getting tougher to deal with.
Last month, Google removed 22 applications from its Play Store that hid malware inside seemingly legitimate programs, like Sparkle Flashlight, which had been downloaded more than 1 million times. The malware would secretly open a hidden window that repeatedly clicked on ads, a technique that increased the value of those ads while draining an affected smartphone’s battery.
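On the defender’s side, one response to that pattern is to treat clicks from windows a user could never have seen as inherently suspect. The sketch below illustrates the idea over hypothetical click logs; the field names are assumptions, not the format of any real ad platform.

```python
# Hypothetical log-side check for the hidden-window pattern: an ad
# click reported from a window with no visible area is a red flag.
def suspicious_clicks(click_events):
    """Yield click events whose source window was invisible or zero-sized."""
    for event in click_events:
        width = event.get("window_width", 0)
        height = event.get("window_height", 0)
        visible = event.get("window_visible", True)
        if width <= 1 or height <= 1 or not visible:
            yield event
```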
U.S. prosecutors last month charged eight individuals from Russia and Eastern Europe with overseeing a complicated ad-fraud scheme that used more than 5,000 fabricated websites and hundreds of thousands of stolen computers to falsify billions of page views. Two distinct groups, known as “Methbot” and “3ve,” shared intelligence on emerging fraud techniques to siphon more than $30 million from legitimate advertisers over a multi-year span, prosecutors said.
Researchers who investigated the Methbot/3ve case discovered the groups were working together to control massive networks of hacked computers, a sign that scammers’ techniques are advancing, said Tamer Hassan, chief technology officer and co-founder of White Ops, a bot-detection company that was involved in the investigation.
They also used machines with no history of malicious activity, which are more difficult to detect, and intermingled malicious bots with real users. Such techniques typically are enough to evade detection by popular security tools, Hassan said. Growing adoption of artificial intelligence could make bots even more difficult to detect.
“There’s a lot of things you can do with a bot when you look like a million people online,” said Hassan.
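Detecting the blended, clean-history traffic Hassan describes tends to mean combining weak behavioral signals instead of relying on any single tell. The sketch below gestures at that approach; every feature name, weight, and cutoff is hypothetical.

```python
# Illustrative multi-signal scoring: no one feature is decisive, and a
# clean IP history no longer clears a session on its own.
def bot_score(session):
    score = 0.0
    if session.get("ip_previously_flagged", False):
        score += 0.3  # known-bad history raises suspicion but isn't required
    if session.get("click_interval_stddev_ms", 1000.0) < 5.0:
        score += 0.4  # machine-regular click timing
    if session.get("mouse_path_entropy", 1.0) < 0.1:
        score += 0.3  # mouse paths too smooth to be human
    return score

# A session with a spotless IP can still score as a likely bot.
session = {"ip_previously_flagged": False,
           "click_interval_stddev_ms": 2.0,
           "mouse_path_entropy": 0.05}
print(bot_score(session))  # ~0.7, above an assumed 0.5 cutoff
```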
Bot activity traditionally has been motivated by profit, and it may already be influencing business decisions by generating inauthentic traffic. Researchers have uncovered schemes in which bots inflate the number of times a song is played on Spotify to collect outsized music royalty payments, or drive up prices on online ticket sites.
But Hassan predicted scammers soon will make the leap to using bots to influence human behavior, much as happened in 2017 during the political debate over net neutrality. Many of the comments submitted to the Federal Communications Commission in fact were filed with fake email addresses as part of a plan to interfere with the political process, according to research from Stanford University.
Those bots were detected because they used simple techniques. Had they utilized the same methods as the Methbot or 3ve scammers, Hassan said, it’s unlikely the public ever would have known the attack occurred.
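A toy version of the kind of analysis that exposes such simple bots: near-identical comments filed thousands of times cluster trivially once the text is normalized. The real research was far more sophisticated; this only illustrates why crude automation stands out.

```python
# Naive duplicate clustering over comment texts: huge clusters of
# identical (post-normalization) filings suggest templated bot activity.
from collections import Counter

def normalize(text):
    return " ".join(text.lower().split())

def duplicate_clusters(comments, min_size=100):
    """Map each repeated comment text to its count, above a cluster floor."""
    counts = Counter(normalize(c) for c in comments)
    return {text: n for text, n in counts.items() if n >= min_size}
```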
“That might have been one of the most important public debates of our generation,” he said. “We’re at a point where a lot of the metrics and measurements … are used to make big decisions where a lot of money is being spent. But if a bot takes our device, and assumes our own characteristics, it makes detection very difficult.”