Google issued a new study on Wednesday detailing how it is becoming more difficult to identify malicious websites and attacks, with antivirus software proving to be an ineffective defense against new threats.
The company's engineers analyzed four years' worth of data comprising 8 million websites and 160 million web pages from its Safe Browsing service, an API (application programming interface) that feeds data into Google's Chrome browser and Mozilla's Firefox, warning users when they are about to visit a website loaded with malware.
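Safe Browsing clients are generally described as matching URLs against hash prefixes of known-bad sites rather than downloading the full blocklist. The Python sketch below is a simplified, hypothetical illustration of that idea only; the real API's protocol, endpoints, canonicalization rules, and full-hash confirmation step are not shown.

```python
import hashlib

# Hypothetical local blocklist of 4-byte SHA-256 prefixes of known-bad URLs.
# A real client syncs such prefixes from the service and confirms any partial
# match by fetching full hashes over the network; none of that is modeled here.
BAD_PREFIXES = {
    hashlib.sha256(b"http://malware.example/exploit").digest()[:4],
}

def looks_unsafe(url: str) -> bool:
    """Return True if the URL's hash prefix appears in the local blocklist."""
    prefix = hashlib.sha256(url.encode()).digest()[:4]
    return prefix in BAD_PREFIXES

print(looks_unsafe("http://malware.example/exploit"))  # True
print(looks_unsafe("http://example.org/"))
```

The prefix scheme keeps the client-side list small and avoids shipping a readable list of every flagged URL to the browser.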
Google said it displays 3 million warnings of unsafe websites to 400 million users a day. The company scans the Web, using several methods to figure out if a site is malicious.
"Like other service providers, we are engaged in an arms race with malware distributors," according to a blog post from Google's security team.
That detection process is becoming more difficult due to a variety of evasion techniques employed by attackers that are designed to stop their websites from being flagged as bad, according to the report.
One of those methods is visiting sites with instrumented virtual machines to observe attacks firsthand. Others include ranking a website's reputation based on its hosting infrastructure, and a further line of defense is antivirus software.
Many sites are rigged to automatically deliver an exploit and execute an attack as soon as an unpatched software program is found. One of the ways hackers get around VM-based detection, however, is to require the victim to perform a mouse click before the exploit fires.
Google describes it as a kind of social engineering attack, since the malicious payload appears only after a person interacts with the browser. Google is working around the issue by configuring its virtual machines to do a mouse click.
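The click-gating trick the researchers describe amounts to page logic that withholds the exploit until it sees user interaction. The Python below is a hypothetical sketch of that gating, not code from the study; the request shape and field name are invented for illustration.

```python
BENIGN_PAGE = "<html>Nothing to see here.</html>"
EXPLOIT_PAGE = "<html><script>/* exploit payload */</script></html>"

def serve(request: dict) -> str:
    """Return the exploit only after the visitor has clicked something.

    A scanner that merely loads the page sees benign content, which is why
    Google's countermeasure is to have its analysis VMs generate a click too.
    """
    if request.get("user_clicked"):
        return EXPLOIT_PAGE
    return BENIGN_PAGE

print(serve({"user_clicked": False}))  # what an automated scanner sees
print(serve({"user_clicked": True}))   # what a real victim sees
```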
Google is also encountering "IP cloaking," where a malicious website will refuse to serve harmful content to certain IP ranges, such as those known to be used by security researchers. In August 2009, Google found that some 200,000 sites were using IP cloaking. This forces researchers to scan the sites from IP ranges that are "unknown by the adversary," the report said.
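Server-side, IP cloaking is a simple branch on the client's address. Here is a minimal Python sketch of that logic, using the standard-library ipaddress module; the researcher ranges are placeholders (real cloaking lists are harvested by attackers, not published).

```python
import ipaddress

# Hypothetical IP ranges the site's operators suspect belong to security
# researchers. TEST-NET-1 is used here purely as a placeholder.
RESEARCHER_NETS = [ipaddress.ip_network("192.0.2.0/24")]

def content_for(client_ip: str) -> str:
    """Serve harmless content to suspected research IPs, malware to others."""
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in RESEARCHER_NETS):
        return "benign"
    return "malicious"

print(content_for("192.0.2.10"))    # benign -- cloaked from the scanner
print(content_for("198.51.100.7"))  # malicious -- the ordinary-visitor view
```

Scanning from address ranges "unknown by the adversary," as the report puts it, makes the first branch fail and exposes the real content.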
Antivirus software programs rely on signatures as one method of detecting attacks. But the engineers wrote that the software often missed code that had been "packed," or compressed so that it is unrecognizable to a scanner yet will still execute.
And since it can take time for AV vendors to refine their signatures and remove ones that cause false positives, malicious content can stay undetected during the delay.
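Why packing defeats a byte-signature scan can be shown in a few lines: compressing the payload destroys the byte pattern the signature matches, yet the original code is fully recoverable (and therefore still executable) at run time. This is an illustrative Python sketch, not an AV engine; the signature and payload bytes are invented.

```python
import zlib

# A byte pattern a hypothetical AV signature might look for.
SIGNATURE = b"eval(unescape("

# A repetitive malicious script fragment (repetition makes it compress well).
payload = b"var x; eval(unescape('%65%76%69%6c')); " * 10

# "Packing": after compression, the signature's byte pattern disappears.
packed = zlib.compress(payload)

print(SIGNATURE in payload)  # True  -- the unpacked code is caught
print(SIGNATURE in packed)   # False -- the compressed bytes no longer match

# The attacker's loader simply decompresses before executing, so behavior
# is unchanged even though the signature never fires on the packed file.
assert zlib.decompress(packed) == payload
```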
"While AV vendors strive to improve detection rates, in real time they cannot adequately detect malicious content," the Google researchers wrote. "This could be due to the fact that adversaries can use AV products as oracles before deploying malicious code into the wild."
The study was authored by Moheeb Abu Rajab, Lucas Ballard, Nav Jagpal, Panayiotis Mavrommatis, Daisuke Nojiri, Niels Provos, and Ludwig Schmidt.