Antivirus vendors are closer to agreeing on a new way to test their software after widespread agreement that older antivirus tests can be misleading.
AV-Test.org, a German antivirus testing organization, is combining suggestions from vendors such as Symantec, Panda Software, and Trend Micro with its own ideas for a new testing regime, said Maik Morgenstern, who conducts product tests at AV-Test.org.
The new testing proposal -- also supported by vendors Kaspersky Lab and F-Secure, as well as other testers such as Virus Bulletin -- will be presented next month at the Association of AntiVirus Asia Researchers 2007 conference in Seoul.
Companies supporting AV-Test.org's paper will try to marshal support from other security vendors, said Mark Kennedy, an antivirus engineer with Symantec.
"We believe this is the way tests should be conducted," Kennedy said. "The hope is that other companies will join us."
Still, the proposals will be optional guidelines for antivirus testers, which ultimately can choose to adopt or ignore them.
Antivirus testing groups have typically evaluated antivirus products by running the detection engine against hundreds of malicious software samples. If a product fails to detect a sample, it gets a lower ranking. This style of evaluation tests whether an antivirus product has the right "signatures" -- indicators that can identify a specific piece of malware.
The test is relatively quick and easy to perform. But over the last three years or so, many security companies have added technology that can flag malware based on how it acts. That's because signatures have become a less reliable way to defend a computer due to the high number of malware variations that now appear on the Internet.
A signature test does not take into account behavioral detection technology, so vendors have argued that a failed signature test doesn't mean their product wouldn't have protected a PC.
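The difference between the two approaches can be sketched in a toy example. Everything below is illustrative and simplified, not any vendor's actual engine: real products use far richer signatures and runtime telemetry. The point is that a new variant of known malware changes its bytes, defeating an exact-match signature, while its behavior can still give it away.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Signature-based detection: a database of hashes of previously seen samples.
known_sample = b"previously-captured malware bytes"
SIGNATURES = {sha256(known_sample)}

def signature_detect(sample: bytes) -> bool:
    """Detects only samples whose exact bytes are already known."""
    return sha256(sample) in SIGNATURES

# Behavioral detection: flag a program by what it does, not what it looks like.
SUSPICIOUS_ACTIONS = {"modify_boot_record", "inject_into_process", "disable_security_tools"}

def behavior_detect(observed_actions: set) -> bool:
    """Flags a program whose runtime actions overlap with known-bad behavior."""
    return bool(observed_actions & SUSPICIOUS_ACTIONS)

# A repacked variant has different bytes, so the signature check misses it...
variant = b"repacked malware bytes, same payload"
print(signature_detect(known_sample))  # True
print(signature_detect(variant))       # False
# ...but its runtime behavior is still caught.
print(behavior_detect({"inject_into_process", "sleep"}))  # True
```

This is why, as the vendors argue, a test that only checks the signature path says nothing about whether the behavioral layer would have stopped the infection.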
Software vendors have proposed testing antivirus products under the same conditions a consumer would encounter on the Internet. In essence, antivirus testers would use real, active malicious software samples from the Internet and present them to computers in the same way people encounter them, such as through e-mail attachments or Web pages rigged to exploit browser vulnerabilities.
Antivirus suites would be "frozen" a few weeks before a test and not allowed to update their signatures, in order to genuinely exercise the proactive, or behavioral, technology. Debate is still ongoing over whether testers should use malware that is actively doing harm on the Internet, which raises the question of whether the test machines themselves could cause harm.
An alternative is setting up a simulated Internet environment in the lab, but that may not allow malware to run in the way it would if it could access the Internet. "There's always a trade-off," Morgenstern said.
Security analysts are still working on how the products will be scored. It's tricky, since there are many different levels at which a product may detect and neutralize a threat. The scoring has to be clear and comprehensible to people who read technology magazines that write about the tests.
"If the magazines are not able to communicate that in a simple manner to the consumer, then it's not worth much," said Pedro Bustamante, senior research advisor for Panda.
The new parameters mean the tests will likely take much longer to conduct, but Morgenstern said he believed AV-Test.org could manage with its existing staff and without significant fee increases for the publishers that commission work from it.