Test Center guide: Mail security appliances
Mail security solutions differ in anti-spam techniques, accuracy, false positive rates, and ease of setup and administration. We compare Barracuda, BorderWare, Cisco IronPort, Mirapoint, Proofpoint, Secure Computing IronMail, Sendio, Symantec, and Tumbleweed.
The results chart shows that some appliances received far fewer spam messages than others: as few as 1,969 at the low end, between 5,000 and 6,000 in the middle of the pack, and more than 10,000 for two products. The disparity arises because the appliances reject varying amounts of spam outright, based on the sender's IP address and other factors, without ever accepting and filtering it. The mail server received roughly 13,000 to 14,000 spam messages per two-week evaluation period; the number caught by pre-filtering ranged from 3,000 to 4,000 for the Proofpoint and Tumbleweed products to 10,000 for the Barracuda.
Comparing the filtering rates is not terribly important. Only two solutions scored below 95 percent: the Cisco IronPort and the Barracuda Spam Firewall. Even so, the Cisco, at 93.4 percent, and the Barracuda, at 88.4 percent, fall well within useful catch rates. More important in terms of impact on users is the false positive rate, which is excellent for the Cisco IronPort and not so good for the Barracuda.
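To make the two measures concrete, here is a minimal sketch of how catch rate and false positive rate are computed. The function names and the sample totals are illustrative only; the review does not publish per-appliance message counts beyond the figures quoted above.

```python
# Illustrative arithmetic only; function names and sample counts are
# hypothetical, chosen to mirror the percentages quoted in the review.

def catch_rate(caught, total_spam):
    """Percentage of accepted spam messages that the filter blocked."""
    return 100.0 * caught / total_spam

def false_positive_rate(wrongly_flagged, total_legit):
    """Percentage of legitimate messages wrongly flagged as spam."""
    return 100.0 * wrongly_flagged / total_legit

# An appliance that accepts 6,000 spam messages and filters 5,604 of them
# lands at a 93.4 percent catch rate.
print(round(catch_rate(5604, 6000), 1))  # 93.4
```

Note that the two rates are computed against different denominators, which is why a product can post a strong catch rate and still hurt users with false positives.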
Because e-mail retention policies may require that any mail received be archived, appliances that reject spam without receiving it, refusing the sender's connection before the message is ever transmitted, can dramatically reduce the amount of traffic on the internal network and the load on the appliance itself. Rejecting mail outright also reduces the amount that must be archived for e-discovery or other requirements.
Some products log the messages they reject; if yours does, it's worth watching the logs for a couple of weeks to ensure that no legitimate mail is being turned away. With other products, there's no way to know what's being rejected; you simply have to trust that the pre-filtering mechanism is not refusing legitimate senders.
In addition to testing anti-spam performance, I tested each product with a stream of current viruses provided by two anti-virus vendors, then tested all mail that wasn't stopped with four different anti-virus clients. The good news here is that none of the appliances allowed any viruses through, or at least none that were detected by any of the four anti-virus engines.
In addition, I looked at anti-phishing and anti-malware performance. The news here is not so good; the anti-phishing filters stopped between 51 and 82 percent of phishing messages, and often blocked legitimate messages from potential phishing targets. For example, some filters failed to block bogus messages that purported to come from www.citibank.com, and blocked legitimate messages from another bank.
Finally, I looked at secure content management capabilities. This is difficult to measure quantitatively, because keyword filtering tends to either work or not work. However, there are some important differences among the products, principally in the number of file types that can be scanned, especially zip archives and other compressed formats, and in how they handle encrypted files. Some products can detect encrypted files and either hold them for inspection by an administrator before allowing them through or at least keep a copy for later inspection.
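One of these checks, spotting an encrypted entry inside a zip attachment so the message can be held rather than passed through unscanned, can be sketched in a few lines. This is an illustrative sketch of the general technique, not any vendor's implementation: the zip format records an encryption flag per entry, which is what the standard-library check below reads.

```python
import io
import zipfile

# Hypothetical sketch of one content-management check described above:
# flag a zip attachment if any entry inside it is encrypted, so the
# message can be quarantined for administrator inspection.
def has_encrypted_entry(zip_bytes):
    """True if any file in the archive has the encryption flag (bit 0) set."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return any(info.flag_bits & 0x1 for info in zf.infolist())

# Build a small unencrypted archive in memory to exercise the check
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("report.txt", "quarterly numbers")
print(has_encrypted_entry(buf.getvalue()))  # False
```

A filter that cannot look inside archives at all, or that silently passes encrypted ones, leaves an obvious gap, which is why the products differ so much on this point.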