I recently reviewed an Internet browsing virtualization product called Check Point ZoneAlarm ForceField. I've reviewed similar products over the past few years, including GreenBorder, Sandboxie (which I reviewed personally, not in a published review), and a few others. The solutions in this product category try to protect end-users and their computing environments by encapsulating or segregating the Internet browser (and possibly the underlying operating system and e-mail client) from malicious manipulation. The goal is to keep all legitimate, user-wanted modifications permanently while preventing or removing unwanted malicious modifications.
Although these products try to be helpful, they usually fail quickly under testing and are not highly accurate. Most of the time I can infect or exploit the underlying host system within a few minutes of malicious testing, and as a result I can't personally recommend products in this category to readers.
Believe it or not, I don't blame the vendors for trying to give it a go in this class of product. Traditional anti-malware products (anti-virus, anti-spam, anti-phishing) are frequently circumvented by morphing, ever more sophisticated malware. The vendors are trying to come up with new product types that might be more successful.
The only problem is that these types of solutions are old news, and their inherent issues and challenges have been acknowledged and argued for decades. The overall security model is known as red/green computing. The classic idea is that participating users have a single physical computer with two separate computing environments. The known, clean, trusted computing environment is referred to as the green computer. The untrusted computing environment is known as the red computer. Users should use the green computing environment to do all their normal, trusted computing (regular business work, e-mail, gaming, and so forth), and use the red computing environment to surf the Web, run new, untrusted programs, and the like. Most red/green scenarios put the two separate computing environments within one physical computer, with a toggle button of some type to allow the user to switch between them. The red and green computing environments should never touch one another, except when the user deliberately transfers legitimate data and/or programs from one side to the other.
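The red/green model above can be sketched in a few lines of code. Everything here is illustrative, not taken from any real product: two isolated state stores on one machine, and a single, user-approved gate that is the only sanctioned path between them.

```python
# Hypothetical sketch of the red/green model: two isolated environments
# on one machine, plus one explicit transfer gate. All names are invented
# for illustration.

class Environment:
    def __init__(self, name):
        self.name = name       # "green" (trusted) or "red" (untrusted)
        self.files = {}        # stands in for the environment's state

class TransferGate:
    """The only sanctioned path between red and green."""
    def __init__(self, approve):
        self.approve = approve  # user/admin callback: approve(path) -> bool

    def transfer(self, src, dst, path):
        # Every crossing is an explicit decision; anything not approved
        # stays on its own side. Note that this gate is also the traversal
        # hole discussed below: each approval is a chance to carry
        # something malicious across.
        if self.approve(path):
            dst.files[path] = src.files[path]
            return True
        return False

green = Environment("green")
red = Environment("red")
red.files["report.pdf"] = b"..."
red.files["toolbar-installer.exe"] = b"..."

# The user only approves documents, not executables:
gate = TransferGate(approve=lambda path: path.endswith(".pdf"))
gate.transfer(red, green, "report.pdf")             # allowed
gate.transfer(red, green, "toolbar-installer.exe")  # blocked
print(sorted(green.files))  # → ['report.pdf']
```

The security of the whole scheme rests on how good those approval decisions are, which is exactly where the products discussed here run into trouble.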
In the past, red/green computing was always discussed as two separate computing environments inside one physical computing device. Some classified military installations accomplish this using separate physical hard drives, only one of which can be installed in a computer at a time. This model requires tight controls on when each hard drive is used, and it introduces huge problems into the environment (such as how to update or patch a stored hard drive).
Today, virtualization and emulation products have gone mainstream and are evolving the traditional red/green model into a single, physical host system (the green computer) running a separate virtual machine or emulated environment (the red environment). I find major risk problems with both virtual machine and emulated environments. As I covered in a previous column, in most cases, virtual machine environments increase -- not decrease -- security risk. I have even more issues with limited emulation environments.
Problems with limited emulation protection models
The first problem is how completely the red environment mimics the green one. Limited emulation products mimic only smaller portions of the underlying OS -- the ones the vendor thinks malware is most likely to affect or exploit. The problem is that partial emulation is always partial protection. Without full emulation, it's almost inevitable that some malware variant will bypass the limited protection and permanently infect the underlying host environment (the green computer). My testing has shown this to be the case over and over. Even full emulation doesn't guarantee host environment protection, as most virtual machine products have been found to allow guest-to-host exploitation. Other midlevel sandbox environments (such as Sun Java) fare little better and have been compromised or exploited repeatedly over the long run.
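The "partial emulation is partial protection" argument can be made concrete with a toy sketch. Assume, hypothetically, that the product redirects only the operations its vendor anticipated; anything outside that list reaches the real host. The operation names here are invented.

```python
# Illustrative sketch: a limited emulation layer hooks only some
# operations. Unhooked operations modify the real host permanently.
# All names are hypothetical, not from any real product.

HOOKED_OPERATIONS = {"write_registry_run_key", "write_startup_folder"}

host_state = set()      # stands in for the real (green) machine
sandbox_state = set()   # the product's throwaway (red) copy

def perform(operation):
    # Hooked operations are diverted into the sandbox and discarded on
    # reset; anything the vendor didn't anticipate hits the host.
    target = sandbox_state if operation in HOOKED_OPERATIONS else host_state
    target.add(operation)

perform("write_registry_run_key")   # caught by the emulation layer
perform("install_browser_helper")   # vendor never hooked this one

sandbox_state.clear()               # the product "resets" the red side
print(host_state)                   # → {'install_browser_helper'}
```

The sandbox reset cleans up everything the vendor thought of -- and nothing else, which is why a single unanticipated code path is enough for a permanent infection.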
Another problem is how to save data permanently in the red environment, or how to securely transfer data between the red and green environments. Any time you transfer programs or data between the red and green environments, you are opening up a traversal hole for hackers and malware.
Many readers may say they will never transfer from or save data in the red environment, but with such draconian logic, how would you install browser updates, needed security patches, or add-in program updates? How would you save a favorite or bookmark? How would a legitimate cookie be saved? It is the rare computing environment where at least some data elements from the red environment don't need to be permanently saved.
With legitimate Web sites becoming very common vectors for malware infection, how can an end-user decide which Web sites to open up in the green or red environments? When should the user save versus discard a change?
Many limited emulation environment vendors state that their products will automatically determine what should and shouldn't be saved permanently. They state they can tell the difference between something initiated by a legitimate user and something done programmatically by malware. The truth is, they can't do it perfectly. In my testing, every product left behind some malware permanently, and things the user saved or configured manually were deleted.
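The keep-versus-discard heuristic the vendors describe can be sketched as a toy classifier. The signals and examples below are invented purely to show the two failure modes observed in testing: malware that looks user-initiated gets kept, and a legitimate change that doesn't gets thrown away.

```python
# Toy version of a "keep vs. discard" heuristic: guess whether a change
# was user-initiated from crude signals. The signals, names, and
# thresholds are invented for illustration.

def looks_user_initiated(change):
    # e.g. was there recent mouse/keyboard input, and did the change go
    # to a location users typically modify themselves?
    return change["recent_user_input"] and change["common_user_location"]

changes = [
    # Malware that waits for a click, then writes a bookmark-like entry:
    {"name": "malicious favorite", "recent_user_input": True,
     "common_user_location": True},
    # A legitimate user tweak applied by a login script, with no fresh input:
    {"name": "user proxy setting", "recent_user_input": False,
     "common_user_location": False},
]

kept = [c["name"] for c in changes if looks_user_initiated(c)]
discarded = [c["name"] for c in changes if not looks_user_initiated(c)]
print(kept)       # → ['malicious favorite']   (a false negative kept)
print(discarded)  # → ['user proxy setting']   (a legitimate change lost)
```

Any heuristic of this shape can be gamed by malware that simply mimics user behavior -- which is exactly the vendor excuse recounted in the next paragraph.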
My favorite story about unwanted leftovers came when I confronted a vendor with malware remnants left behind by their product. The vendor proceeded to tell me that the automated malware program "had mimicked the actions a legitimate user would have done manually, so that's why the limited emulation program left the malware remnants behind."
Many products supplement their automatic decision process by letting the user (or admin) manually decide when to reset the limited emulation environment. I chuckle about this every time I test it. First, malware can do what it needs to do (steal and e-mail passwords, for instance) in milliseconds. And even when it takes longer, the malware usually works silently. I mean, if users were capable of consistently recognizing when they were infected by malware, they wouldn't need the emulation protection in the first place!
Even more to the point, a large amount of malware is installed on purpose by end-users because of social engineering enticements. Last week's column revealed that in many studies, more than 50 percent of end-users, when notified by anti-malware programs that they are installing malware, install it anyway. If the limited emulation environment keeps software intentionally downloaded by end-users, and a lot of that software is malicious, what protection have you gained?
It's because of these flaws, and others, that I cannot recommend limited emulation environments. They are flawed in theory, and in practice, real malware affirms the theoretical conclusions. I'm not saying that a highly accurate limited emulation protection environment can't be created, but I doubt it. Why we keep repeating the same failed techniques and expecting different results is a mystery to me.
Stranger still, after each of my limited emulation protection product reviews, in which I skewered a product and the entire product class, more than a dozen other vendors offered to send me their limited emulation products for testing, hoping that theirs would succeed where others have failed. Accordingly, I've decided to review multiple vendor products in an upcoming InfoWorld Test Center review article. I'm hoping to be surprised, but I'm not holding my emulated breath.