We generally do a good job of protecting the big items in our infrastructures, applications, and frameworks. We can easily see and block the barbarians at the front door. We protect our networks with firewalls and deep packet inspection. We protect open services with code that identifies and blocks known attacks and brute-force attempts. We compartmentalize larger implementations so that a breach or problem in one doesn’t affect the others.
Frankly, the big objects are the easy part of security. It's the tiny, insidious, and completely unforeseen vectors that always seem to get us -- like a tiny bit of code overlooked for years in OpenSSL or Bash, or, to take the most recent example, Venom (CVE-2015-3456), the hyped name given to the latest threat to virtualized infrastructures.
Venom affects QEMU, along with the KVM and Xen hypervisors that rely on QEMU's device emulation code, and it's more than a minor bug. The upshot: There's a buffer overflow in the emulated floppy disk controller that can allow a bad actor to slip below the surface of the VM itself and access the hypervisor directly. From there, it's theoretically possible to access other VMs on the compromised host and potentially other hosts running other VMs. Suffice it to say, this situation should never occur -- a VM should never be an attack vector to a hypervisor -- but here we are.
It’s probably safe to say that very few production implementations of Xen, QEMU, or KVM are using the virtual floppy disk features, but sadly that doesn’t matter, because the bug is still accessible regardless of whether the virtual floppy is assigned or in use on that VM. Thankfully, exploiting this bug appears to require root access at the VM level, but for many, that’s small consolation.
Hardest hit by this problem will be service providers running affected hypervisors and providing raw VMs to customers. All of their customers have root access, meaning any of them could potentially exploit this bug and move through the dozens or hundreds of other virtual private servers running on exploited hosts. It’s not a good situation to be in. Further, depending on the breadth and severity of the problem and fix, this may require VPS instance reboots on a massive scale at affected cloud service providers. This is highly reminiscent of the last time Amazon had to reboot huge swaths of EC2 to patch a Xen vulnerability. (However, Amazon claims to be unaffected by Venom.)
The only good news: There isn’t yet a known exploit in the wild, but it’s expected we’ll see one sooner rather than later. Thus, if you have critical infrastructure running on a cloud services platform that has this vulnerability, you could potentially be directly affected. Venom is bad news for a number of service providers, and we certainly hope it’s not bad news for their customers.
It’s fittingly ironic that a vulnerability of this nature arrives through such an innocuous and fossilized function as a virtual floppy disk driver; it’s even more ironic that the bug has lurked in that code since 2004. As we’ve moved through the past two decades of computing advances, we have seen time and again how legacy dependencies and considerations cause very current problems and vulnerabilities. Every time we mothball an elderly dependency, we increase the security of the overall system. Unfortunately, as time marches on, we’ll never run short of elderly dependencies -- we’re creating them every day.
As with Heartbleed and Shellshock, we can only scramble to address the issue and hope we can contain the damage. Fortunately, Venom isn’t nearly as widespread as the other two, but it still affects huge numbers of infrastructures.
Perhaps the next big vulnerability will be in a parallel printer driver, a fancy mouse driver, or an obscure ArcNet card from 1995. Maybe that’s why we might not want to include everything in our server builds -- but that’s another discussion entirely.