As more and more organizations continue down the path of virtualizing their environments, one area that often gets overlooked is storage and the needs and problems associated with it. To find out more about this topic, I spoke with Mark Davis, CEO of Virsto Software, a company expecting to launch its first product by year's end. The company believes that storage is badly broken in the virtual world, and it plans to move storage into the virtual age to keep pace with servers.
InfoWorld: At one level, virtual servers work just like physical servers. So a lot of people wonder, what's wrong with using the same storage techniques and technologies in the virtual world that we use with physical servers?
Virsto: The beauty of the hypervisor is that anything that worked on physical servers, in theory, will work on virtual servers. It's that promise, combined with the many benefits of virtualization, that makes virtualization so compelling, but saying that old tools designed for physical servers will work for virtual servers is not the same as saying they work optimally. As a VMware founder has been quoted as saying, "virtualization didn't break the applications, but it certainly broke the infrastructure."
Why did it get broken? Because the designers of infrastructure for physical servers made fundamental assumptions that are no longer true in the world of the hypervisor. This is abundantly true in the realm of data storage.
InfoWorld: Can you tell us, then, what are some of the fundamental assumptions about storage and storage management from the physical world that are false for virtual servers?
Virsto: One that is well known is the assumption that the server has plenty of headroom. Backup is a CPU and I/O hog, and that was OK when we were running servers at 7 percent utilization. Now we've consolidated servers and no longer have that headroom, so firing up a backup job can really slow down the app being backed up. But the problem is pernicious, because all the other VMs on the physical server are also impacted. Completely unacceptable, and this isn't the only place where the assumption of unshared resources bites us.
Another now-false assumption is about the nature of OS images. In the physical world, the ratio of OS instances to boxes was 1. VM technology blows this up -- not only with all the live VMs, but also with all the other images that are suspended or saved for things like backup or rollback. It is not hard to build a datacenter with tens, even hundreds, of thousands of VM images. That sucks up a lot of disk space.
The irony is that the vast majority of the bits inside those images are exactly the same, because the VMs are cloned from a small set of golden masters, and a lot of the bits in a VM don't change as the VM runs. De-duplication could help here, but in this use case, de-dupe is more of a bug than a feature. If the storage infrastructure were built right, dupes wouldn't be created in the first place.
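A back-of-envelope calculation illustrates why never creating the dupes beats de-duplicating them after the fact. The sizes below are illustrative assumptions, not figures from the interview: a hypothetical 20GB golden master, 1,000 cloned images, and 2GB of unique writes per VM.

```python
# Sketch comparing full-copy VM images against copy-on-write clones
# that share one golden master. All sizes are assumed for illustration.

GOLDEN_MASTER_GB = 20   # assumed size of the golden master image
UNIQUE_WRITES_GB = 2    # assumed data each VM changes as it runs
NUM_VMS = 1000          # assumed image count in the datacenter

# Full copies: every image duplicates the master, plus its own changes.
full_copies_gb = NUM_VMS * (GOLDEN_MASTER_GB + UNIQUE_WRITES_GB)

# Copy-on-write clones: the master's blocks are stored once and shared;
# only the blocks each VM actually changes consume new space.
cow_clones_gb = GOLDEN_MASTER_GB + NUM_VMS * UNIQUE_WRITES_GB

print(full_copies_gb)   # 22000 GB
print(cow_clones_gb)    # 2020 GB
print(round(full_copies_gb / cow_clones_gb, 1))  # roughly 10.9x less disk
```

Under these assumed numbers, building sharing into the storage layer avoids storing roughly 20TB of identical bits that a de-dupe engine would otherwise have to find and collapse after the fact.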
A couple other assumptions vitiated by virtual servers are that storage provisioning is a relatively infrequent activity, and that clustering of servers is rare and usually small scale. In the VM world, every creation of a VM requires provisioning of storage, and to take advantage of features like live migration across physical servers, we want all our servers to be in one big cluster.
One of the assumptions most aggravating to IT managers relates to the nature of hardware. In the old physical world, serious server applications were run on big, proprietary, expensive servers. It made sense for storage to match. Now, servers are small, non-proprietary, and spectacularly inexpensive. In stark contrast, storage systems are eerily reminiscent of the mainframe and minicomputer of yesteryear. The assumption that storage should be proprietary, should tie advanced software features to a particular brand of hardware, and be super expensive doesn't make sense.
InfoWorld: How do these false assumptions cause problems for people running virtual servers?
Virsto: The first symptom is that, all things being equal, virtual servers consume 15 to 25 percent more storage than physical servers. VM sprawl has a direct hardware budget impact, as well as an ongoing operating cost burden.
Second is the VM I/O blender phenomenon. Take a server that can do 1,000 I/O operations per second (IOPS) with one OS running. Now run the same workload in eight VMs on that server simultaneously. Aggregate IOPS can fall by 80 percent. Across all eight VMs, you're getting around 200 IOPS, or about 25 IOPS per VM. For good reason, IT architects are loath to virtualize I/O-intensive applications.
Third, IT operations people find that basic storage management gets much more complex. Old backup techniques break down in a production virtual server environment. Provisioning complexity and performance management become burdensome. The hypervisor has many benefits, but simplifying storage management isn't one of them.
The fourth outcome of invalidated assumptions is that we spend far too much on storage. Some sophisticated IT organizations have done post hoc analysis of their virtualization ROI, and the results are frustrating. As expected, the TCO of servers was significantly reduced. However, incremental spending on storage -- more hardware, higher-end hardware, more array software licenses, and increased operating expenses -- swamped the savings on the server side. In hindsight, server virtualization didn't save them any money. A dirty little secret of the virtualization industry.
InfoWorld: So with that said, what technical solutions do you believe IT people are using to deal with these problems today?
Virsto: If you've been in IT long enough, you already know the answer. When all else fails, get a bigger hammer.
Many of the issues discussed above can be handled by throwing enough hardware and money at the problem. A cruel paradox of server virtualization is that the more we commoditize servers -- to the great benefit of enterprise and cloud datacenters -- the more we have to invest in mainframe-style storage to take advantage of the promise of virtual servers.
Want to dynamically allocate server workloads? No problem, you'll just need a bulge bracket SAN. Want to deal with the capacity bloat caused by VM sprawl? Just buy a higher-end storage system, load it up with expensive software features -- oh, and you'll need more specialized cache in the array.
The alternative is to scale back on deployment of virtualization. Don't fully load your servers, so you get a lower consolidation ratio. Or leave the I/O-intensive apps on physical servers. Certainly an option, but if the obstacles were removed, IT pros would virtualize a lot more of their apps.
Sometimes, overprovisioning is an acceptable answer. This time-honored technique is a standard part of an IT architect's toolkit. But any time IT pros systematically solve problems by overprovisioning, there's an opportunity to invent a more clever solution. Come to think of it, that's how we got server virtualization in the first place.
InfoWorld: So tell us, in your opinion, how should the industry proceed?
Virsto: IT pros, resellers, consultants, integrators, bloggers, and analysts need to get educated and open a dialog. Vendors love to gloss over the issues or claim that if only you're willing to spend enough, the problems have already been solved. For organizations that can throw money at the problem, that's fine, but what about the rest of us? What's really needed is fundamental innovation in storage for VMs, just as the hypervisor itself was a seminal innovation.
Vendors should be looking for ways to solve the problem without overprovisioning or requiring a particular brand of hardware. Vendors have a natural motivation not to solve this problem, because solving it means you'll buy less of their stuff. But as Michael Dell recently said, "any time a new technology comes along that's good for customers, you get in the way of it at your own peril." The man isn't a billionaire for nothing.
Server virtualization is such a brilliant invention. In the early days, the naysayers were many, and traditional server hardware and OS vendors were resistant because it threatened their proprietary franchises. IT pros, on the other hand, didn't care so much about vendor hegemony and instinctively knew virtualization was a better way to build server infrastructure. As technology barriers are knocked down, we reduce the cases where physical servers make more sense than virtual. The next big barriers that should be smashed are in the storage realm.
Thanks again to Mark Davis, CEO of Virsto Software, for speaking with me.
This story, "Q&A: Virsto Software CEO Mark Davis discusses storage in the virtual world," was originally published at InfoWorld.com.