In my post last week, I reviewed five technologies I'm thankful for. As I mulled over what to include, I strongly considered not putting virtualization on the list. I'm thankful for it, but it's become such an integral part of delivering x86-based services in data centers of all shapes and sizes that I wasn't certain it was worth a mention. From my perspective, it's sort of like making a point to be thankful for intermittent windshield wipers or sliced bread -- both are great, but they've become so commonplace that few of us really notice their presence.
Of course, realizing that not everyone shares your perspective (however mainstream you might think it to be) is one of the fun parts of being human. Only a few days after that story ran, I found myself on a conference call with a client and the hardware team from one of its primary software vendors. The goal of the conference call was to sort through a pair of technology bids the vendor had submitted for a pending upgrade of a mission-critical business application. One of the bids would see the application upgraded onto new, nonvirtualized hardware, while the other included virtualizing pretty much everything except for the back-end database layer.
At first, this was refreshing -- this vendor had previously been very resistant to supporting virtualization. Outside of the servers supporting this application stack and a few high-utilization servers that'd be difficult to virtualize, the client's data center is almost entirely virtualized. Being able to virtualize the servers for the upgraded application in the same manner as the rest of the infrastructure would have been an obvious win.
Sadly, whatever optimism I might have had before the call started was fleeting.
The physical technology bid included eight physical servers, consisting of a typical series of redundant load-balancing, Web, application, and database tiers. However, the virtualized design consisted of no fewer than 11 physical servers: nine high-end virtualization hosts and the same pair of database servers from the physical design. The virtualized design had more than double the computational capacity and dwarfed the cost of the nonvirtualized design.
When virtualization was young and not well understood, anyone could be forgiven for not believing that you could consolidate large numbers of physical servers onto a much smaller number of virtualization hosts. But today, it's very difficult to understand how some of the largest software vendors remain ignorant of that simple truth in the face of overwhelming evidence to the contrary.
Vendors vexed by virtualization
To be honest, I have no idea what it will take for vendors to understand what virtualization can do for their clients, not to mention their own ability to deliver and support their software. Although I don't know for sure, I'd bet a sizable sum of money that this company and many others like it use virtualization heavily in the course of developing their products, and somewhere in their organizations, people understand what it's all about. Why that knowledge doesn't translate to the folks designing and selling customer environments is a mystery to me.
A pessimist might suggest it's simply a way to continue to sell hardware to clients who don't need it. That may be true to some extent, but I'd be surprised if that's the whole reason. With these kinds of applications, the cost of the software itself and the associated third-party licensing far exceeds the cost of the hardware. Moreover, a customer left with a massively overdesigned and underutilized infrastructure will typically feel very nearly as ill-treated as one plagued by performance problems resulting from lowballed hardware specs.
Although I don't know what will make these software vendors realize that virtualization can be as good for them as it is for their customers, I do know it's important to be aware of this attitude among vendors when you're selecting new software.