The virtual virtualization case study: Deployment

In stage 5, Fergenschmeir's IT maneuvers through build-out and migration challenges

Stage 5: Deploying the virtualized servers

About a month after the purchase orders went out for the hardware and software selected for the server virtualization project, the Fergenschmeir IT department was up to its elbows in boxes. Literally.

[ Start at the beginning of Fergenschmeir's server virtualization journey ]

This was because server administrator Mary Edgerton had ordered the chosen HP c-Class blades from a distributor instead of buying them directly from HP or a VAR and having them pre-assembled. This way, she could do the assembly herself (which she enjoyed), and it would cost less.

As a result of this decision, more than 120 parcels showed up at Fergenschmeir's door. Just breaking down the boxes took Mary and intern Mike Beyer most of a day. Assembling the hardware wasn't particularly difficult; within the first week, they had assembled the blade chassis, installed it in the datacenter, and worked with an electrician to get new circuits wired in. Meanwhile, the other administrator, Ed Blum, had been working some late nights to swap out the core network switches.

Before long, they had VMware ESX Server installed on nine of the blades, and VirtualCenter Server installed on the blade they had set aside for management.

Unexpected build-out complexity emerges
It was at this point that things started to go sideways. Up until now, the experience Mike had gained working with VMware ESX at his college had been a great help. He knew how to install ESX Server, and he was well versed in the basics of how to manage it once it was up and running. However, he hadn't watched his college mentor configure the network stack and didn't know how ESX integrated with the SAN.

After a few fits and starts and several days of asking what they'd later realize were silly questions on the VMware online forums, Ed, Mary, and Mike did get things running, but they didn't really believe they had done it correctly. Network and disk performance weren't as good as they had expected, and every so often, they'd lose network connectivity to some VMs. The three had increasing fears that they were in over their heads.

Infrastructure manager Eric Brown realized he'd need to send his team out for extra training or get a second opinion if they were going to have any real confidence in their implementation. The next available VMware classes were a few weeks away, so Eric called in the consultant who had helped with capacity planning to assist with the build-out.

Although this was a significant and unplanned expense, it turned out to be well worth it. The consultant teamed up with Mary to configure the first few blades and worked with Ed on how best to mesh the Cisco switches with VMware's fairly complex virtual networking stack. The mentoring and knowledge transfer alone justified the cost: later, sitting in her VMware class, Mary noted that the course curriculum wouldn't have come anywhere near preparing her to build a complete configuration on her own. Virtualization draws together so many aspects of networking, server configuration, and storage that implementing it successfully in a small environment takes a well-seasoned jack-of-all-trades.
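To give a sense of what that switch-meshing work involves, here is a minimal sketch of the sort of virtual switch layout Ed and the consultant would have been building, expressed as a small Python wrapper around the ESX service console's esxcfg-vswitch utility. The switch, NIC, port group, and VLAN values are hypothetical; the real ones have to match the trunk configuration on the Cisco side.

```python
# Illustrative sketch only: build a VM-traffic vSwitch on an ESX host by
# shelling out to the service console's esxcfg-vswitch utility.
# Switch, NIC, port group, and VLAN values below are hypothetical.
import subprocess

def run(args):
    """Echo and execute an esxcfg-* command, stopping on any error."""
    print("+ " + " ".join(args))
    subprocess.check_call(args)

# Create a vSwitch dedicated to VM traffic and uplink it to two physical
# NICs that plug into trunk ports on the core Cisco switches (teamed for
# redundancy).
run(["esxcfg-vswitch", "-a", "vSwitch1"])
run(["esxcfg-vswitch", "-L", "vmnic2", "vSwitch1"])
run(["esxcfg-vswitch", "-L", "vmnic3", "vSwitch1"])

# One port group per VLAN. The VLAN tag on each port group has to match
# what the Cisco trunk ports carry; a mismatch is one common cause of the
# kind of intermittent VM connectivity loss the team had been seeing.
for portgroup, vlan_id in [("Production VMs", "100"), ("Test VMs", "200")]:
    run(["esxcfg-vswitch", "-A", portgroup, "vSwitch1"])
    run(["esxcfg-vswitch", "-v", vlan_id, "-p", portgroup, "vSwitch1"])
```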

Bumps along the migration path
Within roughly a month of starting the deployment, Eric's team had thoroughly kicked the tires, and they were ready to start migrating servers.

Larry had done a fair amount of experimenting with VMware Converter, a physical-to-virtual migration tool that ships with the Virtual Infrastructure suite. For the first few servers they moved over, he used Converter.

But it soon became clear that Converter's speed and ease of use came at a price. The migrations from the old physical servers to the new virtualized blades did eliminate some hardware-related problems that Fergenschmeir had been experiencing, but they also seemed to magnify the bugs that had crept in over years of application installations, upgrades, uninstalls, and general Windows rot. Some servers worked relatively well, while others performed worse than they had on the original hardware.

A bit of digging and testing showed that for all but the most recently built Windows servers, it was better to build the VMs from scratch, reinstall the applications, and migrate the data than to port the existing servers over lock, stock, and barrel.

This realization meant the migration would take much longer than planned. Sure, VMware's cloning and deployment tools allowed Ed, Mary, and Mike to deploy a clean server from a base template in four minutes, but that was the easy part. The hard part was digging through application documentation to determine how everything had been installed originally and how it should be installed now. The three spent far more time on the phone with their application vendors than they had spent figuring out how to install and configure VMware.
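For readers wondering what that template-based deployment looks like in practice, here is a rough sketch of cloning a new VM from a base template through VirtualCenter/vCenter using VMware's pyVmomi Python SDK (a later tool than the VI-era scripting kits Fergenschmeir would have had, but the workflow is the same). Every name in it, from the vCenter host to the datastore, is made up for illustration.

```python
# Rough sketch: clone a new VM from a base template via vCenter using
# pyVmomi. All names (server, credentials, template, cluster, datastore,
# new VM) are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Walk the inventory and return the first object of vimtype with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        for obj in view.view:
            if obj.name == name:
                return obj
    finally:
        view.DestroyView()
    raise LookupError("not found: %s" % name)

ctx = ssl._create_unverified_context()  # lab-style setup with self-signed certs
si = SmartConnect(host="vcenter.fergenschmeir.local", user="administrator",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    template = find_by_name(content, vim.VirtualMachine, "w2k3-base-template")
    cluster = find_by_name(content, vim.ClusterComputeResource, "blade-cluster")
    datastore = find_by_name(content, vim.Datastore, "san-vmfs-01")

    # Land the clone in the cluster's resource pool on a SAN datastore.
    relocate = vim.vm.RelocateSpec(pool=cluster.resourcePool, datastore=datastore)
    spec = vim.vm.CloneSpec(location=relocate, powerOn=True, template=False)

    # Kick off the clone: this is the "clean server in minutes" step. The slow
    # part, as the team found, is reinstalling and reconfiguring applications.
    template.Clone(folder=template.parent, name="app-server-01", spec=spec)
finally:
    Disconnect(si)
```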

Another painful result of their naivete emerged: Although they had checked their hardware against VMware's compatibility list during the project planning, no one had thought to ask the application vendors if they supported a virtualized architecture. In some cases, the vendors simply did not.

These application vendors hadn't denied Fergenschmeir support when their applications had been left running on operating systems that hadn't been patched for years, and they hadn't cared when the underlying hardware was on its last legs. But they feared and distrusted their applications running on a virtualized server.

In some cases, it was simply an issue of the software company not wanting to take responsibility for the configuration of the underlying infrastructure. The IT team understood this concern and accepted the vendors' caution that if any hardware-induced performance problems emerged, they were on their own -- or at least had to reproduce the issue on an unvirtualized server.

In other cases, the vendors were simply ignorant about virtualization. Some support contacts assumed the team was talking about VMware Workstation or VMware Server rather than a hypervisor-on-hardware product such as VMware ESX. The team learned to spot the less knowledgeable support staff and ask for another technician when this happened.

But one company outright refused to provide installation support on a virtual machine. The solution to this turned out to be hanging up and calling the company back. This time they didn't breathe the word "virtual," and the tech happily helped them through the installation and configuration.

These application vendors' hesitance, ignorance, and downright refusal to support virtualization didn't make anyone in Fergenschmeir's IT department feel very comfortable, but they hadn't yet seen a problem that they could really attribute to the virtualized hardware. Privately, Eric and CTO Brad Richter discussed the fact that they had unwittingly bought themselves into a fairly large liability, but there wasn't much they could do about that now.

The rest of the virtual virtualization case study
Introduction: The Fergenschmeir case study
Stage 1: Determining a rationale
Stage 2: Doing a reality check
Stage 3: Planning around capacity
Stage 4: Selecting the platforms
Stage 6: Learning from the experience
