The virtual virtualization case study: Platform selection

In stage 4, Fergenschmeir's IT sorts through its software, hardware, and network choices -- and costs

Stage 4: Selecting the platforms

While the consultant was doing the server utilization analysis to determine which apps could run on virtual servers and which needed to stay on physical servers, the IT team at Fergenschmeir started to think about what hardware would be used as the hosts in the final implementation.

[ Start at the beginning of Fergenschmeir's server virtualization journey ]

The virtualization engine
It was obvious that any hardware they chose had to be compatible with VMware ESX, the virtualization software they had tested, so infrastructure manager Eric Brown’s team started checking the VMware hardware compatibility list. But server administrator Mary Edgerton stopped the process with a simple question: “Are we even sure we want to use VMware?”

Nobody had given that question much thought in the analysis and planning done so far. VMware was well known, but there were other virtualization platforms out there. In hindsight, the only reason Eric’s team had been pursuing VMware was the experience that the intern, Mike Beyer, had with it. That deserved some review.

From Eric’s limited point of view, there were four main supported virtualization platforms he could choose from: VMware Virtual Infrastructure (which includes VMware ESX Server), Virtual Iron, XenSource, and Microsoft’s Virtual Server.

Eric wasn’t inclined to go with Microsoft’s technology: from his own reading, and from input from the other server administrator, Ed Blum, who had used Microsoft Virtual Server before, it was neither as mature as VMware nor as strong a performer. Concerns over XenSource’s maturity also gave Eric pause, and industry talk that XenSource was a potential acquisition target created uncertainty he wanted to avoid. (And indeed it was later acquired.)

Virtual Iron, on the other hand, was a different story. From what Eric could tell, it was much closer to VMware in maturity, and it cost about a quarter as much. That was enough to make Eric think twice, so he talked over the pros and cons of each with CTO Brad Richter at some length.

In the end they decided to go with VMware as they had originally planned. The decision came down to the greater number of engineers who had experience with the more widely deployed VMware platform and the belief that more third-party tools would be available for it. Another factor was that CEO Bob Tersitan and CFO Craig Windham had already heard the name VMware. Going with something different would require a lot of explanation and justification -- a career risk neither Eric nor Brad was willing to take.

The server selection
By the time the platform question was settled, Eric had received the initial capacity planning analysis, which indicated the need for eight or nine dual-socket, quad-core ESX hosts. With that in mind, the IT group turned its focus back to selecting the hardware platform for the revamped datacenter. Because Fergenschmeir already owned a lot of Dell and Hewlett-Packard hardware, the initial conversation centered on those two vendors. Pretty much everyone on Eric’s team had horror stories about both, so they weren’t entirely sure what to do. The general consensus was that HP’s equipment was better in quality but Dell’s cost less. Eric didn’t really care at an intellectual level -- both worked with VMware’s ESX Server, and his team knew both brands. But Ed and Mary, the two server administrators, loved HP’s management software, so Eric felt more comfortable with that choice.

Before Eric’s team could get down to picking a server model, Bob made his presence known again by sending an e-mail to Brad that read, “Read about blades in InfoWorld. Goes well with green campaign we’re doing. Get those. On boat; call cell. -- Bob.” It turned out that Bob had made yet another excellent suggestion, given the manageability, power consumption, and air conditioning benefits of a blade server architecture.

Of course, this changed the hardware discussion significantly. Now, the type of storage chosen would matter a lot, because blade architectures are generally more restrictive than standard servers about what kinds of interconnects can be used, and in what combination.

For storage, Eric again had to weigh the skills of his staff. Nobody on his team had worked with any SAN, much less Fibre Channel, before. So he wanted a SAN technology that was cheap, easy to configure, and still high-performance. After reviewing various products, cross-checking the ESX hardware compatibility list, and comparing prices, Eric decided to go with a pair of EqualLogic iSCSI arrays -- one SAS array and one SATA array for high- and medium-performance data, respectively.

This choice then dictated a blade architecture that could support a relatively large number of gigabit Ethernet links per blade. That essentially eliminated Dell from the running, narrowing the choices to HP’s c-Class architecture and Sun’s 6048 chassis. HP got the nod, again due to Mary’s preference for its management software. Each blade would be a dual-socket, quad-core server with 24GB of RAM and six gigabit Ethernet ports. The IT team could increase the amount of RAM per blade later if the hosts became RAM-constrained, but this configuration seemed like a good starting point.
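
As a rough check on that RAM concern, here is a back-of-the-envelope sketch in Python. The ten-guests-per-host figure anticipates the 10:1 consolidation ratio discussed below, and the hypervisor overhead allowance is purely an assumption for illustration:

    RAM_PER_HOST_GB = 24
    GUESTS_PER_HOST = 10        # assumption, in line with the 10:1 ratio cited later
    HYPERVISOR_OVERHEAD_GB = 2  # assumed allowance for ESX and its service console

    ram_per_guest_gb = (RAM_PER_HOST_GB - HYPERVISOR_OVERHEAD_GB) / GUESTS_PER_HOST
    print(f"{ram_per_guest_gb:.1f} GB per guest")  # 2.2 GB -- fine for light loads, tight for heavier ones

At those numbers, a handful of memory-hungry guests landing on the same host would quickly justify the RAM upgrades Eric left room for.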

The network selection
The next issue to consider was what type of equipment Eric’s team might need to add to the network. Fergenschmeir’s network core consisted of a pair of older Cisco Catalyst 4503 switches, which aggregated all of the fiber from the network closets but didn’t provide enough copper port density to serve all of the servers in the datacenter. It was certainly not enough to dual-home all of the servers for redundancy. The previous year, someone had added an off-brand gigabit switch to take up the slack, and that obviously needed to go.

After reviewing some pricing and spec sheets, Eric decided to go with two stacks of Catalyst 3750E switches and push the still-serviceable 4503s out to the network edge. One pair of switches would reside in the telco room near the fiber terminations and perform core routing duties, while the other pair would sit down the hall and switch the server farm.

In an attempt to future-proof the design, Eric decided to get models that could support a pair of 10G links between the two stacks. These switches would ultimately cost almost as much as a single, highly redundant Catalyst 6500-series switch, but making a 6500 work would have meant either keeping the massive bundle of copper running from the telco room to the datacenter or extending the fiber drops into the datacenter. Neither prospect was appealing.

The total platform cost
All told, the virtualization hardware and software budget was hovering right around $300,000. That included about $110,000 in server hardware, $40,000 in network hardware, $100,000 in storage hardware, and about $50,000 in VMware licensing.

This budget was based on the independent consultant’s capacity planning report, which indicated that this server configuration would conservatively achieve a 10:1 consolidation ratio of virtual to physical servers, meaning eight physical servers could handle the 72 application servers needed. Adding some failover and growth capacity brought Eric up to nine virtualization hosts plus a management blade.
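
For readers following the sizing arithmetic, a minimal sketch in Python using only the figures cited above; the extra host for failover and growth mirrors the text’s description rather than any formal rule:

    import math

    APP_SERVERS = 72          # application servers identified in the capacity plan
    CONSOLIDATION_RATIO = 10  # conservative virtual-to-physical ratio from the consultant

    base_hosts = math.ceil(APP_SERVERS / CONSOLIDATION_RATIO)  # 8 hosts at 10:1
    hosts_purchased = base_hosts + 1                           # one more for failover and growth

    print(base_hosts, hosts_purchased)  # 8 9 -- plus a separate management blade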

This approach meant that each virtualized server -- including a completely redundant storage and core network infrastructure but excluding labor and software licensing costs -- would cost about $4,200. Given that an average commodity server generally costs somewhere between $5,000 and $6,000, this seemed like a good deal. When Eric factored in that commodity servers don’t offer any kind of non-application-specific high availability or load balancing capabilities, and are likely to sit more than 90 percent idle, it was an amazing deal.
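
To make the cost math explicit, a quick sketch in Python using only the figures quoted in this section (the budget line items, the 72 virtual servers, and the commodity-server price range); it illustrates the comparison rather than serving as a procurement model:

    BUDGET = {"servers": 110_000, "network": 40_000, "storage": 100_000, "vmware": 50_000}
    VIRTUAL_SERVERS = 72

    total = sum(BUDGET.values())           # 300,000
    cost_per_vm = total / VIRTUAL_SERVERS  # ~4,167 -- roughly the $4,200 cited
    commodity_range = (5_000, 6_000)       # typical standalone commodity server

    print(f"${total:,} total, ${cost_per_vm:,.0f} per virtualized server")
    print("cheaper than a dedicated box:", cost_per_vm < commodity_range[0])  # True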

Before they knew it, Eric and Brad had gotten Bob’s budget approval and were faxing out purchase orders.

The rest of the virtual virtualization case study
Introduction: The Fergenschmeir case study
Stage 1: Determining a rationale
Stage 2: Doing a reality check
Stage 3: Planning around capacity
Stage 5: Deploying the virtualized servers
Stage 6: Learning from the experience
