Stage 3: Planning around capacity
After testing the server virtualization software to understand whether and where it met their performance requirements, Fergenschmeir’s IT leaders turned to detailed deployment planning. Infrastructure manager Eric Brown and CTO Brad Richter had two basic questions to answer: first, what server roles did they want; second, what could they actually virtualize?
Brad started the process by asking his teams to provide him with a list of every server-based application and the servers that they were installed on. From this, Eric developed a dependency tree that showed which servers and applications depended upon each other.
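A dependency map like Eric’s can be modeled as a simple directed graph. The sketch below is purely illustrative, with invented application and server names, but it shows how an inventory of app-to-server assignments can be inverted to reveal which servers the most applications depend on:

```python
from collections import defaultdict

# Hypothetical inventory: each application mapped to the servers it depends on.
# Names are invented for illustration; a real inventory would come from the
# application list Brad's teams assembled.
app_dependencies = {
    "payroll":    ["sql01", "web02"],
    "timesheets": ["sql01", "web03"],
    "crm":        ["sql01", "app04"],
    "helpdesk":   ["sql02"],
    "intranet":   ["sql02", "web02"],
}

# Invert the mapping to see how many applications ride on each server.
server_dependents = defaultdict(list)
for app, servers in app_dependencies.items():
    for server in servers:
        server_dependents[server].append(app)

# List servers by how many applications a failure would take down.
for server, apps in sorted(server_dependents.items(),
                           key=lambda kv: len(kv[1]), reverse=True):
    print(f"{server}: {len(apps)} dependent apps -> {', '.join(apps)}")
```

Running an analysis like this over the real inventory is what surfaces the kind of concentration Eric found next.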
Assessing server roles
As the dependency tree was fleshed out, it became clear to Eric that they wouldn’t want to retain the same application-to-server assignments they had been using. Out of the 60 or so servers in the datacenter, four were directly responsible for the continued operation of about 20 applications. This was mostly due to a few SQL Server machines that had been used as dumping grounds for the databases of many different applications, sometimes forcing an application to run against a newer or older version of SQL Server than it supported.
Furthermore, there were risky dependencies in place: five important applications, for example, were all installed on the same server. At the other extreme, Eric and Brad discovered significant inefficiencies, such as five separate servers all redundantly serving departmental file shares.
Eric decided that the virtualized deployment needed to avoid these flaws, so the new architecture had to eliminate unnecessary redundancy while also distributing mission-critical apps across physical servers to limit the impact of any single server’s failure. That meant a jump from 60 servers to 72 and a commensurate increase in server licenses.
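The distribution rule can be expressed as a simple anti-affinity check. The sketch below (all names invented, and not a tool the Fergenschmeir team is described as using) flags any physical host that would carry more than one mission-critical application:

```python
# Illustrative anti-affinity check: no two mission-critical apps
# should land on the same physical host.
placement = {
    "payroll":  "host1",
    "crm":      "host2",
    "email":    "host1",   # violation: shares host1 with payroll
    "helpdesk": "host3",   # non-critical, ignored by the check
}
critical = {"payroll", "crm", "email"}

hosts_seen = {}
for app, host in placement.items():
    if app not in critical:
        continue
    if host in hosts_seen:
        print(f"Violation: {app} and {hosts_seen[host]} both on {host}")
    else:
        hosts_seen[host] = app
```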
Determining virtualization candidates
With the architecture now determined, Eric had to figure out what could be deployed through virtualization and what should stay on physical hardware. Answering that question proved more difficult than he initially expected.
One key question was the load on each server, a major determinant of how many physical virtualization hosts would be needed. Obviously, it made no sense to virtualize an application load that was already making full use of its hardware. The initial testing showed that the VMware hypervisor consumed about 10 percent of a host server’s raw performance, so the real capacity of any virtualized host was 90 percent of its dedicated, unvirtualized counterpart. Any application whose utilization exceeded 90 percent would likely see performance degradation and would offer no consolidation benefit anyway.
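The arithmetic behind that cutoff is straightforward to make explicit. A short sketch, using the 10 percent overhead figure from the testing and made-up utilization numbers:

```python
HYPERVISOR_OVERHEAD = 0.10              # from Fergenschmeir's initial testing
effective_capacity = 1.0 - HYPERVISOR_OVERHEAD  # 90% of the physical box

# Hypothetical peak CPU utilization per server, as a fraction of its own hardware.
server_utilization = {
    "web02": 0.25,
    "sql01": 0.93,   # already beyond what a virtualized host could supply
    "app04": 0.40,
}

for server, util in server_utilization.items():
    if util > effective_capacity:
        print(f"{server}: keep physical ({util:.0%} > {effective_capacity:.0%})")
    else:
        headroom = effective_capacity - util
        print(f"{server}: virtualization candidate ({headroom:.0%} headroom)")
```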
But getting those utilization figures was not easy. Perfmon on a Windows box, or a tool such as sar on a Linux box, could easily show how busy a given server was within its own microcosm, but it wasn’t as easy to express how one server’s microcosm related to another’s.
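One common way to make those per-box percentages comparable, though not a method the case study says Eric used, is to weight each utilization figure by the host’s raw compute capacity. The normalization below (cores times clock speed) is a deliberate simplification, and all the numbers are invented:

```python
# Hedged sketch: converting per-server utilization percentages into a rough
# absolute demand figure so heterogeneous boxes can be compared and summed.
servers = [
    # (name, cpu_utilization, core_count, clock_ghz)
    ("web02", 0.25, 2, 2.8),
    ("sql01", 0.60, 8, 3.0),
    ("app04", 0.40, 4, 2.4),
]

total_demand = 0.0
for name, util, cores, ghz in servers:
    absolute_load = util * cores * ghz  # rough "GHz consumed" estimate
    total_demand += absolute_load
    print(f"{name}: {util:.0%} busy locally ~= {absolute_load:.1f} GHz of demand")

print(f"Combined demand to plan hosts against: {total_demand:.1f} GHz")
```

Summing the normalized figures gives a first approximation of how much aggregate capacity the virtualization hosts must supply, which is exactly the number the per-box percentages could not provide on their own.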