Managing transportation logistics is all about handling scale. As transportation management services firm Transplace added consumer-facing companies such as Del Monte, Office Depot, Home Depot, AutoZone, and DirecTV as customers, it needed to quickly bring server capacity online. Already planning a hardware refresh to support continued growth, CTO Vince Biddlecombe decided to bring in server virtualization at the same time so that he'd have a more scalable, flexible platform for that anticipated growth.
Biddlecombe uses EMC VMware on his Sun servers mainly to support Oracle databases and J2EE applications on BEA's WebLogic application server. He liked the fact that the technology was operating system- and hardware-agnostic, as his datacenter supports a range of technologies. And he found the virtualization approach to be less complex than, for example, running his Oracle databases on an IBM Power6-based System p 570 server, which he had considered initially. "We have all 12 instances sit on two logical partitions. We can configure the CPU and memory for each, and share when they are not busy," Biddlecombe says.
But adopting virtualization required a deeper investment in datacenter hardware, Biddlecombe discovered, because virtualization changes the fundamental character of what is being supported. "You look at it as a pool of memory and CPUs. You just drop a couple VMs to increase capacity," he says. But treating servers as instances of capacity from a large pool of resources means that pool has to be kept available and running well.
To do so, Biddlecombe discovered he needed to beef up the network I/O capabilities of the physical servers to avoid network traffic contention across the VMs sharing those ports. "We put more NICs into them, since there is more network activity going on," he says.
He also discovered he needed to beef up his SAN storage to support all the VMs that were quickly created and to provide sufficient capacity for failover. "You can very quickly suck up 500GB or 1TB of storage for the VMs," Biddlecombe says. Although you no longer need storage in the servers themselves, you need more storage in the SAN, and that storage tends to be costlier. The more fine-grained your servers, the more storage you need, Biddlecombe notes. He likes to have more, smaller virtual servers so that each has a specific purpose that lets it be added or deleted easily without worrying about other applications that could be affected. But with 20GB of overhead for each virtual server, that approach can quickly eat up storage.
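The per-VM overhead adds up quickly; a back-of-the-envelope sketch makes the point. The ~20GB-per-VM figure is Biddlecombe's; the VM counts below are illustrative, not Transplace's actual layout.

```python
# Rough SAN sizing for VM image overhead alone, before any
# application data. 20 GB per VM is the figure cited in the
# article; the VM counts are hypothetical examples.
PER_VM_OVERHEAD_GB = 20

def san_overhead_gb(vm_count: int, per_vm_gb: int = PER_VM_OVERHEAD_GB) -> int:
    """Total SAN space consumed by VM images for a given VM count."""
    return vm_count * per_vm_gb

# 25 small, single-purpose VMs already consume 500 GB;
# 50 of them consume a full terabyte -- matching the range
# Biddlecombe cites.
print(san_overhead_gb(25))  # 500
print(san_overhead_gb(50))  # 1000
```

The math is why his preference for many small, single-purpose VMs trades administrative cleanliness directly against SAN capacity.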
Altogether, the use of virtualization roughly doubles the cost of a physical server, Biddlecombe says. He spends about $12,000 for a Dell server with extra NICs, plus $6,000 for VMware, and $5,000 for the associated enterprise storage. Because he had already planned to replace all his servers, the requirement that every physical server use CPUs from the same family caused no additional sticker shock; for most enterprises, though, that requirement would mean a complete server refresh. "You need to commit to building that infrastructure," he says.
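The "roughly doubles" claim follows directly from those figures. A minimal sketch, using the article's numbers and assuming a bare, non-virtualized server as the baseline (the baseline figure is an assumption for illustration, not from the article):

```python
# Per-server cost breakdown as Biddlecombe describes it.
# Line-item figures are from the article; BARE_SERVER is an
# assumed baseline used only to show the roughly-2x ratio.
SERVER_WITH_NICS = 12_000   # Dell server plus extra NICs
VMWARE_LICENSE   = 6_000    # VMware licensing per server
SAN_STORAGE      = 5_000    # associated enterprise SAN storage

total = SERVER_WITH_NICS + VMWARE_LICENSE + SAN_STORAGE
print(total)  # 23000

BARE_SERVER = 11_000        # hypothetical non-virtualized baseline
print(round(total / BARE_SERVER, 1))  # about 2x the baseline
```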
Biddlecombe also notes that the CPU-family restriction will limit how he can deploy virtualization in the future, and force him to forgo incremental technology improvements until the next server refresh. He's already been forced to do so: "We'll stick with our Intel servers even though the AMD [Barcelona-class Opteron] servers look to be better for memory management." One way he will work around that limitation in the future is to put physical servers into one of two clusters, one inside the DMZ and one outside. That way, he can stage his hardware refreshes by cluster, rather than do everything at once.
Ultimately, Biddlecombe sees the use of virtualization as cost-neutral when it comes to the hardware side. "But you get a large savings in administration," he says. He has about 120 virtual servers spread across 30 physical servers, administered by just two people.
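Those closing numbers imply the ratios that drive the administrative savings; a quick sketch from the article's figures:

```python
# Consolidation and staffing ratios from the numbers cited
# in the article (120 VMs, 30 hosts, 2 administrators).
VIRTUAL_SERVERS  = 120
PHYSICAL_SERVERS = 30
ADMINS           = 2

consolidation_ratio = VIRTUAL_SERVERS / PHYSICAL_SERVERS  # VMs per host
vms_per_admin       = VIRTUAL_SERVERS / ADMINS            # VMs per admin

print(consolidation_ratio)  # 4.0
print(vms_per_admin)        # 60.0
```

A 4:1 consolidation ratio is modest by later standards, but 60 virtual servers per administrator is the figure that makes the hardware spend cost-neutral overall.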