Virtualization is an easy sell: Who wouldn't want to turn underutilized physical servers into a humming little farm of virtual servers you can spin up or down at the drop of a hat? The dirty little secret of virtualization, however, is that to maximize effectiveness, you're usually best advised to deploy on new infrastructure specced for the purpose.
Whether you're looking for a single host server or heading into a fully virtualized infrastructure, a few rough guidelines can help ensure you buy no more or less than you need.
The more cores, the better
When you shop for any server, your buying decision usually begins with choosing CPUs. For virtualization hosts, the number of cores trumps the speed of each core almost every time. In many cases, you'd be stunned to know how many virtual servers you can squeeze onto a box running 1.7GHz cores as long as there are plenty of cores to be had.
If you have the budget to outfit your boxes with 2.93GHz Westmere chips, then by all means go for it. But you can get plenty of bang out of AMD Opteron 4000-series CPUs running anywhere from 1.7GHz to 2.2GHz per core at 6 cores per CPU. A few servers with two of those processors can take a medium-size virtualization framework surprisingly far.
The age-old axiom "the faster the CPU, the faster the server" holds true mainly for single-threaded, compute-intensive tasks. In normal server operations, CPUs often stay nearly idle for a significant portion of their operating cycles, and even when they're tasked, slowdowns in other subsystems can cause speedy CPUs to wait while data is being retrieved from disk, RAM, or the network. If the choice is between a 6-, 8-, or 12-core CPU at a lower clock speed and a 4- or 6-core CPU at a faster clock speed, always go with the higher core count.
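As a back-of-envelope sketch of why core count wins, consider the aggregate compute across a box. The helper and the host configurations below are hypothetical, and nominal GHz is only a rough stand-in for real throughput:

```python
# Rough comparison of many slower cores vs. fewer faster ones for parallel
# VM workloads. Numbers are illustrative examples, not benchmarks.

def aggregate_ghz(sockets, cores_per_socket, ghz_per_core):
    """Total nominal compute across all cores: sockets * cores * clock."""
    return sockets * cores_per_socket * ghz_per_core

# Hypothetical dual-socket hosts:
many_slow = aggregate_ghz(2, 6, 1.7)   # 12 cores at 1.7GHz ~= 20.4 GHz total
few_fast  = aggregate_ghz(2, 4, 2.93)  # 8 cores at 2.93GHz ~= 23.4 GHz total

# Raw GHz is close, but each core is also a scheduling slot: the 12-core box
# can run 12 vCPUs truly concurrently vs. 8, which matters more than per-core
# clock when dozens of mostly idle VMs share the host.
print(many_slow, few_fast)
```

The point of the sketch is that once VMs spend most of their time waiting on disk, RAM, or network, extra scheduling slots beat extra clock.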
Maxing out memory
When you price out virtualization hosts, pack as much RAM as you can afford into them. The amount of RAM is the biggest limiting factor in how many virtual servers you can run. Packing 64GB of RAM or more into a server with 12, 16, or 24 cores makes an awful lot of sense, even though RAM pricing jumps at the higher densities.
Yes, those 4GB and 8GB DIMMs are much more expensive than a pile of 2GB DIMMs, but you don't want to be forced to buy another physical server just to distribute a RAM load. Then you not only have to shell out for the new server, you have to shell out for additional licenses as well.
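To see why RAM is the ceiling, a quick density sketch helps. The helper below is hypothetical, and the overhead and per-VM figures are placeholders you'd swap for your own workload numbers:

```python
# RAM, not CPU, usually caps VM density. Plug in your own quotes and
# per-VM allocations; these figures are examples only.

def vms_per_host(host_ram_gb, hypervisor_overhead_gb, avg_vm_ram_gb):
    """How many average-size VMs fit after hypervisor overhead."""
    usable = host_ram_gb - hypervisor_overhead_gb
    return int(usable // avg_vm_ram_gb)

print(vms_per_host(64, 4, 4))  # 64GB host, 4GB/VM -> 15 VMs
print(vms_per_host(32, 4, 4))  # 32GB host, same VMs -> only 7 VMs
```

Doubling the RAM in one chassis roughly doubles VM density without adding a server, a hypervisor license, or another failure domain, which is why the pricier DIMMs usually pencil out.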
The flip side of that advice is that you should always have enough physical servers to survive the loss of a single server -- and ideally, the loss of several physical servers if the implementation is large enough. While modern servers are proving less likely to catch fire, it does still happen, and you need to be prepared in case of catastrophe.
You also absolutely need a suitable safety net for routine maintenance. If you cannot take a physical host offline for 15 minutes to replace a failed DIMM because the remaining servers cannot adequately handle the RAM or processing load caused by the loss of that server, you're in trouble, and you're losing out on one of the prime benefits of server virtualization: a reduction in scheduled downtime. Being forced to power down some number of virtual servers just to drop a physical host for maintenance is a bad idea. Consider N+1 a minimum; expanding beyond that is even better.
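That N+1 rule can be checked with simple arithmetic before you buy. The function below is a hypothetical sketch keyed on RAM only (the usual bottleneck), not tied to any hypervisor's admission-control API:

```python
# N+1 sanity check: can the surviving hosts absorb the cluster's VM RAM
# load if one (or more) host goes down? Hypothetical helper; real capacity
# planning should also weigh CPU, network, and storage headroom.

def survives_host_loss(host_ram_gb, num_hosts, total_vm_ram_gb, hosts_lost=1):
    """True if remaining hosts have enough RAM for all running VMs."""
    remaining_capacity = host_ram_gb * (num_hosts - hosts_lost)
    return total_vm_ram_gb <= remaining_capacity

# Four 64GB hosts carrying 180GB of VM RAM survive one failure (192GB left)...
print(survives_host_loss(64, 4, 180))                # True
# ...but not two (128GB left): plan beyond N+1 as the cluster grows.
print(survives_host_loss(64, 4, 180, hosts_lost=2))  # False
```

Run the check with the failure count you actually want to survive, not just one.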
Any realistic virtualization platform should be built on shared storage. Without it, each server is essentially a silo, and the VMs running on those siloed servers cannot be protected against physical server failure. Plus, building and expanding the virtualized infrastructure gets harder and more tedious without shared storage. In fact, unless we're talking about a very, very small virtualization build, shared storage isn't optional -- it's a hard-and-fast requirement.
To that end, make sure that your shared storage solution is as robust as possible. Whether you plan on using iSCSI, NFS, or Fibre Channel, take a good look at your disk I/O needs before you start buying switches, HBAs, and disk. In many cases, SATA drives are more than adequate for general-purpose server virtualization, and in some cases, NFS will outperform iSCSI for day-to-day computing needs. This may lead you in a different direction than your storage vendor wants you to go, but unless you're talking about a heavy transactional disk workload, you probably don't need to get the SSD or even SAS-based arrays to start with.
In fact, unless you're talking about pushing 10G to each server, the use of these speedier storage mechanisms may be pointless. And with the proliferation of cheap disk, don't stick with traditional RAID5; go with RAID6 or ideally RAID10 on your array. Yes, you'll be giving up space, but the performance and reliability of those choices make them worthwhile.
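To put the "giving up space" tradeoff in concrete terms, here's a rough usable-capacity sketch for the common RAID levels. The drive count and size are example figures, and the math ignores hot spares and controller overhead:

```python
# Usable capacity by RAID level -- simplified: no hot spares, no
# controller or filesystem overhead. Drive counts/sizes are examples.

def usable_tb(drives, drive_tb, level):
    """Approximate usable space for an array of identical drives."""
    if level == "raid5":
        return (drives - 1) * drive_tb   # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * drive_tb   # two drives' worth of parity
    if level == "raid10":
        return (drives // 2) * drive_tb  # mirrored pairs, then striped
    raise ValueError(f"unknown RAID level: {level}")

for level in ("raid5", "raid6", "raid10"):
    print(level, usable_tb(12, 2, level))  # 12 x 2TB drives
# raid5 -> 22TB, raid6 -> 20TB, raid10 -> 12TB
```

RAID10 costs roughly half the raw space, but with cheap high-capacity SATA drives, the faster rebuilds and extra fault tolerance are usually worth it.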
On the networking side of things, don't forget that it's far cheaper to aggregate multiple 1G copper links than it is to implement 10G, but 10G will give you monstrous growth potential. Just remember that it's simpler and possibly cheaper to upgrade those servers with 10G NICs later than to deal with a smaller number of wickedly fast servers that are overburdened with their virtual server loads. General-purpose virtual servers won't make much use of 10G for either normal service traffic or disk I/O, but highly transactional applications will, so try to strike a balance based on your needs.
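One wrinkle of link aggregation worth stating in numbers: bonding raises aggregate bandwidth, but most bonding hash policies pin any single flow to one member link. The sketch below is a hypothetical simplification of that behavior:

```python
# Bonded 1G links vs. a single 10G link -- simplified model. Real bonding
# behavior depends on the hash policy and traffic mix.

def bonded_gbps(links, gbps_per_link=1):
    """Aggregate bandwidth across many flows on a bonded group."""
    return links * gbps_per_link

def per_flow_gbps(links, gbps_per_link=1):
    """A single TCP flow typically hashes onto one member link."""
    return gbps_per_link

print(bonded_gbps(4))    # 4 Gb/s total across many flows
print(per_flow_gbps(4))  # but any one flow still tops out at 1 Gb/s
```

That per-flow ceiling is why a single busy iSCSI or vMotion stream benefits from 10G even when aggregate bonded bandwidth looks sufficient on paper.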
Last, remember that server virtualization condenses your infrastructure into fewer physical units, so the better equipped you are to deal with failure of any one of those components, the better off you are overall. And with all the money you'll be saving on power and cooling, adding that second storage array, and firing up replication just might fit into the budget after all -- and that can directly lead to fewer sleepless nights.
This story, "How to buy hardware for virtualization," was originally published at InfoWorld.com. Read more of Paul Venezia's The Deep End blog at InfoWorld.com.