The popularity of VDI (virtual desktop infrastructure) is growing as enterprises both large and small start to realize the operational and environmental benefits it can offer. Centralizing the desktop operating environment in the data center, though, can lead to unexpected challenges -- such as a wallop of transactional stress on expensive centralized network storage.
Failure to plan and test for the VDI load can result in poor performance and uncomfortable budget overruns. Here's a quick guide to provisioning the network storage necessary to do VDI right.
Before you start with any form of VDI deployment -- even a pilot program -- be absolutely sure to define a complete set of requirements. This will be vital as you deploy a pilot and start pushing it into production. (Most critically, do not allow the feature set of any particular VDI solution to drive your requirements.) If nothing else, your requirements list may serve as a litmus test that indicates VDI isn't a good solution for your users. If that happens, don't try to make the requirements fit the technology -- wait until the technology fits the requirements.
During your pilot phase, there are a lot of metrics you'll want to keep an eye on, so don't skimp on data collection. Aside from the obvious ones such as CPU and memory utilization (most hypervisors make these easy to track), keep a very close eye on how heavily your pilot users are taxing transactional storage resources. This information can be gathered in a number of ways: from within the guest operating system through Windows tools like perfmon, sometimes through the hypervisor, and -- most reliably -- from whatever SAN management tools your storage vendor offers.
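However you collect the numbers, the analysis step is simple aggregation. As a minimal sketch -- with entirely hypothetical sample data standing in for counters you might export from perfmon's "Disk Transfers/sec" -- per-user averages and the fleet-wide figure can be computed like this:

```python
import statistics

# Hypothetical per-user IOPS samples gathered during a pilot
# (stand-ins for exported perfmon "Disk Transfers/sec" readings).
pilot_samples = {
    "user01": [3, 7, 5, 12, 4],
    "user02": [6, 9, 2, 8, 5],
    "user03": [4, 3, 11, 6, 7],
}

# Average IOPS per user, then across the whole pilot group.
per_user_avg = {user: statistics.mean(s) for user, s in pilot_samples.items()}
fleet_avg = statistics.mean(per_user_avg.values())

# Peak single-sample IOPS -- worth tracking, since bursts
# (boot storms, AV scans) are what sink undersized arrays.
peak = max(max(s) for s in pilot_samples.values())

print(per_user_avg, round(fleet_avg, 1), peak)
```

Multiplying that fleet average by your planned user count gives the baseline transactional load you'll size storage against in the next step.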
Reviewing storage requirements
If your current environment includes desktop systems with large storage requirements -- say, for rich media -- a prospective switch to VDI may cause you to obsess over exactly how much network storage capacity your VDI deployment may require. Fair enough -- but don't lose sight of the transactional load, which is what will end up costing real money. Network storage capacity is fairly cheap, while IOPS (I/Os per second) are not, especially when combined with high-capacity requirements.
For example, let's say you're designing for a base of 500 users and want to gauge how much centralized storage will cost. If you're able to deploy using linked clones or other such space-saving technology, you might need around 2.5TB of disk space. If you can't, you might need as much as 7.5TB or more depending upon what operating system you're planning to use (keep in mind that a Windows 7 deployment will eat more than twice the disk space of a Windows XP deployment).
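The capacity math above is straightforward to parameterize. A minimal sketch, using the article's figures (500 users, roughly 5GB per linked-clone desktop versus roughly 15GB per full Windows 7 image -- both illustrative assumptions, not vendor guidance):

```python
def capacity_tb(users: int, gb_per_desktop: float) -> float:
    """Raw capacity needed in TB (using 1 TB = 1,000 GB for simplicity)."""
    return users * gb_per_desktop / 1000.0

# Linked clones at ~5 GB of unique data per desktop.
linked = capacity_tb(500, 5)

# Full clones at ~15 GB per Windows 7 desktop.
full = capacity_tb(500, 15)

print(linked, full)
```

Note this covers only the desktop images themselves; user profiles, swap, and snapshot overhead would all add to the total.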
Even given a worst-case capacity scenario, you may be tempted to go for an inexpensive shelf of high-capacity SATA disk. But if the data you collect from your pilot environment shows that each VDI user averages 5 IOPS over the course of the day -- roughly 2,500 IOPS across a full deployment -- that shelf of SATA will be woefully inadequate for the load. Instead, you may need a few shelves of higher-speed SAS disk to provide both the capacity and the transactional throughput you'll require.
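To see why a single SATA shelf falls short, divide the required IOPS by a per-spindle figure. The per-disk numbers below are common rules of thumb, not vendor specifications, and this sketch deliberately ignores RAID write penalties and controller cache, both of which matter in a real design:

```python
import math

def spindles_needed(total_iops: int, iops_per_disk: int) -> int:
    """Rough spindle count to satisfy a transactional load
    (ignores RAID write penalty and array cache)."""
    return math.ceil(total_iops / iops_per_disk)

TOTAL_IOPS = 2500  # 500 users x 5 IOPS, per the pilot data above

# Rule-of-thumb per-disk throughput (assumptions):
sata_disks = spindles_needed(TOTAL_IOPS, 80)   # ~7,200 RPM SATA
sas_disks = spindles_needed(TOTAL_IOPS, 175)   # ~15,000 RPM SAS

print(sata_disks, sas_disks)
```

Run with these assumptions, the SATA option lands north of 30 spindles -- more than a typical 12- to 24-bay shelf holds -- while the SAS option fits in one or two shelves, which is exactly the trade-off the pilot data should drive you to evaluate.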