Will virtual desktop infrastructure (VDI) finally grant IT the control over user desktops that many crave? The previous attempt to put the PC genie back in the bottle and give IT administrators full control over users' desktops -- thin clients -- didn't work out, as the cost was the same as managing regular PCs. Also, thin client technologies such as Citrix MetaFrame and Microsoft Terminal Services simply were not up to the task of providing hundreds of users with an acceptable desktop experience.
VDI promises to overcome the weaknesses of thin client computing by combining virtualization with remote computing technologies, so users get their normal desktop experience while the application incompatibilities, generic user experiences, and reliability issues of thin clients go away.
But VDI is hardly new, and so far it has not lived up to that potential. Part of the reason for its slow uptake is that many IT administrators are a little gun-shy as a result of the thin client experience. After all, no administrator wants to introduce another technology that will not deliver on its promises of lower costs, easier management, and an acceptable end-user computing experience.
VDI does have one distinct advantage: The technology is real, a fact proven by the thousands of deployments that have already occurred. However, many of those pilot programs and deployments have not replaced the users' desktops; instead, most VDI deployments have been used for internal prototyping, testing, and validation chores by IT staffers themselves. That's created a perception among many IT admins that VDI is not ready for prime time.
Is that a mistaken judgment? Here's what IT needs to know about VDI technology today to decide if the promise is real, and if so whether it's worth the cost.
VDI in the real world: No off-the-shelf, simple answer
Determining the viability of VDI is a complicated task, simply because of the number of products available, the multitude of usage scenarios, and the heterogeneous nature of the software and equipment involved in creating a VDI deployment.
Regrettably, VDI is not available as an off-the-shelf solution. You'll likely have to integrate several products from several vendors, and each has its own nuances. Further complicating the decision process is VDI's availability in an almost Baskin-Robbins-like variety of flavors.
I have deployed several VDI solutions in test scenarios and experienced firsthand some of the integration challenges presented by today's technologies, the first of which is determining how well the technologies work together to build a workable solution. That means you'll have to carefully research the various components before combining them.
That said, there are still common components needed for any VDI deployment, including:
- A virtualization platform (such as Microsoft's Hyper-V or VMware's ESX Server)
- A communications protocol (such as RDP or ICA)
- A virtual management platform to provision and manage pools of virtual machines
- A session broker to assign users to VMs and maintain connections (a minimal sketch of this role follows the list)
- A client device (such as a thin client, a zero client, a PC running a thin client, or a PC running a compatible browser)
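To make the broker's role concrete, here's a deliberately minimal, in-memory sketch of what a session broker does: assign users to free VMs from a pool and reattach returning users to their existing sessions. The class and method names are my own illustrations, not any vendor's API, and a real broker would add authentication, persistence, and VM health checks.

```python
# Minimal session broker sketch: assigns users to pooled VMs and
# reconnects returning users to their existing sessions. Illustrative
# only -- real brokers add authentication, persistence, health checks.

class SessionBroker:
    def __init__(self, vm_pool):
        self.free_vms = list(vm_pool)   # e.g., ["vm-01", "vm-02", ...]
        self.sessions = {}              # user -> assigned VM

    def connect(self, user):
        # Reattaching a returning user to the same VM is what keeps a
        # session alive across an interrupted connection.
        if user in self.sessions:
            return self.sessions[user]
        if not self.free_vms:
            raise RuntimeError("VM pool exhausted -- provision more VMs")
        vm = self.free_vms.pop()
        self.sessions[user] = vm
        return vm

    def disconnect(self, user, recycle=True):
        vm = self.sessions.pop(user, None)
        if vm and recycle:
            self.free_vms.append(vm)

broker = SessionBroker(["vm-01", "vm-02", "vm-03"])
print(broker.connect("alice"))   # assigns a free VM
print(broker.connect("alice"))   # same VM on reconnect
```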
What deters most administrators is the complex ecosystem of VDI elements, which often come from a variety of vendors. That adds to the overall complexity of deploying and managing a VDI solution. Administrators also have the option of adding other components, such as application virtualization (which speeds the deployment of applications to virtual machines), as well as profile and data redirection technologies (which help to synchronize sessions and redirect users to the proper virtual machines if a session is interrupted).
I've found that application virtualization products, such as VMware ThinApp, can simplify provisioning of new virtual desktops by autoinstalling line-of-business applications for the user.
Further muddying the waters is the fact that hypervisors come in two flavors: Type 1 and Type 2. A Type 1 hypervisor runs as the actual machine's operating system; in other words, the hypervisor is loaded as part of the software boot process, and the virtual machine then launches to run its virtualized desktop, including the desktop OS. A Type 2 hypervisor runs as an application installed on the desktop's native operating system, so another layer of software is added to the mix. PCs and servers equipped with the latest virtualization-aware CPUs can run Type 1 virtualization, as well as Type 2 if a compatible operating system is installed. A good example of a Type 1 virtualization product is VMware's ESX Server, and Microsoft Virtual PC is a good example of a Type 2 hypervisor.
Thin clients and zero clients cannot run hypervisors at all; they lack the native processing power and hardware. Thus, they rely completely on a server to physically run any software or applications.
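One practical consequence of the Type 1/Type 2 split: before pushing a client hypervisor to an endpoint, you need to confirm that its CPU actually exposes hardware virtualization extensions. Here's a minimal check for Linux endpoints, relying on the standard /proc/cpuinfo flags (vmx for Intel VT-x, svm for AMD-V):

```python
# Linux-only check for hardware virtualization support, which a Type 1
# client hypervisor needs on the endpoint. A minimal sketch; production
# tools should also verify the feature is enabled in the BIOS.

def cpu_supports_virtualization(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    if cpu_supports_virtualization():
        print("Endpoint can host a Type 1 client hypervisor")
    else:
        print("Endpoint is limited to Type 2 or server-hosted VDI")
```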
The endpoint is critical to your VDI strategy
Although VDI is all about moving the desktop back into the data center, the endpoint still plays a significant role in determining how to deploy VDI. Before wandering down the path to VDI nirvana, administrators need to do a little due diligence.
VDI's complexity doesn't end with the variety of endpoints supported, which include thin clients, zero clients, PCs running a thin client, and PCs running a compatible browser. A deployment also often has to support both connected and disconnected clients, as well as work with remote clients whose connection speeds and quality can vary greatly.
However, I question the need to include support for disconnected clients, simply because without persistent access to the corporate servers, you no longer have access to corporate databases and client/server applications. In that case, it makes little sense to offer a virtual desktop, as productivity is bound to take a significant hit. Users who work in environments without persistent connections are better served by traditional portable computers, with local operating systems and applications.
VDI can be delivered to an endpoint in two fashions:
- Via a persistent connection, where all activity takes place in the data center and just I/O is sent and received from the endpoint
- By delivering a virtual desktop to the endpoint, which runs locally and is then synchronized with a virtual hard drive stored in the data center, often called offline or disconnected mode; a PC with a virtualization-aware processor is required
Determining what endpoints to support and whether to support disconnected devices is the most critical decision that an IT administrator makes. Those choices determine the VDI strategy that needs to be put in place. I've found that the more traditional deployment of running the virtual desktop in the data center and only having to transmit I/O to the endpoint is easier to deploy and manage. It's also the only method that will work with thin clients or zero clients.
In my experience, VDI that supports disconnected users is vastly more complex to configure, deploy, and manage than VDI that uses a persistent connection. Some of the challenges for supporting disconnected users include the following issues:
- Validating the connecting user
- Validating the hardware (and software) capabilities of the endpoint
- Providing the mechanism to deliver the virtual hard drive to the remote endpoint
- Providing the mechanism to deliver a hypervisor to the remote endpoint
- Managing the client itself
- Managing the active virtual session
- Synchronizing the virtual hard drive between the endpoint and the data center (see the sketch after this list)
- Supporting disconnected applications (client/server versus local applications)
- Securing the endpoint and the active virtual desktop
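To see why the synchronization item alone is daunting, consider how such tools commonly work: the virtual hard drive is carved into fixed-size blocks, each block is hashed, and only the blocks whose hashes differ from the data center's copy are shipped over the wire. Here's a minimal sketch of that comparison, with the block size and names as my own illustrative choices:

```python
# Sketch of block-level virtual disk sync: hash fixed-size blocks and
# ship only the ones that changed. Illustrative; real products add
# compression, resumable transfers, and conflict handling.

import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4MB blocks, an arbitrary granularity

def block_digests(path):
    """Return {block_index: sha256_hex} for a virtual disk file."""
    digests = {}
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digests[index] = hashlib.sha256(block).hexdigest()
            index += 1
    return digests

def changed_blocks(local_path, server_digests):
    """Return indexes of blocks the endpoint must upload to the data center."""
    local = block_digests(local_path)
    return [i for i, d in local.items() if server_digests.get(i) != d]
```

Even with this delta approach, a freshly provisioned desktop has no unchanged blocks to skip, which is why initial provisioning is so painful over typical broadband links.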
By contrast, the checklist for a persistently connected endpoint is much shorter:
- Validating the user
- Validating and securing the connection
- Validating the endpoint's software environment
- Validating or installing the thin client software
- Managing the connection
Note that for VDI to work effectively, data centers need to provide costly high-bandwidth, low-latency links to the client devices on their own networks and ensure that offsite users have similar-quality broadband or private network links.
If supporting disconnected users is required, that's a clue that VDI may be the wrong technology choice. It may be best to provide those users with a laptop, netbook, or tablet, and skip the VDI route completely. Perhaps the biggest problem with supporting disconnected users is time: how long it takes to provision the virtual desktop and to synchronize data between the client device and the data center, given the huge amounts of bandwidth needed to make the technology palatable. I've tested a few synchronization solutions over a fast asymmetric broadband connection (30Mbps upload and 10Mbps download), and I've seen that the initial provisioning of the virtual desktop to a client PC took upward of 20 minutes. That alone may be reason enough to skip disconnected VDI.
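The arithmetic behind that wait is worth doing before any pilot: transfer time is simply image size divided by effective bandwidth. As a hedged example, assuming a hypothetical 4GB base image pushed over the 30Mbps uplink described above:

```python
# Back-of-the-envelope provisioning time: image size / effective bandwidth.
# The 4GB image is a hypothetical example; real transfers run below line
# rate, so the 80% efficiency factor is itself an optimistic assumption.

def transfer_minutes(image_gb, link_mbps, efficiency=0.8):
    """Estimate minutes to move an image over a link at ~80% efficiency."""
    bits = image_gb * 8 * 1000**3            # GB -> bits (decimal units)
    seconds = bits / (link_mbps * 10**6 * efficiency)
    return seconds / 60

print(f"{transfer_minutes(4, 30):.0f} minutes")   # roughly 22 minutes
```

That result lines up with the 20-odd minutes I saw in testing, and it scales linearly: the same image over a 10Mbps link takes more than an hour.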
Caution: Difficulty ahead in several areas
Regardless of the path chosen, you should still expect to encounter some difficulties while implementing VDI.
Most of those difficulties center on the integration of the various components. For example, a certain version of a connection broker may not support a particular virtualization platform. Some products work only with specific hypervisors. Case in point: VMware View 4.0, which works only with VMware's own platforms. If you're looking to use Hyper-V as a virtualization platform, VMware View 4.0 is not an option.
Another common problem is troubleshooting display protocols and the associated network infrastructure. Display protocols, which encapsulate all I/O between the client device and the virtual machine, can be bandwidth-intensive and influenced by network latency. Tracking down the cause of those issues requires advanced network diagnostic tools and, in some cases, additional products to shape network traffic.
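When chasing such problems, a crude first step -- before reaching for heavyweight diagnostic gear -- is to measure round-trip latency to the display-protocol endpoint itself. Here's a minimal sketch using RDP's standard port 3389; the host name is a placeholder, and this measures only TCP connect time, not full protocol performance:

```python
# Crude latency probe to a display-protocol endpoint. Measures TCP
# connect time only, but quickly flags the high-latency links that
# make RDP and ICA sessions feel sluggish. Host name is a placeholder.

import socket
import time

def connect_latency_ms(host, port=3389, timeout=3.0):
    """Return TCP connect round-trip time to host:port in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000

# Example with a placeholder broker name:
# print(f"{connect_latency_ms('vdi-broker.example.com'):.1f} ms")
```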
Other complications come in the form of troubleshooting user complaints related to performance and usability. Tools to diagnose such problems and fully support users are only just starting to come onto the market. Recently, I tested management products from SolarWinds and Ipswitch that eliminate many of the management issues; however, these products add significant costs to a VDI deployment.
Many of the VDI products on the market could be improved by incorporating troubleshooting and management capabilities directly into the offering and reducing the need for third-party applications.
One thing is certain: VDI places significant loads on your network infrastructure. If you have limited bandwidth and high-latency connections, problems with performance and reliability are sure to rear their ugly heads.
VDI's costs are coming down, just as better VDI tech appears
Beyond the technical challenges is an often unstated fact: VDI can be expensive, simply because of its high requirements for server and network resources. A November 2008 Forrester Research report estimated that each VDI user would cost an organization $1,760, covering the thin client software, server, storage, and licenses for virtualization software, the desktop OS, and applications.
However, prices have dropped by about half since then, to $900 per user, says Natalie Lambert, the Forrester analyst who wrote the report (and is now a Citrix employee). As the technology continues to mature and prices decline, VDI should become economically and technologically viable for more and more businesses.
For example, Microsoft is changing its software licensing to simplify VM provisioning, and VMware is set to launch a new version of VMware View that will natively support offline mode. Citrix, meanwhile, is pushing further into the desktop virtualization space with new client hypervisors. Products from companies such as MokaFive and Wanova have arrived on the market, allowing administrators to fully manage, secure, update, and sync Type 2 client virtual machines on remote clients. Other companies, such as Leostream and Ericom, are creating connection brokers that are hypervisor-agnostic, allowing administrators to combine various virtualization platforms to deliver VMs using the best technology for the particular need.
These advances have led to startups that offer cloud-based desktop provisioning (with the ungainly acronym DaaS, for "desktop as a service") to small businesses, effectively eliminating the need to purchase software, servers, and the other elements that make up a traditional small-business IT department. That same logic is starting to affect enterprise culture, where CTOs are now examining the DaaS concept to bring fully managed virtual machines into the enterprise, eliminating much of the support and management burden associated with traditional desktop PCs. Desktone also offers DaaS as a "private" cloud offering that IT can deploy in its own data center.
Several vendors are looking to provide preconfigured VDI systems. For example, Pano Logic's $24,450 Pano Express is a 50-user turnkey VDI system that bundles 50 Pano client devices, a high-performance server, 50 Microsoft Virtual Enterprise Centralized Desktop (VECD) licenses, and VMware vSphere Essentials -- a cost of $489 per client if all 50 seats are used. Citrix and thin client maker Wyse Technology have partnered to deliver a zero client device aimed squarely at delivering VDI as a simple-to-deploy-and-manage concept. Thin client maker NComputing is pursuing a similar path by partnering with VMware.
As such efforts continue, it is only a matter of time before enterprises can broadly embrace VDI technology. The question is when, not whether.
This article, "The unvarnished truth about VDI desktop virtualization," was originally published at InfoWorld.com. Follow the latest developments in virtualization and Windows at InfoWorld.com.