Boston-based ad agency Arnold Worldwide virtualized most of its servers five years ago. Chris Elam, senior systems engineer, remembers when he first started doing backups and noticed that backup throughput was dropping and backup windows were growing. Visibility tools on the firm's Dell Compellent SAN alerted Elam to the cause. He added more drives to increase I/O operations per second, and Compellent now spreads the data across the drives.
As an extra precaution, Arnold Worldwide's IT staff set most replications to take place during off-hours, except for those involving its production file servers, which it replicates during the day because data changes constantly. "That's an I/O hit we are willing to take," Elam says, adding that customer service is most important. "It's one thing if backups take longer; it's another thing if users start to complain [about slow systems]."
Performance is another important consideration in the I/O equation. "It's really important that administrators start to think about the I/O density and performance they need given the amount of infrastructure they have," Boles says. "Workload density has massively increased in the data center. Now you have 30 workloads in a single rack [running virtual servers]."
I/O density can be increased through the use of solid-state drives and similar technologies, more effective caching, or auto-tiering. And I/O demands will only grow as the enterprise consolidates more servers onto a single storage system. Scale-out technologies can help scale performance as well as capacity.
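To see why caching raises effective I/O density, consider a minimal sketch: a small, fast tier (an LRU cache standing in for SSD or controller cache) absorbs reads to "hot" blocks so the slow spinning tier sees far fewer operations. The workload skew, cache size, and data-set size below are illustrative assumptions, not measurements from any vendor.

```python
# Minimal sketch: a small LRU "fast tier" serving a skewed read
# workload. Most reads hit a hot subset that fits in the cache, so
# the slow disk tier handles only a fraction of the I/O.
from collections import OrderedDict
import random

random.seed(0)
CACHE_BLOCKS = 100           # fast tier holds 100 blocks (assumption)
DISK_BLOCKS = 10_000         # total data set (assumption)

cache = OrderedDict()        # LRU: most recently used at the end
hits = misses = 0

for _ in range(50_000):
    # Skewed access pattern: 90% of reads target 0.5% of the blocks.
    if random.random() < 0.9:
        block = random.randrange(DISK_BLOCKS // 200)   # hot set
    else:
        block = random.randrange(DISK_BLOCKS)          # cold reads
    if block in cache:
        hits += 1
        cache.move_to_end(block)
    else:
        misses += 1          # this read goes to the slow tier
        cache[block] = True
        if len(cache) > CACHE_BLOCKS:
            cache.popitem(last=False)   # evict least recently used

print(f"cache hit rate: {hits / (hits + misses):.0%}")
```

Because the hot set fits in the fast tier, the hit rate approaches the 90% skew of the workload; the same principle is what auto-tiering applies at coarser granularity over time.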
"Small and medium-size business customers can look at [tools from] Scale Computing, for example. The midrange customer could look at EqualLogic, and the enterprise could look at NetApp and 3Par," Boles says.
2. More complicated data backup and disaster recovery
More than a quarter (27 percent) of the respondents in the Computerworld poll said that server virtualization has complicated backup and disaster recovery.
One of the biggest mistakes here is trying to protect a virtual infrastructure with traditional backup methods, according to Boles. With traditional backup, "The degradation of backup performance is more than a linear degradation as you scale the number of virtual machines on a piece of hardware. You're effectively creating a blender for backup contention as you're trying to protect these virtual servers overnight. You try to do 10 backups simultaneously on this one physical server, and you've got a lot of combat going on inside that server for memory, CPU, network, and storage," he says.
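The super-linear degradation Boles describes can be illustrated with a back-of-the-envelope model. The numbers and the per-stream penalty below are assumptions for illustration only, not vendor data: each extra concurrent backup stream shaves effective host throughput, standing in for the contention over memory, CPU, network, and storage.

```python
# Back-of-the-envelope model of backup contention (all figures are
# illustrative assumptions): n concurrent jobs share one host, and
# each extra stream reduces effective throughput via seek thrash.
def backup_hours(n_jobs, data_gb_each=200, host_mb_s=400,
                 seek_penalty=0.15):
    """Wall-clock hours to finish n concurrent backup jobs.

    Effective throughput shrinks by `seek_penalty` for each extra
    concurrent stream -- a crude stand-in for I/O contention.
    """
    effective_mb_s = host_mb_s / (1 + seek_penalty * (n_jobs - 1))
    total_mb = n_jobs * data_gb_each * 1024
    return total_mb / effective_mb_s / 3600

for n in (1, 5, 10):
    print(f"{n:>2} concurrent jobs: {backup_hours(n):.1f} hours")
```

Under this model, ten simultaneous backups take more than ten times as long as one, which is the "more than linear" degradation the quote warns about; staggering jobs, or using virtualization-aware backup that reads from storage snapshots instead, avoids the pile-up.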
Complicating matters are workload mobility tools, such as VMware's Storage vMotion, that let users relocate virtual machine disk files between and across shared storage locations. "Now you have to keep a backup going in relation to these virtual servers that are going to be moving around, and possibly run into other bottlenecks. That can be a serious headache," says Boles.
The virtual desktop I/O dilemma
The virtual desktop I/O workload is tremendously punishing on a hard disk array. Although an individual workstation's I/O workload is largely sequential in nature, many IT departments run thousands of virtual desktops on a single storage platform, which creates the I/O "blender effect."
"They're all doing sequential I/O in different regions of the disk, which turns those easy-to-service sequential I/O patterns into a nasty, random I/O pattern as far as the array is concerned," explains James Candelaria, CTO at WhipTail Technologies, a maker of solid-state storage arrays.
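The blender effect Candelaria describes can be shown with a toy simulation (all sizes and counts are hypothetical): each desktop's read stream is perfectly sequential on its own, but once the streams are interleaved at the shared array, almost no request continues the previous one's offset.

```python
# Toy simulation of the I/O "blender effect": each desktop issues
# sequential 4 KB reads in its own region of the disk, but the array
# sees the streams interleaved, so the pattern looks random to it.
BLOCK = 4096
N_DESKTOPS = 32
IOS_PER_DESKTOP = 100

def sequential_fraction(offsets):
    """Fraction of requests that continue the previous offset."""
    seq = sum(1 for prev, cur in zip(offsets, offsets[1:])
              if cur == prev + BLOCK)
    return seq / (len(offsets) - 1)

# Each desktop's stream, viewed alone, is perfectly sequential.
streams = [[d * 10**9 + i * BLOCK for i in range(IOS_PER_DESKTOP)]
           for d in range(N_DESKTOPS)]
print(sequential_fraction(streams[0]))   # 1.0 -- one stream alone

# Round-robin interleaving, as the shared array would see it.
blended = [streams[d][i] for i in range(IOS_PER_DESKTOP)
           for d in range(N_DESKTOPS)]
print(sequential_fraction(blended))      # 0.0 -- looks random
```

The array cannot tell that each stream is sequential; it just sees back-to-back requests landing in distant disk regions, which is why spinning-disk arrays struggle and why solid-state tiers, with no seek penalty, absorb this workload so much better.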