Where Virtualization Works Today
ESG’s Garrett says his research shows that storage virtualization, applied to environments with at least six storage fabrics, reduces costs in several areas: hardware costs drop 23.8 percent on average, software costs drop 16.2 percent, and administration costs drop 19.3 percent.
Once an enterprise has deployed storage virtualization, the technology is “relatively easy to use,” Garrett says. The real effort lies in getting up and running. So Garrett recommends that IT focus on a specific tactical issue, such as getting nondisruptive data migration in place. If you apply storage virtualization to that specific issue, he says, “then you can extend into the other stuff as you get more experienced.”
That’s exactly the approach taken at the Baylor College of Medicine, in Houston, Texas. Two years ago, the college decided to integrate dozens of file servers and ERP stores attached to Unix and Windows servers in order to reduce unused storage capacity and lower administration costs. Despite the initial expense, Baylor decided to replace its storage devices with a single FC (Fibre Channel) storage fabric and a set of HDS arrays, recalls Mike Layton, director of IT for enterprise services and mainframe systems. Not having a heterogeneous environment to support -- “a luxury,” he says -- made the decision to deploy storage virtualization fairly safe.
Today, the Baylor system manages 200TB of data, including patient records and university operations data. At the time, HDS had not yet released its TagmaStore array, so Layton deployed NetApp V-Series appliances instead. Baylor uses storage virtualization mainly to pool storage resources, although the college is also considering using the technology for data lifecycle management, in which patient data remains highly available during treatment but is later moved to lower-tier systems for analysis, auditing, or other needs.
Dallas-Fort Worth International Airport had a different problem. It stored flight data (such as passenger lists, arrival times, baggage tracking, and gate information) in two SANs using Oracle RAC (Real Application Clusters). Oracle RAC could treat one storage target as the primary and replicate to secondary systems, but that process simply took too long, recalls John Parrish, associate vice president of terminal technology. If one terminal’s SAN goes down, the other SAN has to step in immediately so flight boarding and baggage handling aren’t delayed. DataCore’s SANsymphony appliance made Oracle RAC think it was working with just one SAN, and Parrish has seen no latency issues crop up in this deployment.
Replication was also a problem for Freeze.com. The online retailer needed to keep its 400GB Microsoft SQL Server transaction databases in sync with its reporting databases, but SQL Server’s resource demands prevented the reporting tools from running against the same database that handled transactions, recalls Freeze.com IT director Kyle Ohme. He had been mirroring the database periodically, but replication took so long that the reporting database lagged hours behind, preventing the kind of analysis needed to manage supplies properly. Ohme deployed FalconStor tools to pool the storage into a virtual volume that both sets of applications can access in real time. That way, he could send snapshots of the transaction database to the reporting tools rather than replicating the entire database.
A Long-Term Effort