Vendors offer three approaches to true storage virtualization: host-client (via software), in-fabric (mainly through appliances but soon also through switches), and in-array (embedded functionality). While the vendors tout specific pros and cons of each approach (see the infographic on page 39), analysts agree that they all deliver in the end. The determining factor is usually how well a particular approach fits into your existing storage infrastructure, says Gartner research director Stan Zaffos.
The in-fabric approach is the most common method, offered in products such as DataCore Software’s SANsymphony, EMC’s InVista, FalconStor’s IPStor, IBM’s SVC (SAN Volume Controller), NetApp’s V-Series, and StoreAge’s SVM. These products, which have been on the market for just a few years, use dedicated appliances or software running on a server to discover storage resources and then build the metadata that lets IT manage them as a virtual pool. Of these, IBM and NetApp have the largest installed bases (about 1,000 each), Zaffos notes.
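The metadata these appliances build is essentially a map from a single virtual address space to segments on the back-end arrays. The sketch below is purely illustrative (it does not reflect any vendor's actual implementation or API; all names are hypothetical): it shows how discovered LUNs from multiple arrays can be stitched into one pool, with a lookup that translates a virtual offset back to the physical array and LUN behind it.

```python
# Illustrative sketch only -- not any vendor's API. An in-fabric appliance
# discovers back-end LUNs and presents them as one virtual pool, keeping a
# metadata map from virtual extents to (array, lun, offset) segments.

class VirtualPool:
    def __init__(self):
        # Ordered metadata: each entry is (array, lun, size_gb).
        self.extents = []

    def discover(self, array, luns):
        """Register LUNs found on a back-end array."""
        for lun, size_gb in luns:
            self.extents.append((array, lun, size_gb))

    def capacity_gb(self):
        """Total pooled capacity across all arrays."""
        return sum(size for _, _, size in self.extents)

    def locate(self, offset_gb):
        """Translate a virtual offset into the backing array/LUN segment."""
        base = 0
        for array, lun, size in self.extents:
            if offset_gb < base + size:
                return array, lun, offset_gb - base
            base += size
        raise ValueError("offset beyond pool capacity")

pool = VirtualPool()
pool.discover("array-a", [("lun0", 500), ("lun1", 500)])
pool.discover("array-b", [("lun0", 1000)])
print(pool.capacity_gb())   # 2000
print(pool.locate(1200))    # ('array-b', 'lun0', 200)
```

Real products keep this mapping at a much finer granularity (extents or blocks, not whole LUNs), which is what lets them migrate data or rebalance load without the host noticing.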
Coming soon are switch-based products -- often deployed as blades within fabric switches -- that essentially do the same thing as a separate appliance. These will come from companies like Brocade, Cisco Systems, MaXXan Systems, McData, and QLogic. The theory is that putting the virtualization functionality in the switch makes operations more efficient, because data travels through one less device than it would if routed through a separate appliance, notes ESG's Garrett. He expects most of these ultimately to run a version of Symantec's Veritas Storage Foundation host-client software, although Symantec says it has no immediate plans to port its software to run on switches.
Storage Foundation has been around in various versions for a decade, running on file and application servers to detect storage resources and maintain the metadata used to manage them. Until recently, Veritas (now a division of Symantec) did not release its Unix and Windows versions in sync, so it was hard to use Storage Foundation in heterogeneous environments, Gartner’s Zaffos says. Still, he says, the technology is easy to use for many purposes, including data migration, load balancing, and flexible provisioning.
A third type of storage virtualization is exemplified by the TagmaStore network controller, from Hitachi Data Systems, which lets HDS's management software work with multiple vendors' storage systems as if they were one pool. Approximately 45 percent of the roughly 1,700 current TagmaStore customers implement its virtualization technology, says Claus Mikkelsen, HDS's chief scientist. Its key benefit, according to Zaffos, is that "you're not adding another element in the I/O path, so you're not buying another asset." Because it's usually cheaper to replace storage arrays than to pay for their annual maintenance, Zaffos expects TagmaStores to be used mainly to ease migration from old arrays.
Pricing for all three strategies is fairly equivalent, though that’s not immediately evident when comparing, say, a TagmaStore controller with a NetApp appliance or a Symantec software license. “The pricing variables are driven more by scale,” says IDC’s Villars. “For example, a TagmaStore is more expensive than IBM’s SVC, but you’re buying more.”
One potential gotcha: although virtualization promotes the idea of cross-vendor storage utilization, all three strategies impose vendor lock-in of their own. In-array products obviously lock you into a specific vendor's array hardware, says Mark Lewis, EMC's chief development officer, but in-fabric and host-client products lock you into the virtualization software or the appliance that embeds that software.