Would you serve the same food to guests at your daughter’s wedding as you would at your four-year-old son’s birthday party? Unless you’re rich or a terrible money manager, your answer is probably no. So why are many of today’s budget-strapped storage managers forced to allocate the same level of high-performance storage, SAN bandwidth, and services to minor tasks that they use for mission-critical applications?
One reason is that the SAN devices, management software, and virtualization tools companies use to pool storage and share it among applications have no awareness of those applications’ differing needs. The result is often wasteful overkill and overspending.
“In order to make sure the elements are there to meet the capacity, performance, availability, and recoverability requirements of their most important data and applications, companies often have to overimplement,” says Mike Koclanes, co-founder and CTO of storage management vendor CreekPath. “The result is that they hit their required service level, but the costs are much too high.”
Hubert Yoshida, CTO of Hitachi Data Systems (HDS), agrees, placing some of the blame on his fellow storage vendors. “Until now, storage vendors have worked from the bottom up because we own the storage. But ultimately, the storage is there to serve an application. The time has now come to take a look from the application down.”
Approaching storage from the application’s point of view is exactly what ADIC, AppIQ, CreekPath, HDS, IBM, Maranti Networks, OuterBay, Veritas, and other storage management and hardware vendors have started doing. Maranti Networks calls this new approach “application-aware storage,” while HDS calls it “application-optimized storage” and AppIQ calls it “application-driven storage.” But each vendor has the same basic goal: to tailor storage and storage services to individual applications based on their particular performance, availability, recoverability, compliance requirements, and value to the organization.
One benefit of this approach is cost reduction: storage managers can take better advantage of less-expensive SCSI and Serial ATA storage and services for low-priority applications. Meanwhile, they maintain the highest levels of performance, capacity, and reliability for e-commerce and other mission-critical apps, all from the same storage pool.
Application-aware storage has something in common with an earlier concept, HSM (Hierarchical Storage Management). HSM also migrates data among storage tiers, but its actions are based mostly on the age of the data and the frequency of access; it takes little account of compliance requirements or the data’s actual value to the organization.
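The classic HSM decision described above can be sketched as a simple rule over age and access frequency. This is an illustrative sketch, not any vendor’s actual algorithm; the tier names and thresholds are hypothetical.

```python
from datetime import datetime, timedelta

def hsm_tier_for(last_access: datetime, access_count_30d: int, now: datetime) -> str:
    """Classic HSM placement: driven only by age and access frequency --
    no notion of compliance rules or business value enters the decision."""
    age = now - last_access
    if age < timedelta(days=30) or access_count_30d > 100:
        return "primary"   # fast disk
    if age < timedelta(days=365):
        return "nearline"  # cheaper disk
    return "tape"          # archival

# A file untouched for two years lands on tape, regardless of how
# valuable -- or how regulated -- its contents are.
hsm_tier_for(datetime(2003, 1, 1), 0, datetime(2005, 1, 1))  # -> "tape"
```

Note what is missing: a compliance-mandated record and a scratch file of the same age are treated identically, which is exactly the gap application-aware storage aims to close.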
By approaching storage intelligently, application awareness can provide a foundation for ILM (information lifecycle management), a strategy that’s widely trumpeted as the future of storage. ILM aims to define a set of policies and automated processes for provisioning, mirroring, replication, snapshotting, data migration, and retention based on the value of different types of information to the organization. ILM goes beyond the application, allowing policies based on awareness of data itself and even the information that data represents.
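An ILM policy of the kind described above can be pictured as a table keyed to the business value of an information class rather than to file age. The class names, tiers, and fields below are hypothetical, chosen only to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class IlmPolicy:
    """Lifecycle policy keyed to the business value of an information class."""
    storage_class: str    # tier used when provisioning new capacity
    replicate: bool       # mirror to a second site?
    snapshot_hours: int   # snapshot interval; 0 means never
    retention_years: int  # compliance-driven retention period

# Hypothetical policy table: the value of the information, not its age,
# determines provisioning, replication, snapshotting, and retention.
POLICIES = {
    "financial-records": IlmPolicy("platinum", replicate=True,  snapshot_hours=1, retention_years=7),
    "customer-orders":   IlmPolicy("gold",     replicate=True,  snapshot_hours=4, retention_years=3),
    "test-data":         IlmPolicy("bronze",   replicate=False, snapshot_hours=0, retention_years=0),
}

def policy_for(info_class: str) -> IlmPolicy:
    # Unclassified data falls back to the cheapest tier.
    return POLICIES.get(info_class, POLICIES["test-data"])
```

The same lookup could be attached to an automated engine that provisions, replicates, and eventually retires data without administrator intervention.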
Visibility and Beyond
As is true with most new technologies, the solutions for application awareness today are fragmented and take a number of different approaches. The approach you take, if you decide to dive in at such an early stage, depends on the level of automation and standardization you’re looking for and the types of applications you want to serve.
CreekPath and AppIQ take a standards-based approach, but concentrate more on visibility than automation. They argue that you can’t do true application-centric storage or ILM without knowing exactly how your applications interact with storage and the SAN, and lots of storage managers are just not ready to cede control to an automated solution. If application visibility across a heterogeneous SAN is a top goal, however, these can be excellent solutions.
CreekPath’s Koclanes says, “The key is to discover and understand each application’s entire storage supply chain … Then you can monitor that application and … work out inefficiencies and get some of the cost out of those service levels. Or you can see that you’re not as able to get the right performance out of the app as you thought and have to configure things differently.”
AppIQ takes a similar approach. “With AppIQ you can see exactly what applications will be impacted if an individual switch or array goes offline,” says Tom Rose, the company’s vice president of marketing.
AppIQ and CreekPath offer modules for specific applications — such as Oracle, Microsoft Exchange, and Sybase — in addition to an overall storage management platform. Both also provide some amount of automated provisioning based on service level policies. But their real strength is visibility.
If you’re willing to take the leap into an automated approach, products from ADIC, IBM, and Maranti Networks allow storage administrators to divide storage into tiers based on performance, reliability, and availability features, such as mirroring and fail-over. Applications or groups of applications can be assigned service levels that are matched to the appropriate storage tiers. Provisioning can then be automated based on application, and administrators can implement policy-based data replication and migration across storage tiers. ADIC and Maranti products also allow SAN bandwidth to be reserved for real-time and business-critical applications during periods of fabric congestion.
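The tier-matching step described above amounts to picking the cheapest storage tier that satisfies an application’s service level. A minimal sketch, with invented tier names and capability numbers, assuming tiers are listed cheapest first:

```python
# Each tier advertises its capabilities; applications state requirements.
TIERS = [  # ordered cheapest first
    {"name": "sata",        "iops": 2_000,  "mirrored": False},
    {"name": "scsi",        "iops": 10_000, "mirrored": False},
    {"name": "fc-mirrored", "iops": 50_000, "mirrored": True},
]

def assign_tier(required_iops: int, needs_mirroring: bool) -> str:
    """Pick the cheapest tier that meets the application's service level."""
    for tier in TIERS:
        if tier["iops"] >= required_iops and (tier["mirrored"] or not needs_mirroring):
            return tier["name"]
    raise ValueError("no tier satisfies this service level")

assign_tier(5_000, needs_mirroring=False)   # dev database  -> "scsi"
assign_tier(20_000, needs_mirroring=True)   # e-commerce    -> "fc-mirrored"
```

Real products layer policy-based replication and migration on top of this matching, but the core economics are the same: low-priority applications stop consuming platinum-grade resources.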
Aside from handling different functions, vendors also approach application awareness from different parts of what CreekPath calls the “storage supply chain.” Most of the above solutions are primarily host-based, but Maranti provides its services at the fabric and port level through its CoreSTOR network storage controllers. Proponents of fabric-centered services applaud these products’ fast performance and SAN visibility, but obviously you need to buy Maranti hardware to make them work.
HDS combines its host-based HiCommand storage management product, which uses AppIQ technology, with its TagmaStore Universal Storage Platform, a robust, high-performance array that can also provide virtualization and services to storage at the array level.
EMC has taken still another tack with its acquisition of Documentum, a file-based content management solution for handling unstructured content from a variety of applications. Documentum facilitates application of service levels and migration policies to file-based data at a very granular level, which is valuable for meeting compliance requirements.
Metadata is yet another piece of the puzzle. ADIC, EMC, IBM, Maranti, and other vendors use metadata and metadata servers as part of their solutions as a means to aggregate application-aware storage data.
“Let’s say I have a piece of data in three places: primary storage, replicated in your hot backup location, and on tape somewhere in a vault,” says Ray Dunn, industry standards manager in the network storage division of Sun Microsystems and chair of the Storage Network Industry Association (SNIA) Storage Management Forum. “[The metadata repository] knows all three and can help you find the data when you need it. It could also say the data needs a ‘platinum’ set of services applied to it. In order for this to work well, you have to have the whole SAN — the server, HBA, anything in the fabric, the storage device, and any virtualization objects — communicating and interoperable.”
The future of metadata management for application-aware storage systems may well be object-based storage. In this model, a metadata server acts as a repository and traffic cop but the metadata itself actually follows the data across the SAN as part of a data “object.” Ideally, the business application itself would also communicate with the rest of the SAN. “The applications have to tell us what they need,” HDS’s Yoshida says.
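Dunn’s three-copy scenario can be sketched as a metadata record that knows every copy of a piece of data and the service class attached to it. All identifiers and location names here are invented for illustration; no vendor’s actual schema is implied.

```python
from dataclasses import dataclass, field

@dataclass
class DataObject:
    """Metadata that travels with a data object across the SAN."""
    object_id: str
    service_class: str  # e.g. "platinum" -> full set of storage services
    copies: dict = field(default_factory=dict)  # location -> concrete identifier

# The metadata server acts as repository and traffic cop.
repo: dict[str, DataObject] = {}

obj = DataObject(
    "invoice-2004-881", "platinum",
    copies={"primary": "array-a/lun-12",
            "hot-backup": "array-b/lun-03",
            "vault": "tape-0045"},
)
repo[obj.object_id] = obj

def locate(object_id: str, prefer: str = "primary") -> str:
    """Resolve an object to a concrete copy, falling back to any known copy."""
    o = repo[object_id]
    return o.copies.get(prefer) or next(iter(o.copies.values()))
```

Because the record also carries the service class, the same lookup that finds the data can tell the fabric which services (mirroring, snapshots, bandwidth reservation) must follow it.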
This broad universe of application-aware storage solutions can be confusing, but it typifies the early stages of any new technology. Most vendors and analysts agree that application-aware storage would be most effective when applied across a heterogeneous SAN and to the entire data lifecycle, from creation to provisioning, migration, and disposal.
The most obvious place to start is with standards, which would provide a virtualization layer for developers to tap services without having to write to all the individual storage vendor APIs. SNIA has started the ball rolling with its SMI-S (Storage Management Initiative Specification), which covers discovery, monitoring, and some provisioning functions and ultimately seeks to encompass services and ILM (see “SMI-S: Order from chaos”).
However, even with standards in place, setting policies and service levels cannot be done in an IT vacuum. It requires planning, consulting, and cooperation among application owners, storage managers, and the business units whose business processes are affected — not an easy task.
Widespread adoption of these ideas will take several years, if it happens at all, but the seeds of application-aware storage exist today. The standards are still emerging and a vendor shakeout is inevitable. Depending on your needs, one of these solutions may help you increase efficiencies and drive down costs in your infrastructure while assuring the service levels your applications require.