The evolution of tech is all about replacing manual tasks with automation -- which in turn makes possible work that was impractical or impossible before. You're suddenly freed from mind-numbing manual configuration, only to face a fresh load of complexity as you confront shiny new buttons to push.
That's what has happened to storage. Today we have a vast array of options for host-to-storage connectivity -- each with its own pros and cons and constantly changing best practices -- and an equally wide range of on-array performance and capacity optimization software features. While deploying new storage may take a fraction of the time it once did, knowing how best to deploy it in the first place often requires a lot more thought.
To get the most out of modern enterprise storage, at a minimum you need to stay on top of four areas: monitoring, benchmarking, application characteristics, and a shortlist of general best practices. Some are as old as storage itself, while others come courtesy of the increasing complexity of modern enterprise storage tech.
The No. 1 item any storage admin needs to know is how to monitor storage. Monitoring both capacity and performance allows you to foresee impending problems before they rise to a level anyone will notice -- and gives you a chance to make necessary configuration changes or put more hardware on order before it's too late. Likewise, monitoring gives you the data to determine whether storage is the cause of an application performance problem.
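Even a simple capacity check goes a long way toward catching problems early. Here's a minimal sketch in Python, assuming a POSIX host; the threshold values and function names are illustrative, not a vendor's recommendation -- real thresholds should reflect how fast your data grows and how long new hardware takes to arrive.

```python
import shutil

# Illustrative thresholds -- tune to your growth rate and procurement lead time
WARN_PCT = 80
CRIT_PCT = 90

def capacity_status(used_bytes, total_bytes):
    """Classify a volume's fill level so trouble surfaces before users notice."""
    pct = 100.0 * used_bytes / total_bytes
    if pct >= CRIT_PCT:
        return "critical"
    if pct >= WARN_PCT:
        return "warning"
    return "ok"

def check_mount(path="/"):
    """Check one mount point using the standard library's disk_usage()."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return path, capacity_status(usage.used, usage.total)
```

In practice you'd run something like this on a schedule and feed the results into whatever alerting system you already have, so a "warning" volume gets attention weeks before it becomes a "critical" one.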
Storage admins don't need to become DBAs or experts in any other type of application. But sooner or later, ignorance of application storage characteristics will cost you. At the very least, before you throw hardware at the problem, have a serious talk with the application team to find out how the application in question uses storage.
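Those conversations are easier when you bring data. A sketch of one way to characterize a workload, assuming Linux-style cumulative counters like those in /proc/diskstats (the dictionary field names here are my own, not kernel-defined): take two samples over an interval and compute the read/write mix and average read size.

```python
def io_profile(before, after):
    """Summarize the interval between two cumulative I/O counter samples.

    `before` and `after` are dicts with counters in the style of
    /proc/diskstats: completed reads/writes and 512-byte sectors moved.
    Field names (reads, writes, sectors_read) are assumptions for this sketch.
    """
    reads = after["reads"] - before["reads"]
    writes = after["writes"] - before["writes"]
    total = reads + writes
    read_pct = 100.0 * reads / total if total else 0.0
    # Linux sector counters are in 512-byte units regardless of device geometry
    read_kb = (after["sectors_read"] - before["sectors_read"]) * 512 / 1024
    avg_read_kb = read_kb / reads if reads else 0.0
    return {"read_pct": read_pct, "avg_read_kb": avg_read_kb}
```

A 75 percent read workload doing small random I/O and a write-heavy sequential one call for very different configurations -- and a five-minute profile like this tells you which conversation to have with the application team.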
Staying on top of best practices
Believe it or not, among the multitude of ways to configure hosts and storage to work together, there is almost always a right way to do things. Unfortunately, best practices vary widely depending on what you're trying to achieve. This goes double for implementing storage in virtualized environments, especially in situations where your storage supports more than one storage protocol and/or where you're replicating storage to a remote site for failover.
For example, let's say your storage platform supports NFS, iSCSI, and Fibre Channel or FCoE and you're attaching to virtualization hosts that can also support all three of those protocols. Which do you use? Is there an advantage to mixing and matching for different workloads? Will one type of workload benefit from running on a capacity-efficient file-level protocol, while another might benefit from running on a block-level protocol? Will configuring a certain application with a block-level protocol allow you to use application-aware snapshotting software from your storage vendor while a file-level protocol may not?
If you're replicating to a remote site, will the choice of block- vs. file-level protocols have an impact on your ability to fail over or fail back? Are there caveats in the version of the hypervisor, replication software, SAN firmware, or failover automation software that would make one better than another?
It can take a lot of research to figure out the right answers to these questions and set everything up in the most bulletproof way possible. Worse, once you do, a version upgrade of any single component may suddenly change the answer or simply break something. That makes finding accurate sources of information critical -- whether documentation or human experts.