Due to the enormous cost of selecting and migrating to a completely new primary storage infrastructure, most organizations try to wring every last drop of functionality out of their storage resources. That's one reason why most storage deployments are viewed as five-year investments.
Yet with corporate data growing at geometric rates, the notion of deploying a platform that can scale out for such a long time -- not to mention the idea that you can plan that far into the future accurately -- is becoming a joke. Many "long-term" storage investments have hit the wall much earlier than anticipated, incurring uncomfortable trips to the corner office. Hey, didn't you say those big, expensive hunks of hardware were going to last?
Face it -- upgrading your storage infrastructure is going to happen more often than you'd like. But at least server virtualization has dramatically decreased the pain involved in making a midstream migration from one storage platform to another or running more than one system in parallel. The truth is that predicting your future needs is more difficult than ever. In fact, you'll probably be wrong -- and that's OK.
The old approach
Most enterprise SANs are built around a controller and disk-shelf architecture. Typically, the controller resources are sized based on the total amount of host I/O required on the front end and the amount of disk resources addressed on the back end.
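As a rough illustration of that sizing exercise -- using hypothetical figures and a made-up helper, not any vendor's actual formula -- the controller count has to cover whichever is larger: the host I/O demand on the front end or the aggregate throughput of the disk shelves on the back end:

```python
# Back-of-envelope SAN controller sizing check (hypothetical numbers).
# The controller tier must cover both the front-end host IOPS demand
# and the aggregate IOPS the back-end disk shelves can deliver.

def controllers_needed(host_iops, disk_count, iops_per_disk,
                       iops_per_controller):
    """Return how many controllers cover the larger of the two demands."""
    front_end = host_iops
    back_end = disk_count * iops_per_disk
    demand = max(front_end, back_end)
    # Round up -- partial controllers don't exist.
    return -(-demand // iops_per_controller)

# Example: 60,000 host IOPS on the front end, 200 disks at 180 IOPS
# each on the back end, controllers rated at 40,000 IOPS apiece.
print(controllers_needed(60_000, 200, 180, 40_000))  # → 2
```

The point of the sketch is the coupling it exposes: add enough disk shelves and the back-end term overtakes the controllers you sized for, forcing the midstream controller upgrade discussed next.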
In these types of platforms, an unexpected spike in the number of disk resources required might mean replacing the controllers while continuing to leverage the same disk. Fortunately, most vendors that use this kind of architecture make controller upgrades relatively easy -- sometimes not even incurring downtime.
There are two major problems with this kind of approach. First, if you upgrade the controllers during year three of a five-year investment, you've tied new controllers to disk resources that are already more than halfway through their expected lifetime -- effectively making those brand-new controllers a rather expensive Band-Aid.
Second, in response to rapidly shifting storage requirements, storage technology itself is changing in massive leaps and bounds, with SAS quickly replacing Fibre Channel as a back-end disk architecture and more advanced software features that leverage solid-state drives becoming commonplace. It's almost a given that three years after you buy a storage platform, the latest advances in disk and controller technology will result in the next generation bearing little resemblance to your previous implementation. By continuing to invest in a three-year-old architecture, you can't take advantage of those new advances.
If this approach has serious drawbacks, why do it? Because, in the past, the idea of migrating to an entirely new storage platform usually represented a massive undertaking. Not only would administrators need to learn the ins and outs of the new platform, but they'd also have to deal with the often manual process of migrating systems and data from the old platform to the new -- requiring late nights and significant downtime.