Years ago, the most common type of migration you'd be likely to see in a data center would be upgrading from one operating system version to another. That process would usually be tied to a server hardware refresh and possibly an application version upgrade. In the process, production data would inevitably be moved from one place to another -- but that migration wasn't the objective.
These days, with ever-growing mountains of data and the increasing popularity of server virtualization, it's much more common to see storage hardware upgrades that have nothing to do with OS upgrades. The act of virtualizing a physical server or providing more storage space has taken a front seat.
Straight-up data migrations look simple on paper: Large amounts of data need to get from point A to point B. Simple enough, right? Not exactly -- as with most things in IT, the devil is in the details.
The success (or failure) of these types of data migrations usually revolves around the amount of downtime they require. As such, minimizing that downtime and ensuring that your migration process finishes without complication within the time you have available is a key concern. Here, I'll dig through the most common data migration scenarios and outline some of the pitfalls you'll want to avoid.
The first step when you're asked to migrate data from one type of storage to another is to identify the source and the destination. That choice will hugely influence which method you use to move the data, how risky it will be, and how much time it will take.
After you've performed any necessary storage switch zoning and storage provisioning on the SAN, you should be able to bring up the new SAN data volume on your server, format it, and mount it. A commonly forgotten step here is to align the new partition correctly when you create it -- don't skip it, because a volume that wasn't aligned properly at creation can't be adjusted afterward.
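As a quick sanity check on alignment, you can verify that the partition's starting offset falls on a 1MiB boundary (the convention modern partitioning tools default to). This is a minimal sketch; the start sector below is a hypothetical value -- on a live Linux system you might read the real one from `parted /dev/sdb unit s print` or `/sys/block/sdb/sdb1/start`.

```shell
# Hypothetical start sector for the new partition; modern fdisk/parted
# default to 2048 (a 1MiB offset at 512 bytes per sector).
START_SECTOR=2048
SECTOR_SIZE=512
ALIGN_BYTES=$((1024 * 1024))   # 1MiB alignment boundary

# If the byte offset divides evenly into the alignment boundary, you're safe.
if [ $((START_SECTOR * SECTOR_SIZE % ALIGN_BYTES)) -eq 0 ]; then
    echo "aligned"
else
    echo "NOT aligned -- recreate the partition before copying any data"
fi
```

Catching a misaligned partition at this stage costs you seconds; catching it after the migration costs you another migration.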
Now that you have a brand-new empty volume on the server, you can perform a premigration data copy. Many people will make the mistake of jumping right into a data migration downtime window and moving all of the data, soup to nuts. This is both unnecessary and risky. Chances are the data migration will take far longer than you might imagine, especially on file servers containing huge numbers of very small files. Instead, consider using a tool such as robocopy (Windows) or rsync (Linux, Windows) that is capable of mirroring one data volume to another.
If configured correctly, these tools can allow you to bring your SAN volume into sync with your production local volume without taking your production volume offline, including the copying of ever-important security descriptors. Running the copy repeatedly over a few days can update any modified files on the non-production SAN volume and remove any files that have been removed on the production volume since the last sweep -- all without interrupting production. Even if you're migrating a database server where performing an incremental copy isn't possible, performing some test copies ahead of time will allow you to accurately plan for how much time the migration will take.
Once you've brought the two volumes into sync, it's time to schedule a slightly longer downtime window to finish the migration. This time, you'll want to bring down any services on the server that might allow users to access their data -- you need to be absolutely sure no files are locked and that no data is being modified. On a Windows server, for example, shutting down the Windows Netlogon and Server services will effectively make it impossible for users to access shares on the server.
Tools like Microsoft's Process Explorer or the Linux lsof command can be useful in determining whether any processes are still running on the server that might lock files you need to copy or prevent you from dismounting the local storage volume later on (on Windows servers, the WMI service is often an unexpected culprit).
Migrating bulk data from one SAN to another is in many ways very similar to moving data from local storage to SAN storage, but you'll want to closely consider a few differences before you begin.
In these kinds of migrations, I've often seen problems arise from incompatibilities between different SAN vendors' multipathing modules. For example, if you're migrating between an EMC Clariion array and an HP EVA, the EMC PowerPath DSM and the EVA multipathing DSM are known to step on each other's toes. In at least one real-world example I've witnessed, installing both at the same time resulted in a server that wouldn't boot, as each DSM attempted to manage the other's storage paths.
To avoid that problem, connect your new storage to your server without using multipathing. If both the source and destination SAN are Fibre Channel SANs, present only a single path to your new SAN while you perform your interim data synchronization. Once you've taken the server out of production to perform the final migration, uninstall the old SAN's multipathing DSM and unpresent the old storage.
After a reboot, install the new DSM and present the rest of the paths to the new storage. This process can add to your downtime window while you wait for the server to reboot several times, so make sure your estimate accounts for those extra minutes. In fact, if you've already synced the data, you'll probably spend more time screwing around with software than you will actually moving data.