EMC VNXe 3100: Sweet entry-level NAS and SAN

EMC delivers an all-purpose, unified storage array tailor-made for the IT generalist and the small-business budget


Storage controller virtualization
On a side note, the choices you’ll make when you create new iSCSI and Shared Folder servers on the VNXe expose one of the interesting underpinnings of the VNXe platform. Though it's not made clear from the management interface, what you're actually doing when you create these servers is instantiating virtualized instances of code refactored from the block-level FLARE code that was born on EMC's Clariion platform and the file-level DART code that hails from the EMC Celerra line. In other words, you're deploying multiple virtual SAN or NAS arrays running on a single platform.

This approach allows EMC to leverage the wealth of experience gained in developing and maintaining those two enterprise product lines and to deliver it in a small package very cheaply. However, there are a few drawbacks in its current incarnation. For instance, you can't easily move an iSCSI volume created on an iSCSI server bound to storage processor A to a different iSCSI server bound to storage processor B -- something that would be relatively easy to do on a traditional FLARE-based Clariion.

Adding and maintaining storage
The next task on my evaluation checklist should be familiar to anyone who's owned any kind of centralized storage for any length of time: adding more storage. To start with, I slid the six 1TB NL-SAS disks into the array and watched as their LEDs cycled through a variety of initialization and warning states, until they eventually turned green as they spun up. Only a few seconds later, they were available in Unisphere, and I was able to add five of them to the Capacity disk pool (auto-allocating one to the Hot Spare pool as with the first set of disks). Frankly, it could not have been any easier.

However, all was not perfect in the storage management landscape, thanks to a combination of operator error and the very same automation and integration that made the VNXe so easy to connect to my hosts in the first place. I wasn't paying a great deal of attention while I cleaned up a few volumes that I had finished testing, and I deleted a volume that still had active VMs on it. Unisphere does prompt you to confirm a deletion, but like all IT gangsters, I don't read dialogs, and I hit OK reflexively.

That I mistakenly deleted the wrong SAN volume isn't the interesting fact here. Anyone who isn't careful when they delete things deserves what they get. What is interesting is that the VNXe's vSphere integration was comprehensive enough to reach into vSphere and attempt to cleanly remove the VMFS volume that this SAN volume was backing. That attempt failed because vSphere knew it had VMs registered on the volume and generally won't let you delete a datastore it knows is in use.

At this point, as vSphere gallantly resisted Unisphere's repeated instructions, I thought I had been saved. But then the VNXe gave up trying to get vSphere to do the dirty work and unceremoniously deleted the VMFS volume itself. All I could do was helplessly watch Unisphere display an utterly noninteractive "Deleting..." notification while the VNXe lowered the boom over vSphere's strenuous objections.
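For what it's worth, the safeguard vSphere was trying to enforce is visible through its own API: every datastore object knows which VMs are registered on it, which is exactly what a less heavy-handed tool would check before destroying anything. Here's a minimal sketch using pyVmomi that performs that check -- the vCenter address, credentials, and datastore name are placeholders, not anything specific to my test setup.

    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connection details below are placeholders.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        # Walk every datastore vCenter knows about.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            if ds.name != "vnxe-lun-01":   # the datastore we intend to remove
                continue
            if ds.vm:
                # ds.vm lists the VMs still registered on this datastore.
                print("Not touching %s: %d VM(s) still registered" %
                      (ds.name, len(ds.vm)))
            else:
                print("%s has no registered VMs; safe to remove" % ds.name)
    finally:
        Disconnect(si)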

The VNXe's vSphere integration is hampered by other "Unisphere knows best" rough edges. For instance, there's the insistence that vSphere create a VMFS3 volume rather than VMFS5, forcing you to delete the volume and reformat it with VMFS5 if you want a native VMFS5 volume. For the same reason, Unisphere will not allow you to create a VMFS volume larger than 1.9TB -- the limit for VMFS3, but not VMFS5. Of course, you can get around these problems by deploying the storage as a Generic iSCSI volume (eschewing all of the integration by doing everything manually) or just using NFS, but they highlight one of the dangers of integrating too tightly with external software.
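If you want to know which of your datastores came in as VMFS3 before deciding what to reformat, vSphere will report the filesystem version of each VMFS volume. A quick sketch along the same lines as the one above, with connection details again standing in as placeholders:

    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            if ds.summary.type != "VMFS":
                continue
            # ds.info.vmfs.version reads something like "3.46" or "5.60".
            print("%-32s VMFS %-6s %6.2f TB" % (
                ds.name, ds.info.vmfs.version,
                ds.summary.capacity / 1024.0 ** 4))
    finally:
        Disconnect(si)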
