Crafting a management interface that is both easy to use and powerful is no simple feat. Although EMC has decidedly erred on the side of making Unisphere a cinch to use (sometimes to the frustration of seasoned admins who need to find certain bits of information), it is possible to dig into most of the nitty-gritty if you like. Often that requires clicking a Show Advanced link to uncover options such as multipathing, interface teaming, and jumbo frames. However, many of these are features that small-business users aren't likely to need and that experienced storage pros will know to dig for.
After logging into the Unisphere interface, I was prompted to configure the installed disks. Unisphere makes this process fairly simple by employing the concept of storage pools. These would typically include a Performance Pool that might contain your high-speed SAS disks, a Capacity Pool that would contain the larger and slower NL-SAS disks, and a Hot Spare Pool that would contain -- you guessed it -- hot spares for each type of disk you have deployed. You can also define your own pools, allowing you to segregate different workloads onto different physical disks. In my case, the array came with six 300GB SAS disks, so I dumped five of them into the Performance Pool and allowed the sixth to fall into the Hot Spare Pool by default (though I could have overridden this if I wanted to).
After the storage pool was deployed, my first goal was to run through the basic tasks of configuring my first iSCSI (block) and Shared Folder (file) servers. Unlike many other arrays, which are iSCSI-only or NAS-only solutions, the VNXe allows you to deploy both. For example, you might create two iSCSI servers (one tied to each of the two controllers in a dual-controller array) and two Shared Folder servers (split between the two controllers), or you could have one controller manage your Shared Folder services while the other owned all iSCSI services.
How you configure the array will vary depending upon what mix of file and block services you intend to deploy and how you want to spread that load across the controllers (a topic that requires some reading if you want to optimize performance). At first, I configured a single iSCSI server that, by default, assigned itself to the first storage processor and one of the two NICs.
After that, I was ready to deploy a new volume to my vSphere hosts -- and here's where things get interesting. Unlike just about any other entry-level array I've worked with, the VNXe will actually configure the vSphere host for you. All you need to do is provide host or vCenter authentication credentials; it does the rest. This includes configuring the VNXe's IPs in vSphere's iSCSI Initiator, rescanning the HBA, finding the iSCSI device, and formatting the device with the VMFS file system -- all tasks you'd normally have to perform manually.
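For a sense of what the VNXe is saving you, here is a rough sketch of the manual equivalent, run from the ESXi shell. This is illustrative only: the adapter name (vmhba33), the VNXe iSCSI server address (192.168.1.50), the datastore label, and the device identifier are all placeholder assumptions you would replace with your own values.

```shell
# Hypothetical manual walkthrough of the steps the VNXe automates.
# Adapter name, target IP, datastore label, and NAA device ID below
# are placeholders -- substitute the values from your environment.

# 1. Enable the software iSCSI initiator on the host
esxcli iscsi software set --enabled=true

# 2. Add the VNXe iSCSI server as a dynamic discovery target
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba33 --address=192.168.1.50:3260

# 3. Rescan the HBA so the host discovers the new LUN
esxcli storage core adapter rescan --adapter=vmhba33

# 4. List storage devices to find the new LUN's NAA identifier
esxcli storage core device list

# 5. Format the LUN with VMFS (assumes a partition already exists;
#    the device path shown is an example, not a real identifier)
vmkfstools -C vmfs5 -S vnxe-datastore \
    /vmfs/devices/disks/naa.600601604d001234:1
```

Each of these steps maps to one of the tasks the VNXe wizard performs on your behalf once you hand it credentials, which is why the automation is such a time-saver across more than a couple of hosts.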
You literally can go from having a completely untouched VNXe sitting next to a few basically configured vSphere hosts to having a VMFS or NFS volume created and attached to your vSphere hosts in well under an hour. This speed of deployment isn't unusual in the marketplace today, but it is extremely unusual for an EMC product -- especially one with such a wide range of features.