Reliable storage is a key building block for any enterprise application infrastructure. Traditionally, that means some type of network-attached storage (NAS) or storage area network (SAN). With Windows Storage Server 2012, you get both NAS (CIFS/SMB) and SAN (iSCSI), plus new SMB 3.0 features that bring seamless failover to Hyper-V and SQL Server workloads. With the Windows Storage Server 2012-based HP StoreEasy 5530, you get all that plus outrageous performance.
HP has stuffed a lot of hardware into this package. The StoreEasy 5530 consists of two HP ProLiant BL460c G7 blade servers, each equipped with one Intel Xeon E5620 processor and 24GB of memory. The E5620 is a four-core processor that runs two threads per core, which works out to eight threads per blade and 16 in all for handling whatever file processing you can throw at it. That would also be more than enough CPU and memory to support a few Hyper-V virtual machines, but HP does not support running VMs on the StoreEasy box. This is strictly a storage platform.
The StoreEasy 5530 comes in two storage configurations: large-form-factor (LFF) or small-form-factor (SFF) hard drives. The LFF drives typically deliver more capacity per drive at a lower cost, while the SFF drives offer higher performance but less total capacity. In addition, each server blade carries two 300GB 10K SFF drives, typically configured as a RAID 1 mirror, that serve primarily as boot and local storage.
On the networking side, multiple connection options include an HP NC365T PCIe quad-port gigabit server adapter card, an HP NC382m dual-port 1GbE multifunction BL-c adapter, and an HP NC553i dual-port FlexFabric 10GbE converged network adapter. That works out to four Gigabit Ethernet ports and two 10GbE ports per server blade. There's also an internal gigabit connection between the blades for the cluster heartbeat, and individual gig ports to each blade for out-of-band management.
By default, manually powering up the enclosure activates only the first blade server. To juice the second blade, you either press its power button on the front panel or use the remote command-line interface (CLI). I used the iLO remote console to reach the second node, and once connected, I completed the remaining setup tasks without incident.
Three HP icons on the Administrator's desktop launch different tools to manage the system. The first is a tool to configure access to the enclosure manager (see Figure 1). This tool lets you change the names used for the enclosure, set a fixed IP address, change the password, and generate the public/private key pair required to access several of the other management tools. The other two HP icons launch the customized management console and the HP iLO Web page. The G7 blades come with iLO version 3. A minor annoyance here is an apparent incompatibility between iLO 3 and the version of Internet Explorer that ships in Windows Storage Server 2012. It breaks some of the graphics you'd normally see on a supported browser, but it's not a showstopper.
At the lowest level of management is the CLI to the enclosure manager. To reach it, you connect to the device's IP address over Secure Shell (SSH) using a client such as PuTTY. Once you have a session open, you get a full range of commands, including the ability to power the blade servers and the enclosure on or off.
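As a rough sketch, here's what such a session might look like, using OpenSSH in place of PuTTY. The IP address is a placeholder, and the commands follow the syntax of HP's Onboard Administrator CLI on BladeSystem enclosures; the StoreEasy's enclosure manager firmware may differ, so check its own HELP output before relying on these:

```shell
# Connect to the enclosure manager (192.0.2.50 is a placeholder;
# use the fixed IP address you assigned during setup).
ssh Administrator@192.0.2.50

# Inside the session, HELP lists every supported command.
# These follow HP Onboard Administrator syntax; verify against HELP.
SHOW ENCLOSURE STATUS   # overall chassis health
SHOW SERVER LIST        # blade bays and their power states
POWERON SERVER 2        # bring up the second blade remotely
POWEROFF SERVER 2       # graceful shutdown of bay 2
```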
Achieving high availability at the software level means clustering, and Windows Storage Server 2012 does that out of the box. HP has added a number of wizards to the standard Microsoft setup and configuration tools. One of the requirements for configuring a cluster with Windows Server 2012 is that the clustered computers must be joined to a domain. This presumes the existence of a Domain Controller (DC). If you intend to run the StoreEasy 5530 as a stand-alone storage box, you'll have to install the DC role on one of the StoreEasy blade servers. You'll also need a DNS server -- Active Directory depends on it. As an alternative, you could run the DC in a virtual machine, which could be made redundant once you have the cluster up and running.
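For the stand-alone scenario, the steps above can be sketched in PowerShell. This is an outline under assumptions, not HP's wizard flow: the domain name storeeasy.local, the node names, and the cluster IP address are all placeholders, and Install-ADDSForest prompts for a safe-mode password and reboots the node when it finishes.

```shell
# On the first blade: add the AD DS and DNS roles, then promote the
# node to a domain controller (domain name is a placeholder).
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools
Install-ADDSForest -DomainName "storeeasy.local"   # reboots on completion

# On the second blade: join the new domain (also triggers a reboot).
Add-Computer -DomainName "storeeasy.local" -Restart

# Once both nodes are domain members, validate and create the cluster
# (node names and static address are placeholders).
Test-Cluster -Node NODE1, NODE2
New-Cluster -Name StoreEasyCluster -Node NODE1, NODE2 `
    -StaticAddress 192.0.2.100
```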
Wicked fast file serving
It's challenging to adequately convey just how huge this system is from a performance perspective. Microsoft's test suite, the File Server Capacity Tool (FSCT), characterizes how many file server users a storage device can theoretically support. For this particular configuration, the FSCT number is around 14,000, according to HP's tests. That's pretty ridiculous for a system that fits in a 3U space. You can bump that number up to 26,000 with the addition of external storage. A forthcoming HP white paper will provide the details on how HP tested the different configurations.
Many factors -- ranging from the RAID level used to configure the disks to the number of disks allocated to a storage pool -- can affect storage performance. One thing you won't get to test with the HP StoreEasy is Microsoft's Storage Spaces feature. Storage Spaces requires disks presented as JBOD, or "just a bunch of disks," and it provides software redundancy similar to what you get with RAID, but without any hardware assistance. Because the hardware RAID controller in the StoreEasy 5530 does not support JBOD, Storage Spaces is off the table. Few performance numbers for Storage Spaces have been published outside of Microsoft at this point, so the jury's still out.
I used the open source tool Iometer to test a basic SMB shared volume backed by 16 drives configured as RAID 5. HP provided an Iometer configuration file with a number of workloads that simulate various types of real-world traffic. On the 512K sequential read/write test, my results were similar to those published in HP's performance white paper. Figures 3 and 4 show the network load generated by Iometer running on an HP ProLiant DL560 with Windows Server 2012 Standard. The DL560 was connected to an HP ProCurve 3800 switch over a single 10GbE port; the StoreEasy 5530 had two 10GbE ports connected to the switch, one from each blade server.
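For reference, a continuously available SMB share like the one I tested could be stood up from PowerShell along these lines. This is a hedged sketch: the drive letter, share name, and account are placeholders, and the exact settings HP's wizards apply may differ.

```shell
# Create a test folder on the RAID 5 volume (E: is a placeholder).
New-Item -Path E:\IometerTest -ItemType Directory

# Share it over SMB 3.0 with continuous availability so clients
# fail over transparently between cluster nodes.
New-SmbShare -Name IometerTest -Path E:\IometerTest `
    -ContinuouslyAvailable $true -FullAccess "DOMAIN\TestUser"
```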