iSCSI storage networking: What you need to know

As simple as iSCSI is to get running, configuring it to perform optimally requires a solid knowledge of how it actually works

Over the past two weeks, I've written about some of the commonly overlooked aspects of building a bulletproof IP storage network and how to best use that network with NFS. This week, I'll show you the ins and outs of configuring iSCSI for performance and redundancy, as well as how it compares with NFS.

The first thing to understand is that although NFS and iSCSI are both IP storage protocols supported by many server operating systems and hypervisors, that's about as far as the similarity between the two extends. NFS is a high-performance file-sharing protocol in the same vein as SMB/CIFS, while iSCSI is a block-level storage protocol more akin to Fibre Channel in that it encapsulates raw SCSI commands. This distinction is important because it has significant bearing on how you get redundancy and performance scalability.

iSCSI vs. NFS on the network: How they differ

When comparing the networking requirements of NFS and iSCSI, the largest practical difference is that the iSCSI protocol has built-in redundancy and link aggregation capabilities in the form of multipath input/output (MPIO), which NFS lacks. You can use MPIO to provide both link redundancy and additional throughput -- entirely without the problematic NIC/link teaming that's required to do the same for NFS.

In a typical NFS configuration, servers are each configured with a single storage IP address bound to a NIC team spread across two stacked switches. On the storage side, the same configuration is duplicated, except that the storage is generally configured with a second IP address alias to allow better load balancing across the team members, because the NIC teaming algorithms use source and destination IP addresses to load-balance traffic. In this scenario, the failure of an individual link or an entire switch stack member is handled by the NIC teaming software -- the NFS client and server software isn't involved.
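For reference, here's a rough esxcli sketch of that NFS-style setup on an ESXi host. All of the vSwitch, NIC, and IP values (vSwitch1, vmnic2/vmnic3, the 192.168.50.x addresses) are placeholder examples rather than values from this article, and the second IP alias lives on the array rather than the host:

# NFS-style setup: one VMkernel port backed by a two-NIC, IP-hash team (example names)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
# IP-hash load balancing requires a static port channel on the physical switch stack
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=NFS
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
# Mount the export; the array's two IP aliases give the team two address pairs to hash across
esxcli storage nfs add --host=192.168.50.20 --share=/vol/datastore1 --volume-name=nfs-ds1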

After the array is up and running, it's time to configure the vSphere host. On your host, you should have at least two NICs dedicated solely to iSCSI traffic. You add those NICs to a single vSwitch in much the same way you configure other vSphere networking teams. After that, you need to add three VMkernel IP interfaces to that vSwitch. The first interface is configured to give the vSphere management stack visibility to the storage array so that it can ping it and determine whether it's alive (called a heartbeat VMK), while the other two actually move storage traffic.
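Sketched with esxcli, and using made-up names (vSwitch2, vmnic4/vmnic5, vmk2 through vmk4, and the 192.168.60.x subnet), that host-side layout looks roughly like this:

# iSCSI vSwitch: two dedicated NICs, three VMkernel ports (all names are examples)
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic5
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=iSCSI-HB
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=iSCSI-1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=iSCSI-2
# vmk2 is the heartbeat interface; vmk3 and vmk4 carry the storage traffic
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-HB
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=iSCSI-1
esxcli network ip interface add --interface-name=vmk4 --portgroup-name=iSCSI-2
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.60.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=192.168.60.12 --netmask=255.255.255.0 --type=static
esxcli network ip interface ipv4 set --interface-name=vmk4 --ipv4=192.168.60.13 --netmask=255.255.255.0 --type=static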

Before you can bind (assign) the two storage VMkernel interfaces to the iSCSI initiator, you need to reconfigure their teaming properties so that one VMkernel interface uses only the first physical NIC attached to the vSwitch and the other uses only the second physical NIC.
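Using the same example names as above, that override is a per-portgroup failover policy change that pins each storage portgroup to a single active uplink:

# Pin each storage portgroup to exactly one active uplink (example names)
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic4
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic5
# The other NIC must end up unused (not standby) for port binding; verify with:
esxcli network vswitch standard portgroup policy failover get --portgroup-name=iSCSI-1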

Unlike in a normal team, where you want both interfaces to remain available if one of the physical NICs has a link failure, in this scenario you specifically don't want that -- you want the iSCSI initiator to handle the link failure. This is also why the heartbeat VMkernel interface is needed: The first defined VMkernel interface on that vSwitch always needs to be able to ping the storage, regardless of which link might have failed.

After you have the vSwitch and VMkernel interfaces configured properly, you can enable the software iSCSI initiator and bind the two storage VMkernel interfaces on your storage vSwitch to it. Then it's simply a matter of entering the array's group IP address in the discovery addresses section of the iSCSI initiator configuration.
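In esxcli form, with the adapter name and group IP as placeholders (vmhba33 and 192.168.60.100 here; substitute your own software adapter name and array group address), the sequence looks like this:

# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true
# Find the software adapter's name (vmhba33 below is only an example)
esxcli iscsi adapter list
# Bind the two storage VMkernel interfaces to the initiator
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk4
# Add the array's group IP to dynamic (send targets) discovery
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.60.100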

During the storage rescan that happens after you save the configuration, each VMkernel interface connects to the array's group address, at which point the array redirects those connections to its individual interface IPs. At the end, you should have four active iSCSI connections, effectively forming a fully redundant mesh across both host NICs and both array NICs.
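The rescan and the resulting sessions can also be checked from the command line; the adapter name, device identifier, and round-robin policy below are illustrative:

# Rescan the software initiator, then inspect sessions and paths
esxcli storage core adapter rescan --adapter=vmhba33
esxcli iscsi session list
esxcli storage nmp device list
# Optional: set a LUN to round-robin so both paths actively carry I/O
esxcli storage nmp device set --device=naa.60a98000xxxxxxxx --psp=VMW_PSP_RR

With two bound VMkernel interfaces and two array interface IPs, the session list should show the four connections described above.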
