Configuring storage on the wire
Keep iSCSI processing requirements, redundancy, and data protection in mind when building your IP SAN.
There are three ways of connecting a server to an iSCSI SAN: a standard gigabit NIC with a software iSCSI initiator, an iSCSI accelerator such as the Alacritech SES2002 adapter I used with Windows, or a true iSCSI HBA such as the QLogic QLA4010 I used with Linux. The standard NIC and the Alacritech accelerator both require a software initiator, but the QLogic adapter has iSCSI smarts on the card itself, allowing it to offload iSCSI packet processing from the server's CPU. Because a standard gigabit NIC places that load on the host CPU, periods of high I/O can put a substantial dent in server performance. That's not an issue with a true iSCSI HBA, which presents itself to the OS as a storage controller -- not merely a network interface -- and handles all iSCSI operations itself.
Server-side iSCSI storage configurations are generally very simple. After the storage device has been configured with the appropriate volumes and access rights assigned to those volumes, a software initiator -- such as the open source iSCSI initiator for Linux or Microsoft's own iSCSI initiator for Windows -- needs only to be pointed at the IP address of the storage controller, and the available volumes will appear, ready to be mounted on the server as block devices. For iSCSI HBAs, the configuration is identical, with the exception that the HBA control software is used to define the iSCSI targets.
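On Linux, that pointing-and-mounting process might look something like the following sketch, using the open-iscsi iscsiadm tool. The portal address, target IQN, and device name here are placeholders, not values from any particular array:

```shell
# Ask the storage controller (placeholder IP) which targets it offers
iscsiadm -m discovery -t sendtargets -p 192.168.10.20:3260

# Log in to one of the discovered targets (placeholder IQN)
iscsiadm -m node -T iqn.2004-01.com.example:storage.vol1 \
    -p 192.168.10.20:3260 --login

# The volume now appears as an ordinary block device (e.g. /dev/sdb)
# and can be partitioned, formatted, and mounted like local disk
mount /dev/sdb1 /mnt/iscsi-vol1
```

The key point is the last step: once the initiator logs in, the OS sees a plain block device, and nothing downstream needs to know the disk is on the network.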
For quite some time now, most servers sold by major server manufacturers have come with two gigabit NICs as standard components. Many admins use the second NIC as a fail-over interface, relying on NIC teaming drivers to present a redundant network path for that server or to split traffic across the NICs for load balancing. Others simply leave the second NIC unused.
Of course, iSCSI is held to stricter rules than standard network connections. The loss of an iSCSI connection could potentially corrupt a database, while the loss of an ordinary network connection might cause an outage -- but rarely permanent data loss. You're smart to rely on redundant iSCSI connections whenever possible. With MPIO (Multipath Input/Output) support available from most vendors, it's relatively simple to configure a redundant iSCSI infrastructure, and the cost is still substantially less than redundant Fibre Channel. Best practice dictates that you run your iSCSI network on a separate segment from the production network, ideally through a pair of dedicated gigabit switches with jumbo frame support. And, of course, it's never a good idea to piggyback iSCSI traffic on the same NIC as regular traffic.
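Putting that second NIC on a dedicated iSCSI segment with jumbo frames enabled is a short exercise on Linux. A sketch, assuming eth1 is the interface reserved for storage traffic and 192.168.10.0/24 is the dedicated segment (both placeholders):

```shell
# Enable jumbo frames on the storage-only interface; the switches
# on this segment must support a 9000-byte MTU as well
ip link set eth1 mtu 9000

# Give the interface an address on the dedicated iSCSI segment,
# kept separate from the production LAN
ip addr add 192.168.10.5/24 dev eth1
ip link set eth1 up
```

With two such interfaces on separate switches, the vendor's MPIO driver can then present one multipathed device over both paths, so losing a NIC, cable, or switch doesn't drop the volume.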
Any way you slice it, an iSCSI SAN is a low-cost way to bring big storage into your network.