Virtualization continues to make huge inroads, thanks to the obvious flexibility and reliability benefits. And getting the most out of virtualization almost always requires some kind of shared storage. Otherwise, features such as live virtual machine migration and automated virtualization host failure recovery simply aren't available.
In many businesses, particularly large and medium-sized ones, most shared-storage implementations end up being IP storage -- sometimes NFS, but generally iSCSI. IP-based storage is an excellent fit because it employs the same networking hardware and concepts that network admins are already familiar with. And it's easy to get up and running.
But just because it's easy to fire up doesn't mean it's easy to do it right. Though less expensive than Fibre Channel, IP-based storage can actually be more complicated to configure optimally than FC storage. It's not as simple as punching in a few IP addresses. Constructing a reliable, high-performance network to support an IP storage infrastructure requires careful attention to a variety of different factors.
Building a bulletproof network
In a basic sense, VLANs are a means to create multiple virtual switches within a single piece of switching hardware. A switch that's capable of implementing VLANs (just about any managed switch) will ship with all ports configured to be in the default VLAN (VLAN 1). After you create a new VLAN on the switch and attach a collection of switch ports to that VLAN, they will function as if completely isolated from the other ports -- as if devices plugged into those ports are actually on a different switch.
Let's say you have a pair of 24-port managed gigabit switches to which you'd like to attach an IP SAN, three virtualization hosts, and a collection of existing switches and other network devices. You might leave ports 1 to 16 in the default VLAN 1, then configure ports 17 to 22 to be in VLAN 2. The SAN's storage interfaces and a pair of interfaces from each virtualization host would be evenly split across the VLAN 2 ports in both switches, while everything else would be split over the ports in VLAN 1 -- essentially giving you the same performance and security you'd have if you bought four switches instead of two.
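The layout above can be sketched as a quick Python model -- a toy illustration of the isolation VLANs provide, not real switch configuration (the port numbers are the example's):

```python
# Toy model of the two-switch layout above -- an illustration, not switch
# configuration. Ports 1-16 sit in the default VLAN 1, ports 17-22 in the
# storage VLAN 2; ports 23-24 are left unassigned (e.g. for cross-connects).

def build_port_map():
    """Return {port: vlan} for one 24-port switch in the example layout."""
    port_map = {port: 1 for port in range(1, 17)}         # default VLAN
    port_map.update({port: 2 for port in range(17, 23)})  # storage VLAN
    return port_map

def can_communicate(port_map, port_a, port_b):
    """Ports can exchange traffic only when they belong to the same VLAN."""
    return port_map[port_a] == port_map[port_b]

ports = build_port_map()
print(can_communicate(ports, 3, 12))   # True  -- both in VLAN 1
print(can_communicate(ports, 3, 18))   # False -- isolated, as if the two
                                       # devices were on separate switches
```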
Next comes an important piece of the puzzle: configuring the cross-connection between the two switches. Since you'll be splitting all your devices across the two switches, you need a way for devices on one switch to talk to devices on the other. That's usually as simple as attaching the two switches to each other, but VLANs complicate things a bit. If you simply run a cable from port 24 on the first switch to port 24 on the second, your VLAN 1 devices will be able to talk to each other across that link, but the VLAN 2 devices won't. To allow them to talk to each other, you need to configure that port on each switch to be VLAN-tagged for every VLAN that should cross the link.
VLAN tagging, sometimes called "trunking," utilizes the 802.1q standard to allow traffic from multiple VLANs to pass over a single physical link. It does this by inserting a four-byte VLAN tag into each frame that's not part of the default VLAN as it passes from one switch to the other. The second switch recognizes that tag, strips it off, and sends the traffic into the VLAN indicated by the tag. This spares you from dedicating a separate physical link for every VLAN you want to span both switches.
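To make that four-byte tag concrete, here's a small Python sketch (an illustration, not production networking code) that builds an 802.1q tag -- the TPID value 0x8100 followed by a 16-bit TCI carrying the priority bits and the 12-bit VLAN ID -- and inserts it after the source MAC of a raw Ethernet frame:

```python
import struct

def add_dot1q_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert a four-byte 802.1q tag after the source MAC (byte offset 12)
    of an Ethernet frame. TPID 0x8100 identifies the tag; the TCI packs the
    3-bit priority, a drop-eligible bit (left 0 here), and the 12-bit VLAN ID."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 1-4094")
    tci = (priority << 13) | vlan_id
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]

# A minimal untagged frame: broadcast dst MAC, made-up src MAC, IPv4
# EtherType, dummy payload.
untagged = (bytes.fromhex("ffffffffffff") + bytes.fromhex("020000000001")
            + struct.pack("!H", 0x0800) + b"payload")
tagged = add_dot1q_tag(untagged, vlan_id=2)
print(len(tagged) - len(untagged))  # 4 -- exactly four bytes were inserted
```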
You can avoid that eventuality by having each workstation switch attach to both of your new core switches. However, doing so is not as easy as just running a second connection to the second core switch and plugging it in. To make that connection safely, you have to consider the effects of STP (Spanning Tree Protocol) and configure things ahead of time. Otherwise, you risk taking down your whole network by creating a network loop -- or ending up with poorly performing IP storage.
In essence, STP is designed to prevent network loops from occurring. Without STP, if you were to create a loop by plugging three switches into each other (switch A to B, B to C, and C to A), any broadcast packet would race around that ring endlessly -- quickly resulting in network saturation and a very bad day at the office. To avoid this, STP identifies and disables network links that would cause a loop. If enabled, STP will do a good job of preventing network loops without any attention from the network admin.
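A toy flood simulation (purely illustrative -- real switches flood in hardware) shows why the loop is fatal: in a three-switch ring, a broadcast forwarded out every port except the one it arrived on never dies out, while blocking one link -- which is what STP does -- lets it terminate after reaching every switch:

```python
# Simulate broadcast flooding: each switch forwards a frame out every
# link except the one it arrived on. Count forwards for a fixed number
# of rounds and report whether copies are still circulating.

def flood(links, start, max_rounds=10):
    """Return (frames_forwarded, still_circulating) for a broadcast from start."""
    in_flight = [(start, None)]  # (current switch, switch it arrived from)
    forwarded = 0
    for _ in range(max_rounds):
        nxt = []
        for node, came_from in in_flight:
            for neigh in links[node]:
                if neigh != came_from:
                    nxt.append((neigh, node))
                    forwarded += 1
        in_flight = nxt
        if not in_flight:
            break
    return forwarded, bool(in_flight)

ring = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}  # full loop
tree = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}  # B-C link blocked by STP

print(flood(ring, "A"))  # copies are still circulating when we stop
print(flood(tree, "A"))  # broadcast dies out after reaching every switch
```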
But be aware that STP may do so in a way you might not want. In the example above, if switches A and B are your core switches, attached with a 2Gbps link, attaching a workstation switch (switch C) to both A and B could result in STP disabling the link between A and B to avoid the creation of a loop -- thereby forcing all cross-switch traffic between A and B to flow through switch C. You definitely don't want that.
The devil in the details here is how STP decides which link to block when a loop is detected. The first thing that STP does (continuously, as the network topology changes) is elect a so-called root bridge -- a switch that will act as the "root" of the network. After that, each nonroot switch on the network evaluates all of its available paths to the root bridge, keeps the lowest-cost path, and blocks the rest. If switch C were to be elected the root bridge, the suboptimal configuration above might result.
Avoiding this requires setting appropriate root bridge priorities on switches A and B to ensure that, as long as either of them is active, one of them will be elected the root bridge. Switches generally ship with a default bridge priority of 32768, so it's just a matter of setting switches A and B to something lower (a lower value means a higher preference). Switch A might be set to 4096 and switch B to 8192 -- ensuring that one of them will always be the root. It's also important to note that many switches implement PVST (Per VLAN Spanning Tree), which means you need to set these priorities for each VLAN you've created.
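The election itself boils down to comparing bridge IDs: the (priority, MAC address) pair, lowest wins, with the MAC breaking priority ties. A short Python sketch using the example's priorities (the MAC addresses are made up):

```python
def elect_root_bridge(switches):
    """STP elects the switch with the lowest bridge ID -- the lowest
    (priority, MAC) pair. The MAC address breaks ties between equal
    priorities, which is why every switch left at the default 32768
    makes the election effectively random."""
    return min(switches, key=lambda s: (s["priority"], s["mac"]))

switches = [
    {"name": "A", "priority": 4096,  "mac": "00:11:22:33:44:01"},
    {"name": "B", "priority": 8192,  "mac": "00:11:22:33:44:02"},
    {"name": "C", "priority": 32768, "mac": "00:11:22:33:44:03"},  # default
]
print(elect_root_bridge(switches)["name"])  # A

# If switch A fails, switch B's priority still beats switch C's default,
# so a core switch remains the root.
print(elect_root_bridge(switches[1:])["name"])  # B
```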