Review: Dell blade servers tip the scales

Dell's M1000e blade system wows with novel blades, improved management, modular I/O, and 40G uplinks out the back


Finally, the Dell PowerEdge M420 may be the best and most interesting blade of them all. This is a quarter-height, two-socket blade housed in a full-height sleeve that holds four of these little blades vertically. Each M420 has one or two Intel Xeon E5-2400-series CPUs and up to 192GB of RAM, but only six DIMM slots -- three per CPU -- and no hard drive options. Local storage is handled by either two hot-swap 1.8-inch SSDs or the embedded SD cards for hypervisor installations.

The M420 has two 10G interfaces built in, and it can handle a single mezzanine I/O card, so you could drop four 10G interfaces in this quarter-height blade. Alternatively, you could have two 10G interfaces and two 8Gb Fibre Channel or InfiniBand interfaces. That's a lot of I/O in a very small package.

Somewhat surprisingly, there are no population restrictions on these blades. You can fit 32 of these little servers in a single chassis. That's 64 CPUs with up to eight cores each, or 512 cores in a single chassis. If you drop the beaucoup bucks to max out the RAM with 32GB DIMMs, you could accompany those cores with more than 6TB of RAM. That's some serious density.
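
To put that density claim in context, here's a quick back-of-the-envelope calculation using the per-blade figures quoted above (the 32GB DIMM size is the maximum configuration discussed; everything else comes straight from the M420 specs):

```python
# Back-of-the-envelope density math for a fully loaded M1000e chassis
# populated with quarter-height M420 blades.

BLADES_PER_CHASSIS = 32       # quarter-height M420s, no population restrictions
CPUS_PER_BLADE = 2            # dual-socket Xeon E5-2400 series
CORES_PER_CPU = 8             # top-end E5-2400 parts
DIMM_SLOTS_PER_BLADE = 6      # three per CPU
DIMM_SIZE_GB = 32             # the priciest option

total_cpus = BLADES_PER_CHASSIS * CPUS_PER_BLADE                          # 64
total_cores = total_cpus * CORES_PER_CPU                                  # 512
total_ram_gb = BLADES_PER_CHASSIS * DIMM_SLOTS_PER_BLADE * DIMM_SIZE_GB   # 6,144GB

print(f"{total_cpus} CPUs, {total_cores} cores, {total_ram_gb}GB RAM per chassis")
```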

Quarter-height PowerEdge M420 blade servers allow you to squeeze as many as 64 CPUs, and a whole lot of I/O, into a single M1000e enclosure.

Breaking out of the box
The M1000e chassis I/O capabilities have grown as rich as the blade options, due in no small part to Dell's acquisition of Force10 Networks. To the basic 1G passthrough, Dell PowerConnect, and Cisco I/O switching modules available previously, Dell has added the Force10 MXL I/O switching module, which boasts 32 internal 10G interfaces and two external 40G interfaces, with two FlexIO module slots for further 10G fiber or copper expansion. This is undoubtedly a significant advancement for Dell, not least because it brings 40G uplinks to the chassis. Further, up to six of these switches can be stacked, allowing the switching for multiple chassis to be consolidated and centrally managed.

However, the MXL and chassis integration is not yet fully baked. While the switches behave as you'd expect, they represent their internal 10G interfaces generically -- TenGigabitEthernet 0/1, TenGigabitEthernet 0/2, and so forth -- and there's no simple way to map those ports back to the blades they're connected to. If you're looking to configure the switch port behind the second 10G interface on the M620 blade in slot 7, for instance, you'll need a chart to figure out which interface that corresponds to on the MXL. When you're faced with configuring four or six 10G interfaces per blade, or a fully loaded chassis of 32 M420 blades with 64 10G interfaces, that will get really confusing really quickly.

Tighter integration between the switching modules and the chassis itself is needed to provide those mappings within the switch CLI. Network administrators don't like to have to refer to spreadsheets to find out which port they need to tweak.
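
To illustrate the kind of lookup the switch currently leaves to the administrator, here's a minimal sketch of a blade-to-port cross-reference. The mapping entries are hypothetical placeholders -- the real slot-to-port assignments come from Dell's port-mapping charts, not from anything the MXL itself exposes:

```python
# Hypothetical cross-reference between blade NICs and MXL interface names.
# The values below are placeholders that show the shape of the problem;
# the actual assignments must be taken from Dell's port-mapping documentation.

BLADE_TO_MXL_PORT = {
    ("slot-7", "nic-1"): "TenGigabitEthernet 0/7",   # placeholder mapping
    ("slot-7", "nic-2"): "TenGigabitEthernet 0/23",  # placeholder mapping
    # ... one entry per blade NIC, per fabric, maintained by hand
}

def mxl_port_for(slot: str, nic: str) -> str:
    """Return the MXL interface wired to a given blade NIC, or flag the gap."""
    try:
        return BLADE_TO_MXL_PORT[(slot, nic)]
    except KeyError:
        return f"unknown -- check the chassis port-mapping chart for {slot}/{nic}"

print(mxl_port_for("slot-7", "nic-2"))
```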

Aside from the MXL, the Dell PowerConnect M8024 module is available with 16 internal 10G ports and up to eight external ports using the FlexIO slots in the module. There are 4Gb and 8Gb Fibre Channel modules, including the Brocade M5424; two InfiniBand modules supporting either QDR (Quad Data Rate) or DDR (Double Data Rate); and more basic 1G switches. There are also passthrough modules for 10G, 1G, and Fibre Channel ports.

Dell has also added Switch Independent Partitioning, or NIC partitioning, which allows the 10G interfaces on each blade to be carved up into four logical interfaces with various QoS and prioritization rules attached to each. The OS sees several independent interfaces that are all subsets of the 10G interface, allowing administrators to allocate bandwidth to various services at the NIC level. This is a welcome addition that had been missing from previous Dell solutions.
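
Conceptually, partitioning works like the sketch below: one physical 10G port is presented to the OS as four logical NICs, each with a bandwidth weight. This is an illustration of the idea, not Dell's configuration interface, and the partition names and weights are invented for the example:

```python
# Conceptual illustration of NIC partitioning. The names and weights are
# made up for the example; real settings are applied through the NIC's
# firmware/BIOS configuration, not a script like this.

PHYSICAL_PORT_GBPS = 10

partitions = {
    "management": 10,   # weights as percentages of the physical port
    "vmotion":    20,
    "iscsi":      30,
    "vm-traffic": 40,
}

assert sum(partitions.values()) == 100, "weights should account for the full port"

for name, weight in partitions.items():
    share = PHYSICAL_PORT_GBPS * weight / 100
    print(f"{name}: {share:.1f}Gb of the {PHYSICAL_PORT_GBPS}Gb port")
```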

Blade management en masse
Beyond the management of the new Force10-based switches, the overall management toolset in the M1000e is quite extensive. Dell has paid close attention to the needs of higher-density blade chassis management and has taken steps to reduce the repetitive tasks associated with blade infrastructure.

By leveraging the iDRAC remote management processors in each blade and the Dell CMC (Chassis Management Controller) tools, the M1000e makes it simple both to perform tedious tasks such as mass BIOS upgrades and to dig into specific information about each blade. With a single click, you can retrieve a display containing every firmware version across the server, including installation dates; another click brings up details on every hardware component, from individual DIMMs to what's on the PCI bus. It's extremely handy.
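
As a rough mental model of what the CMC is doing on your behalf during a mass firmware check, consider the sketch below. It is purely illustrative -- the blade records and target version are invented, and the real inventory and updates are handled by the CMC and the per-blade iDRACs, not a script:

```python
# Purely illustrative model of a chassis-wide firmware check. The blade
# records here are invented; in practice the CMC and iDRACs gather this
# inventory and perform the updates themselves.

from dataclasses import dataclass

@dataclass
class Blade:
    slot: int
    model: str
    bios_version: str

chassis = [
    Blade(1, "M620", "1.2.6"),
    Blade(2, "M620", "1.1.2"),
    Blade(3, "M420", "1.2.6"),
]

TARGET_BIOS = "1.2.6"  # hypothetical target version

for blade in (b for b in chassis if b.bios_version != TARGET_BIOS):
    print(f"Slot {blade.slot} ({blade.model}) needs a BIOS update: "
          f"{blade.bios_version} -> {TARGET_BIOS}")
```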

The Dell Chassis Management Controller puts blade server details and alerts right at your fingertips.