Blade server review: Dell PowerEdge M1000e

Dell's M1000e blade system lags HP and IBM in features and options, but hits the mark in performance and price

In our January 2007 blade server shoot-out, Dell was the dark horse candidate that posted impressive performance numbers, but fell short on features compared to the other solutions. In the intervening few years, Dell has clearly taken the time to polish up its solution. The Dell PowerEdge M1000e is far more attractive and functional than its predecessors.

In today's M1000e, a brand-new set of chassis management tools offers many features suited to day-to-day operations, and the chassis-wide deployment and modification tools are simply fantastic. The downsides include limited visibility into chassis environmental parameters and the absence of multichassis management capabilities. Unless you put external management tools to use, each Dell chassis exists as an island.

The main selling points of the Dell blade system are density and price. The M1000e makes a great virtualization platform, but it would do well in just about any situation. It doesn't offer the expansion options of the HP chassis, though its feature set is similar to the IBM solution's. If you have no need for internal storage or centralized multichassis management, it's a great choice.

Chassis and blades
The M1000e blade enclosure squeezes 16 half-height blades into a 10U chassis with six redundant hot-plug power supplies, nine hot-plug fan modules, and six I/O module slots. Those slots support Dell PowerConnect gigabit and 10G switches, three different Cisco modules (with gigabit internals and 10G uplinks), a Brocade 8Gbps FC module, and both Ethernet and 4Gbps FC pass-through modules. If InfiniBand is your flavor, there's a 24-port Mellanox option as well.

On the front of the chassis is a 2-inch color LCD panel and control pad that can be used to step through initial configuration and to perform chassis monitoring and simple management tasks.

Test Center Scorecard

Dell PowerEdge M1000e: 9, 9, 8, 8, 8, 9 across six criteria, weighted 20%, 20%, 20%, 20%, 10%, and 10%

Overall score: 8.5 (Very Good)

The blades used in this test were Dell PowerEdge M610 units, each with two 2.93GHz Intel "Westmere" Xeon X5670 CPUs, 24GB of DDR3 RAM, and two Intel 10G interfaces connected to two Dell PowerConnect 8024 10G switches in the I/O slots on the back of the chassis. If there's only a single switch in the back, only one port will be active per blade; this is a limitation shared by all the chassis tested.

The blades themselves have a very solid, compact feel. They slide easily in and out of the chassis and have a very well-designed handle that doubles as a locking mechanism. The blades are fairly standard, offering two CPU sockets and 12 DIMM slots, two 2.5-inch SAS drive bays driven by a standard Dell PERC RAID controller, two USB 2.0 ports on the front, and a selection of mezzanine I/O cards at the rear to allow for gigabit, 10G, or InfiniBand interfaces. An internal SD card option permits flash booting of a diskless blade, which can come in handy when running embedded hypervisors like VMware ESXi. There's also an SSD option for the local disk.

One drawback to the Dell solution compared to the HP blades is the relative lack of blade options. Dell offers several different models of blades, but they're all iterations of the same basic compute blade with different CPU and disk options; there are no storage blades or virtualization-centric blades. Each gives you two or four CPU sockets, a bank of DIMM slots, and two 2.5-inch drive bays.

Unlike the HP and IBM blade systems, Dell's setup doesn't have virtualized network I/O. The 10G pipe to each blade is just that, a raw 10G interface without the four virtual interfaces provided by HP's Virtual Connect and IBM's Virtual Fabric. This means that the onus of QoS, bandwidth limiting, and prioritization falls to the OS running on the blade or to the QoS features in the PowerConnect 8024 10G modules. On one hand, this is a drawback; on the other, it simplifies management in that the PowerConnect 8024 10G switch is really a switch and can be configured as such. No specialized management structure is necessary, unlike with the HP and IBM solutions.
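
Since the blade OS sees a raw 10G port, any rate limiting you want has to happen there (or in the 8024's own QoS settings). As a minimal sketch, assuming a Linux blade with the tc utility, root privileges, and "eth0" as the 10G interface name (all assumptions for illustration, not Dell-specific tooling), shaping that pipe might look like this:

    #!/usr/bin/env python3
    """Sketch: shaping a blade's raw 10G port with Linux tc (HTB).

    Assumptions: a Linux OS on the blade, the tc utility installed, root
    privileges, and "eth0" as the 10G interface name.
    """
    import subprocess

    IFACE = "eth0"  # assumed name of the blade's 10G interface

    commands = [
        # Install an HTB root qdisc; unclassified traffic lands in class 1:10
        ["tc", "qdisc", "replace", "dev", IFACE, "root",
         "handle", "1:", "htb", "default", "10"],
        # Guarantee 4Gbps to the default class, but let it borrow up to the full pipe
        ["tc", "class", "add", "dev", IFACE, "parent", "1:", "classid", "1:10",
         "htb", "rate", "4gbit", "ceil", "10gbit"],
    ]

    for cmd in commands:
        subprocess.run(cmd, check=True)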

Management tools
One of the knocks on Dell's blade solution has always been the spartan management tools. Although functional, they were hardly feature-rich. That changes with this test, however: Dell has introduced a completely rebuilt Chassis Management Console (CMC) that offers a wide range of new features.

Leveraging a bit of AJAX magic, the new CMC is a highly functional and attractive management tool. A lot of thought has gone into making it simple to push actions to multiple blades at once, and even demanding tasks such as BIOS updates and RAID controller firmware updates can be pushed to groups of blades with a few clicks right from the CMC.

It's also easy to get an idea of the overall chassis health, as well as the status and particulars of any given blade or other component. The main page displays highlighted images of the chassis that you simply mouse over for details. By clicking on a blade, for example, you can get all the relevant information on that blade, including the hostname, operating system, iDRAC (Integrated Dell Remote Access Controller) version, MAC addresses, and so on.

Generally speaking, when I expected to find a particular piece of information on a page in the Web UI, I found it. One notable exception was specific temperatures within the blades and chassis. There's definitely thermal monitoring, but you can't pull the temps of individual components directly from any management tool.
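
For the sensor data the tools do expose, you aren't limited to the Web UI. As a hedged sketch (the exact command set varies by CMC and iDRAC firmware), chassis-level fan, temperature, and power sensor status can be pulled with Dell's racadm CLI; the address and credentials below are placeholders:

    #!/usr/bin/env python3
    """Sketch: pulling chassis sensor status via remote racadm.

    Assumptions: racadm installed on the admin workstation, and the CMC
    reachable at the placeholder address below.
    """
    import subprocess

    CMC_ADDR = "192.168.0.120"         # placeholder CMC IP
    USER, PASSWORD = "root", "calvin"  # placeholder credentials

    # getsensorinfo reports chassis-level sensor status, but not the
    # per-component temperatures noted as missing above.
    subprocess.run(
        ["racadm", "-r", CMC_ADDR, "-u", USER, "-p", PASSWORD, "getsensorinfo"],
        check=True,
    )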

Setting up and configuring a new blade is extremely simple. Drop the blade in, choose either DHCP or a static IP address for the iDRAC management controller, wait for the blade to boot, and then click through the CMC's pages for that blade to launch the Java-based remote console. Most features you might need when building or troubleshooting a blade problem are available right from that console, so you won't have to flip back to the CMC management pages. I was also quite impressed with the console's nearly perfect mouse tracking on Windows.
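
The initial iDRAC addressing step can also be scripted rather than clicked through. A minimal sketch, assuming racadm is available and using placeholder addresses and credentials (syntax can vary by firmware revision):

    #!/usr/bin/env python3
    """Sketch: pointing a blade's iDRAC at DHCP or a static address via racadm.

    All addresses and credentials are placeholders.
    """
    import subprocess

    IDRAC = "192.168.0.121"  # placeholder: current iDRAC address
    AUTH = ["-r", IDRAC, "-u", "root", "-p", "calvin"]

    # Option 1: let the iDRAC pull an address from DHCP
    subprocess.run(["racadm", *AUTH, "setniccfg", "-d"], check=True)

    # Option 2: pin a static IP, netmask, and gateway instead
    # subprocess.run(["racadm", *AUTH, "setniccfg", "-s",
    #                 "192.168.0.130", "255.255.255.0", "192.168.0.1"], check=True)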

Building out the blade with an operating system can be done in several ways, including mounting an ISO directly from an NFS or SMB share local to the chassis rather than mapping it from the client system. This means that if you're 1,000 miles away from the chassis, you can mount and boot off an ISO image stored at the remote site just as easily as you would by mapping that ISO from your laptop or workstation, with the obvious benefit of greatly increased throughput from a local connection.
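
On iDRAC firmware that supports remote file shares, that site-local ISO mount can likewise be driven from the command line. Another hedged sketch, with placeholder share paths, credentials, and iDRAC address:

    #!/usr/bin/env python3
    """Sketch: attaching an ISO from a share local to the chassis via racadm.

    The share path, credentials, and iDRAC address are placeholders, and
    the remoteimage subcommand depends on the iDRAC firmware in use.
    """
    import subprocess

    AUTH = ["-r", "192.168.0.121", "-u", "root", "-p", "calvin"]  # placeholder iDRAC

    # Connect a CIFS-hosted ISO as virtual media on the blade
    subprocess.run(
        ["racadm", *AUTH, "remoteimage", "-c",
         "-u", "shareuser", "-p", "sharepass",      # placeholder share credentials
         "-l", "//192.168.0.50/isos/install.iso"],  # placeholder CIFS path
        check=True,
    )

    # Detach it again when the install is done
    # subprocess.run(["racadm", *AUTH, "remoteimage", "-d"], check=True)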

Naturally, you can mount CD/DVD and floppy images or physical media from the client as well. After that, it's just a matter of a normal OS install. I did run into a few snags with the remote console involving remote media mounting: occasionally, when disconnecting a mounted ISO, the remote console app would suddenly quit, forcing a restart. This may have been related to the version of Java on the Ubuntu Linux laptop I was using for the management tasks, but it happened several times. Nevertheless, each time I was able to get right back to where I was and complete the ISO remount without issue.

I also tested the remote management tools across a VPN linked through a relatively high-latency connection, such as you might find in a hotel. It was a bit sluggish but usable. The trade-off for Web interfaces is always functionality and grace versus weight and speed, and Dell has struck a reasonable balance.

Dell's Java-based remote console application proved quite complete, offering nearly every possible option, including power and drive-mounting functions. It worked extremely well in all cases and did not seem at all fragile, unlike some others. Unfortunately, it doesn't function on Mac OS X, but it does run on Linux and Windows.

Like HP and IBM, Dell offers a larger management package in Dell OpenManage that can manage groups of servers and blade chassis. While OpenManage wasn't strictly part of the test, it deserves mention.

Dell has implemented dynamic power and cooling features in the M1000e chassis. This means the chassis can shut down power supplies when the power isn't needed, and it can ramp the fans up and down depending on load and the location of that load. Thus, if only a few blades in slots one through three are working hard, the fans behind those blades will spin faster while the other fans spin at normal levels. This decreases power draw to some degree and still ensures that the cooling is present where necessary. As a result, the Dell solution was roughly on par with the HP chassis in terms of power draw during our two-blade power test, averaging just under 1kW at idle and about 1.25kW with the two blades under load. This was lower power utilization than the IBM BladeCenter H, which lacks the Dell's and HP's dynamic power and cooling features.

Dell offers lots of punch in the M1000e and has really brushed up the embedded management tools. As the lowest-priced solution except for the budget-conscious Supermicro entry, the M1000e has the best price/performance ratio and is a great value.

[ Return to "Blade server shoot-out: Dell, HP, and IBM battle for the virtual data center" | Read the review of the Dell PowerEdge M1000e, HP BladeSystem c7000, IBM BladeCenter H, or Supermicro SuperBlade. ]

This story, "Blade server shoot-out: Dell PowerEdge M1000e," was originally published at InfoWorld.com. Follow the latest developments in servers, processors, and other hardware at InfoWorld.com.

Copyright © 2010 IDG Communications, Inc.
