Blade server shoot-out: Dell vs. HP vs. Sun

InfoWorld's head-to-head comparison proves blade servers are sharp enough for enterprise use

Dell PowerEdge 1955 Blade System

Common thought might lead you to believe that Dell is somewhat behind the curve in the blade world. Much more time and ink have been spent on the blade technology available from Dell's competitors, and hence Dell doesn't enjoy the mindshare of Sun, HP, and IBM. Even we didn't expect Dell to put up much of a blade showing.

Common thought turned out to be far from the truth. Whereas the other vendors spent six to eight hours of their testing day working to get the SPEChpc benchmarks running properly and with the best results possible, Dell ran the full benchmark suite in their 90-minute preparation period the day before their official testing day — and those 90 minutes included their initial chassis powerup and system check procedures.

Not only that, but the Dell PowerEdge 1955 produced the best SPEChpc numbers by far of any of the blade systems tested. Color us surprised, and not a little chagrined at our original assumptions.

Dell's high marks on the SPEChpc tests have plenty to do with the hardware, but they're also the result of heavy tweaking and preparation by the Dell engineers. It's clear they're serious about HPC performance.

The PowerEdge 1955 solution isn't quite as physically elegant as the others in this test, and it certainly lacks the panache of the HP BladeSystem's LCD panel. Dell also uses larger blades than the dual-socket HP BladeSystem c-Class, but packs quite a bit of horsepower into the 7U chassis, which is the smallest of the tested solutions.

Ten blades fit into a single Dell chassis, resulting in a single-rack density of 60 dual-socket systems. With the soon-to-be-released quad-core Intel chips, this equates to 480 cores per rack, with a maximum power draw of 3.6 kW per chassis.
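
For anyone doing their own capacity planning, the arithmetic behind those figures is straightforward. Here's a quick Python sketch; the 42U rack size is our assumption, while the chassis, blade, socket, and core counts come from the configuration described above:

    # Rack-density arithmetic for the PowerEdge 1955 (illustrative sketch)
    # Assumes a standard 42U rack; chassis and blade figures are as reviewed
    RACK_UNITS = 42
    CHASSIS_UNITS = 7             # PowerEdge 1955 chassis height
    BLADES_PER_CHASSIS = 10
    SOCKETS_PER_BLADE = 2
    CORES_PER_SOCKET = 4          # forthcoming quad-core Intel chips

    chassis_per_rack = RACK_UNITS // CHASSIS_UNITS                           # 6
    blades_per_rack = chassis_per_rack * BLADES_PER_CHASSIS                  # 60
    cores_per_rack = blades_per_rack * SOCKETS_PER_BLADE * CORES_PER_SOCKET  # 480

    print(f"{chassis_per_rack} chassis, {blades_per_rack} blades, {cores_per_rack} cores per rack")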

Each blade sports two 2.5-inch SAS or SATA hot-swap drives with hardware RAID 0/1, supports up to 32GB of DDR2 RAM, and takes dual- or quad-core CPUs. The external I/O layout is similar to the HP solution's, with integrated switching across a passive midplane and either an integrated Cisco 3030 or a Dell PowerConnect 5316M Gigabit Ethernet blade switch module. Straight Gigabit Ethernet pass-through modules are available as well.

On the FC (Fibre Channel) side of the aisle, you have both McDATA and Brocade 4Gb FC switch modules available, as well as a pass-through module. The PowerEdge 1955 handles InfiniBand with a Topspin pass-through module providing a single port per blade.

In the lab, our PowerEdge 1955 chassis sported a PowerConnect 5316M switch, which is accessible at the console level via the chassis management tool's CLI. Of the 16 ports on the switch, 10 are reserved for blades at one port per blade, and the other six are broken out into RJ45 ports on the back of the module.

We successfully trunked this module to a Cisco 4948-10G switch to provide 6Gb of throughput to the main lab network. It would be nice to see 10 Gigabit Ethernet support in this chassis, but then, none of the blade systems we evaluated could do 10 Gig — yet.

Access made easy

One of the Dell system's unique features is the integrated KVM switch. It's a Dell-branded Avocent switch that has internal connections to each blade and breaks out into standard PS/2 and VGA ports via a dongle on the back of the chassis. This permits quick and easy direct KVM access to each blade, and the switch can uplink to another KVM switch relatively easily.

Further, this same KVM module doubles as an Avocent digital KVM port, permitting instant integration into another Avocent/Dell KVM switch to make management even easier. Each blade also has a front-mounted dongle connector that can support a directly connected monitor and keyboard. It's the best direct (non-IP) console management of any blade system.

The PowerEdge 1955 Blade System would be quite at home in a standard datacenter running a single server per blade, in an HPC environment serving as a low-footprint collection of compute nodes, or in a virtualization scenario (the Intel VT extensions are available, but disabled by default). In fact, when VMware Virtual Infrastructure 3 was evaluated in the lab, the VMware engineers chose the Dell chassis to run all their tests. That was partly because the Sun Blade 8000 system was still tied up re-running the SPEChpc tests, but also because they were confident there would be no compatibility issues with the Dell blade system, and it delivered the performance levels they needed.

Compared with the Sun unit, the Dell PowerEdge 1955's I/O options are relatively limited, but those available are enough for most architectures. The small form factor, reasonable power draw, and overall performance reflect well on Dell's engineering and result in a well-priced, well-appointed product.

Blade futures: 10 Gig ahead

After seeing all three of these blade solutions in action (and cleaning up the broken coffee press), we couldn't ignore the results: Blade technology is undergoing a renaissance of sorts.

Vendors are taking advantage of newer, less power-hungry CPUs and branching out into new levels of I/O that directly combat the common complaints about blade systems, such as heating and cooling concerns and management difficulties. As more infrastructures move toward centralized storage and virtualization, it's impossible to miss the impact that blade systems like these will have.

The near future will introduce another key element into the blade server picture: 10 Gigabit Ethernet. The three blade solutions we tested still rely on link aggregation of individual gigabit Ethernet ports or pass-through interfaces to deliver enough bandwidth to a single blade chassis, but all vendors are currently developing 10 Gig modules that will deliver a one-two punch of significantly reduced complexity and cabling. Once these modules are available, an entire chassis can run with only two 10 Gig connections and power cabling — and costs will decrease even further as 10 Gig ports drop in price.

That doesn't mean the adoption battle is over. The toughest challenge these systems face isn't providing the right mix of power and connectivity options, but rather meeting real-world planning requirements. It's easy to buy one or two 1U servers that slide beneath the purchasing limits of many IT departments; it's harder to push through a requisition for the tens of thousands of dollars a blade system costs. Without immediate justification for half a dozen or more servers at a time, that purchase may not be possible at all until it's time for a wholesale server refresh.

However, it's easy to justify a blade system when looking at virtualization, as it's cheaper to ramp up virtual servers on a blade-based infrastructure, not to mention the obvious cooling and power cost reductions. The additional savings in cabling, switch ports, and administration overhead are harder to quantify, but they're certainly present.

The Dell, HP, and Sun blade solutions we tested have a wide price range, but the low-end cost of entry is getting lower just as the products are getting better. Blades aren't suitable for every infrastructure, but as our test results show, their increasing power and flexibility mean it's getting easier to justify them in the enterprise world.

Brian Chee, director of Advanced Network Computing Laboratory at the University of Hawaii's School of Ocean and Earth Science and Technology, contributed to this article.

InfoWorld Scorecard
                                  Serviceability  Management  Availability  Value  Scalability  Performance  Overall Score
                                  (10%)           (15%)       (25%)         (10%)  (20%)        (20%)        (100%)
Dell PowerEdge 1955 Blade System  8.0             7.0         8.0           8.0    9.0          9.0          8.3
HP BladeSystem c-Class            9.0             8.0         8.0           8.0    9.0          8.0          8.3
Sun Blade 8000 Modular System     8.0             9.0         8.0           8.0    9.0          7.0          8.2
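
As a point of reference, the Overall Score appears to be a weighted average of the six category scores using the percentages above; the published 8.3, 8.3, and 8.2 fall out once the results are rounded to one decimal. The following Python check is our own back-of-the-envelope calculation, not InfoWorld's published methodology:

    # Sanity check: Overall Score as a weighted average of the category scores
    # (our assumption; matches the published 8.3 / 8.3 / 8.2 after rounding)
    weights = [10, 15, 25, 10, 20, 20]   # percent: Serviceability, Management,
                                         # Availability, Value, Scalability, Performance
    scores = {
        "Dell PowerEdge 1955 Blade System": [8.0, 7.0, 8.0, 8.0, 9.0, 9.0],
        "HP BladeSystem c-Class":           [9.0, 8.0, 8.0, 8.0, 9.0, 8.0],
        "Sun Blade 8000 Modular System":    [8.0, 9.0, 8.0, 8.0, 9.0, 7.0],
    }

    for product, vals in scores.items():
        overall = sum(w * v for w, v in zip(weights, vals)) / 100
        print(f"{product}: {overall:.2f}")   # 8.25, 8.30, 8.15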

Copyright © 2007 IDG Communications, Inc.
