Blade server shoot-out: Dell vs. HP vs. Sun
InfoWorld's head-to-head comparison proves blade servers are sharp enough for enterprise use
Dawn broke over Diamondhead on Oahu as I shrugged off my jetlag and drove to the Advanced Network Computing Lab at the University of Hawaii. It was a beautiful Saturday morning, but there was to be no lying on the beach today. By the time 6 p.m. rolled around, Brian Chee and I had uncrated half a dozen huge shipping containers; eaten more than our share of sushi; installed three out of four blade chassis; and broken four drill bits, one window pane, and a coffee press. Also, Brian's eyebrow had finally stopped bleeding from a brief but violent altercation with the business end of an L6-20 plug.
All in all, it was a good day. This series of events marked the beginning of the InfoWorld blade server shootout. Three of the Big Four blade server vendors — Dell, HP, and Sun (minus no-show IBM) — presented their latest and greatest blade server products for our careful inspection. As the smoke cleared at the end of the week, it became obvious that the new crop of blade servers is a giant step up from the previous generation.
The test plan was actually quite simple: We conducted performance tests using the SPEChpc benchmarking suite and examined server management tools. Time was short, so each solution got only a single day to strut its stuff, but the vendors had more than a month prior to the test to prepare their wares. This included preparing the blades for the SPEChpc tests and installing any accompanying software.
We chose the SPEChpc tests not only because we're interested in the blades' HPC performance, but also because they would give each solution a thorough workout, extending to CPU, memory, and interconnect performance. We allowed vendors their choice of hardware, including the type of interconnect to be used, with the only requirements being that the SPEChpc tests were limited to 16 sockets and 32GB of RAM. Each socket might hold a dual- or quad-core CPU, and each blade might have two or four CPUs, but otherwise, the goal was to see the best of the best.
Big Blue's big blank
Test week started with a bang — or maybe a fizzle, depending on your point of view. At the very last minute, and after months of preparation, IBM pulled a no-show.
Whether this was due to internal coordination problems or fear of competing against HP, Sun, and Dell is open for speculation, but we made several attempts to get IBM back in the game. In fact, when first confronted with the news that they weren't going to make it to the lab, I broke the rules and extended IBM's deadline by 10 business days.
Those 10 days passed with nary a whisper from Big Blue. They followed up a week or so later claiming that they could deliver hardware to the lab in another two weeks, but given their track record, I wasn't going to hold my breath. The test had been over for two weeks, anyway. Too little, too late.
HP BladeSystem c-Class
First up on the block was HP's brand-new BladeSystem c-Class. The c-Class substitutes 2.5-inch SAS drives for the 3.5-inch SCSI drives found in the previous crop of HP blades, and it abstracts much of the blade hardware into a modular backplane that boasts 5Tbps of throughput. These two factors mean HP's blades are half the size of their predecessors, yet offer more connectivity options and processing power.
The chassis is a complete redesign, boasting a nicely trimmed front-panel LCD that can be used to configure a surprising number of chassis operating parameters. The panel has a Web UI counterpart that matches the display exactly, easing "remote hands"-type administration. Up to 16 blades can fit into a single 10U c-Class chassis with a maximum power draw of 3.6kW. The N+N power supply configuration is also nicely handled, with six hot-swap power units lying low at the bottom of the chassis.
One of the more attractive aspects of blade systems is the ability to mix and match different types of blades within a single chassis. The HP c-Class currently offers three different ProLiant processing blades: the BL460c, an Intel EM64T-based blade; the BL465c, the AMD Opteron counterpart; and the BL480c, a 2P Intel EM64T-based blade. In addition to these blades, HP also offers disk-only blades, which can handle as many as six 2.5-inch SAS drives that appear as local disks to the immediately adjacent blade in the chassis — a very nice touch.
Any mix of these blades can occupy a single chassis, in any density. An interesting and welcome detail is the single internal USB port on each blade, ostensibly present to allow the use of a USB licensing dongle; unfortunately, many applications are still licensed in this fashion.
Our c-Class review unit contained preproduction BL460c blades sporting the new Intel quad-core Xeon CPUs. Running at 1.866GHz with a 4MB L2 cache, a 1066MHz FSB, and 4GB of RAM per socket, these BL460c blades proved quite powerful. They turned in respectable SPEChpc scores, due in no small part to the test's 16-socket limit combined with four cores per socket. However, the lower clock rate per core and limited FSB may have cost HP in the SPEChpc tests, as its scores fell generally in the middle of the three solutions. It's also quite possible that more time needs to be spent on compiler optimizations for these newborn chips.
Like all the other vendors, HP chose InfiniBand as the interconnect for the HPC tests, using an external Voltaire switch. But unlike Sun's X8400 blades, the c-Class handles much of its I/O internally with switching modules. This backplane switching architecture provides a closer relationship between the blades and significantly reduces cabling, but it proved problematic in the lab: The HP engineers struggled with odd issues relating to InfiniBand connectivity and performance throughout the testing. It wasn't until the very end of the day, in fact, that they were able to complete the SPEChpc suite to satisfy the testing requirements.
On the network I/O side, though, HP can run with Cisco switching modules to keep intrachassis communication within the chassis itself. The Cisco modules behave exactly like external Cisco switches, which will please network admins already familiar with Cisco's hardware. External uplinks take the form of eight gigabit Ethernet ports per switch module, which can be trunked to datacenter core switches.
BladeSystem runs its own internal management console, accessible via the Web, that can stand alone or be integrated into an HP Insight Manager installation. Multiple c-Class chassis may be managed collectively in this manner, regardless of whether Insight Manager is in place, which is quite useful for large data centers. Administrator-driven tools offer a wide array of monitoring options, from current and maximum power utilization and environmental data to blade health and performance information.
Internal chassis management is even more impressive. The chassis has enough smarts to determine the heat and power loads present and advise admins on proper fan population and placement. It will adapt power supplies as needed and where needed, as well as drop power levels to quiescent blades when possible. This results in lower heat production and power consumption, which are hot buttons (pun intended) as far as blade development and deployment go.
Not everything with the HP blades, however, worked as smoothly as the chassis management. The console redirection available through each blade's iLO card was somewhat lacking, with problematic mouse tracking and display artifacts. This can be very irritating when work must be performed directly on the console of each blade. The only other console redirection method involves using a front-mounted dongle port to connect a keyboard, monitor, and mouse directly to each blade. Not pretty, but it does work.
We also encountered a few oddities with the chassis, including one blade that seemed to have intermittent connectivity problems and another that spontaneously lost contact with its internal SmartArray RAID controller. It's highly likely that the pre-release nature of the blades contributed to these issues.
Nevertheless, HP completed the full SPEChpc benchmark suite runs at the small and medium dataset levels within a single day, complying with the testing parameters. A few lab hiccups aside, the BladeSystem c-Class is an impressive piece of engineering. The wide variety of blade options, including the disk-only blades; up-front display; adaptive power and cooling features; and density show that the c-Class definitely adheres to HP's "Invent" slogan.
Sun Blade 8000 Modular System
Although the other blade solutions in our test ranged in size from 7U to 10U, Sun's system came in the door at a whopping 19U. Of course, Sun's take on blades is a little different: It was the only blade solution to support four CPUs per blade, and its chassis holds 10 blades. With dual-core AMD Opteron CPUs, that equates to 160 cores in a single 42U rack.
That rack had better have plenty of power and cooling, though, as the Sun Blade 8000 draws a significant amount of juice, requiring approximately 9kW per chassis (actual draw is generally lower). That's quite a lot compared with the HP and Dell chassis, which pull roughly 3.6kW each. Luckily, the Sun system's density makes up for its power thirst.
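For readers who want to check the density math, here's a quick back-of-the-envelope sketch using only the figures quoted in this review (10 blades per 19U chassis, four dual-core Opterons per blade, roughly 9kW maximum per Sun chassis); the two-chassis-per-rack assumption simply comes from fitting 19U chassis into a 42U rack.

```python
# Back-of-the-envelope density and power check, using only the numbers
# quoted in this review (not vendor-supplied formulas).

RACK_U = 42                    # standard full-height rack
CHASSIS_U = 19                 # Sun Blade 8000 chassis height
BLADES_PER_CHASSIS = 10
SOCKETS_PER_BLADE = 4
CORES_PER_SOCKET = 2           # dual-core Opteron 800-series
MAX_KW_PER_CHASSIS = 9.0       # approximate maximum draw cited above

chassis_per_rack = RACK_U // CHASSIS_U                                          # 2
cores_per_chassis = BLADES_PER_CHASSIS * SOCKETS_PER_BLADE * CORES_PER_SOCKET   # 80
cores_per_rack = chassis_per_rack * cores_per_chassis                           # 160

print(f"Chassis per 42U rack: {chassis_per_rack}")
print(f"Cores per rack:       {cores_per_rack}")
print(f"Max power per rack:   {chassis_per_rack * MAX_KW_PER_CHASSIS:.1f} kW")
```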
It took Sun's engineers quite some time to get the tests up and running, and the Sun Blade 8000's results in the SPEChpc benchmark weren't the best. They did better in the SPECseis test, and I'm certain that, given more time to optimize the other two tests, Sun's overall results would have been much better.
Sun's X8400 Server Modules are large, each with two 2.5-inch SAS or SATA drives and a RAID 0/1 controller. Memory expands to 64GB per blade with 4GB DIMMs, granting a fully populated chassis a total of 640GB of RAM across 40 sockets. Those sockets can hold Opteron 870s at 2.0GHz, 875s at 2.2GHz, or 885s at 2.6GHz, all with 1MB of L2 cache. And last week, Sun announced availability of AMD Rev F processors in its new X8420 Server Modules.
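Likewise, the memory figures follow directly from the numbers above; the short sketch below works them out, with the per-blade DIMM-slot count inferred from the 64GB/4GB ratio rather than taken from Sun's spec sheet.

```python
# Memory math implied by the figures above. The DIMM-slot count is an
# inference from 64GB per blade at 4GB per DIMM, not a quoted spec.

GB_PER_DIMM = 4
GB_PER_BLADE = 64
BLADES_PER_CHASSIS = 10
SOCKETS_PER_BLADE = 4

dimm_slots_per_blade = GB_PER_BLADE // GB_PER_DIMM            # 16 (inferred)
gb_per_chassis = GB_PER_BLADE * BLADES_PER_CHASSIS            # 640
sockets_per_chassis = SOCKETS_PER_BLADE * BLADES_PER_CHASSIS  # 40
gb_per_socket = gb_per_chassis / sockets_per_chassis          # 16.0

print(dimm_slots_per_blade, gb_per_chassis, sockets_per_chassis, gb_per_socket)
```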
I/O options are plentiful. Each blade can handle as many as six external I/O connections, and there are two different methods of delivering the physical I/O ports to the blades themselves. The X8400 focuses more on pass-through ports than on integrated switching; it has wide Network Express Modules that aggregate an x8 PCI Express link from each blade, as well as smaller Express Modules that also ride an x8 PCI Express link but are built to the slim PCI-SIG ExpressModule form factor. These smaller modules reside at the top of the chassis and are designed to provide more granular I/O access to each blade.
The two NEMs in our test unit delivered four gigabit Ethernet ports to each blade. The InfiniBand interfaces slotted into the Express Modules, delivering two InfiniBand ports to each blade on a single module. This design is quite flexible, and its hot-swap capability is certainly attractive.
Although the Sun Blade 8000 is technically a blade system, it fits the image of a modular server system. The raw horsepower available across each blade's four sockets and the impressive array of modular I/O options position the system squarely in the HPC and virtualization arena. This is not a system to run simple Web or directory servers — unless they're virtualized.
Virtualization-ready
Because of its power, the Sun Blade 8000 really doesn't compare directly to the other blade systems in the test. The Dell and HP solutions can go three ways (standard server builds, HPC, and virtualization), but the Sun solution finds its sweet spot in HPC, high-end database, and virtualization tasks.
The Sun Blade 8000's hardware fits a virtualization build-out plan like a glove. The available I/O options are far better than those of the other blade systems, and the four sockets per blade, the NUMA (Non-Uniform Memory Access) architecture inherent in AMD Opteron technology, and the maximum RAM supported all make virtualization a foregone conclusion. As a VMware engineer speculated during tests conducted the week after the blade server shootout, "Wow … at standard loads with quad-core CPUs, this thing could support 600 virtual machines all by itself." Enough said.
The 8000's management framework falls in line with Sun's N1 Network Manager, and the chassis' Web management interface is quite quick and usable. Of all the solutions tested, Sun's Java-based remote console application is the fastest and easiest to use, not to mention that it runs on all workstation platforms.
Sun's ILOM Web interface was not only the fastest of the three solutions, it was also the easiest to navigate. From the chassis Web UI, a single click launches the console application, with tabs linking to each blade's local console. Nice.
Backing up the UI is a set of redundant CMMs (Chassis Management Modules). Each module can be separately linked to the network via a single gigabit NIC and all share a common IP address, providing a fast fail-over in the event of hardware problems. The local ILOM card in each blade is also accessed via internal bridging to these Ethernet ports, so these links are very important to normal chassis operation.
The Sun Blade 8000 is a masterpiece of engineering and aesthetically attractive to boot. At $100,000 as tested, it's definitely not a low-cost solution, but its focus isn't on the low-end market. This is a system that begs for a heavy workload — and delivers.