Dell's newest is a chip off the old blade
Second-generation PowerEdge 1855 blade system excels at networking options
Blade servers are evolving at a feverish pace, and Dell’s PowerEdge 1855 system shows just how far the technology has come.
Previous blade systems I’ve seen suffered from severe restrictions and trade-offs, mainly in regard to expandability, connectivity, and ease of use. Dell does a respectable job of addressing those trade-offs in its second-generation blade platform -- particularly in network communications.
The Xeon-driven PowerEdge 1855 -- which replaces the Pentium III-based PowerEdge 1655MC -- is similar to other modern blades. It has a 7U chassis designed to fit into a standard rack. You can slide as many as 10 dual-processor servers into the front of each chassis.
In the back are field-replaceable power supplies, fans, and networking connections. The blades and the network infrastructure are linked by a passive midplane that generally doesn’t require servicing. A KVM switch and a management processor are also built into each chassis.
A key consideration for many blade systems is density, and here Dell comes up short. The PowerEdge 1855 system handles as many as six chassis, or 60 servers, in a standard 42U-high rack. This density is noticeably lower than with other dual-processor systems: RLX’s 600ex system handles 70 servers per rack, IBM’s BladeCenter 84 servers, and Hewlett-Packard’s BL20p system 96 servers.
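For anyone who wants to sanity-check those density figures, Dell’s number falls out of simple arithmetic: a 42U rack holds six 7U chassis, each with 10 blades. Here is a minimal sketch of that calculation; the competitor figures are simply the per-rack counts quoted above, not values derived from their chassis dimensions.

```python
# Rough sanity check of the rack-density figures cited in the review.
RACK_UNITS = 42          # standard rack height
CHASSIS_UNITS = 7        # PowerEdge 1855 chassis height
BLADES_PER_CHASSIS = 10  # dual-processor blades per chassis

chassis_per_rack = RACK_UNITS // CHASSIS_UNITS                 # 6 chassis
dell_servers_per_rack = chassis_per_rack * BLADES_PER_CHASSIS  # 60 servers

# Competitor numbers are taken directly from the text, for comparison only.
quoted = {
    "Dell PowerEdge 1855": dell_servers_per_rack,
    "RLX 600ex": 70,
    "IBM BladeCenter": 84,
    "HP BL20p": 96,
}

for system, servers in quoted.items():
    print(f"{system}: {servers} servers per 42U rack")
```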
Along with their higher densities, several manufacturers (including HP and IBM) offer the option of installing double-width quad-processor blades. Dell doesn’t, and according to the product manager, the PowerEdge 1855 platform won’t support that option in the future.
Processor choice is also disappointing: Dell offers only Intel Xeon processors in the PowerEdge 1855. Other vendors offer more -- for example, HP offers Xeon and AMD Opteron processors in its systems; IBM can mix and match Power and Xeon; and Sun offers Xeon, Opteron, and UltraSPARC chips.
Where Dell does offer more choice is in the back of the chassis. There are four bays for hot-swap power supplies, each with integrated fans, plus two separate cooling modules. There are also four bays for communications modules, two of which are dedicated to each group of five servers. Each server group uses one communications bay for a Gigabit Ethernet pass-through or switch. The second bay serves optional daughter cards installed in the servers and can likewise hold either switches or pass-throughs.
There’s also a removable management module that contains the KVM switch’s connectors and an Ethernet management jack. The management module can be deployed in a redundant pair. However, the communications modules can’t be made redundant, short of duplicating connections via the daughter cards.
The system I tested had only three servers installed, each of which had an FC (Fibre Channel) daughter card. The second communications bay had an FC pass-through (you can also place a second Gigabit Ethernet switch in that bay). According to Dell, a Brocade FC switch and InfiniBand pass-through will ship in March.
The server blades are well designed. Each of the three I received contained dual 3.6GHz Xeon processors, 1GB of RAM (expandable to 12GB, or to 16GB with higher-density DIMMs), one onboard Gigabit Ethernet interface, and dual 73GB Ultra320 SCSI hard drives. All three servers were running Windows Server 2003, but Dell also supports several Linux distributions.