Appro Hyperblade Mini-Cluster proves petite yet power-packed

Appro crams a lot of horsepower in a 3-foot-high box

With the advent of load-balancing clusters for Web servers and Linux-based clusters for scientific computing and databases, many rack-mount servers are advertised by how many servers or processors fit in a 7-foot rack. For companies that need a fair number of servers but don't have room for a full rack, Appro International offers the Hyperblade Mini-Cluster.

The Appro Hyperblade Mini-Cluster is an inexpensive alternative to blade servers. It's more compact and easier to manage than the equivalent number of 1U servers, and the blades retain the full functionality of stand-alone servers while adding capable hardware management and monitoring.

The cabinet measures slightly more than 33 inches tall, 19 inches wide, and about 32 inches deep. It houses power supplies for the blades, a master blade for management and control, a floppy drive and CD-ROM drive shared by all the blades, and room for two industry-standard 1U rack-mount Ethernet switches. Cabinets can be stacked two high, and for further expansion, the full Hyperblade cluster mounts 80 of the same blades in a 7-foot rack.

Despite the difference in size, each blade offers the same capabilities as a standard 1U server, including a PCI slot for additional connectivity or SAN HBAs. Each blade supports one or two Xeon (up to 2.8GHz) or Opteron (up to the 246) processors, with Itanium support coming soon. Six DIMM slots hold up to 12GB of RAM. Each blade takes one or two ATA hard drives and provides two 10/100/1000 Ethernet ports, along with the usual keyboard, mouse, and video ports, two USB 2.0 ports, a proprietary controller port, and a 64-bit/133MHz PCI slot.

Power for the blades comes from dual power supplies, which can run at 110 or 220 volts, although at 110 volts each power supply supports only four blades. One ding: The power supplies are not redundant; each supplies half the blades in the chassis.

The master blade is built into the cabinet and provides hardware-level access to the blades. It can be replaced with a KVM switch, keyboard, mouse, and LCD monitor if desired.

The master blade provides KVM access to each blade through a browser interface, as well as hardware management for reboot, shutdown, power on/off, and hard reset. It also monitors cabinet temperature, plus fan speed and CPU and motherboard temperatures for each blade, and it can e-mail the administrator if any parameter exceeds its threshold. A DHCP server runs on the master node, so blade IP addresses can be assigned dynamically or set statically.
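Appro doesn't document how its monitoring software implements this internally; purely as an illustration of the kind of threshold-based alerting described above, here is a minimal Python sketch. The sensor readings, threshold values, e-mail addresses, and SMTP host are all hypothetical placeholders, not Appro's actual interface.

```python
# Minimal sketch of threshold-based alerting, similar in spirit to what the
# master blade does. All sensor values, thresholds, addresses, and the SMTP
# host below are hypothetical placeholders, not Appro's interface.
import smtplib
from email.message import EmailMessage

# Hypothetical limits (degrees C, RPM) for illustration only.
THRESHOLDS = {"cpu_temp_c": 70, "board_temp_c": 55, "fan_rpm_min": 2000}

# Sample readings standing in for per-blade sensor data.
readings = {
    "blade01": {"cpu_temp_c": 64, "board_temp_c": 48, "fan_rpm": 4500},
    "blade02": {"cpu_temp_c": 76, "board_temp_c": 51, "fan_rpm": 1800},
}

def check(blade, data):
    """Return a list of human-readable alerts for one blade's readings."""
    alerts = []
    if data["cpu_temp_c"] > THRESHOLDS["cpu_temp_c"]:
        alerts.append(f"{blade}: CPU temp {data['cpu_temp_c']}C over limit")
    if data["board_temp_c"] > THRESHOLDS["board_temp_c"]:
        alerts.append(f"{blade}: board temp {data['board_temp_c']}C over limit")
    if data["fan_rpm"] < THRESHOLDS["fan_rpm_min"]:
        alerts.append(f"{blade}: fan speed {data['fan_rpm']} RPM under limit")
    return alerts

alerts = [a for blade, data in readings.items() for a in check(blade, data)]

if alerts:
    msg = EmailMessage()
    msg["Subject"] = "Blade cluster alert"
    msg["From"] = "master@example.com"   # placeholder addresses
    msg["To"] = "admin@example.com"
    msg.set_content("\n".join(alerts))
    try:
        with smtplib.SMTP("mail.example.com") as smtp:  # placeholder SMTP host
            smtp.send_message(msg)
    except OSError:
        print("Could not reach mail server; alerts:\n" + "\n".join(alerts))
```

With the sample data above, the script would flag blade02 for a high CPU temperature and a slow fan and attempt to mail both alerts in one message.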

In my tests, the blades came loaded with SuSE Linux, but I installed Windows 2000 Server for testing as well. Installing an OS on a blade was no more difficult or complicated than on a stand-alone 1U server, with no additional or custom drivers required for smooth operation.

Performance was on a par with the last 1U dual Opteron server I tested, which also had ATA drives. Monitoring the blades through the BladeDome software was simple and straightforward, and the nicely integrated chassis was much easier to deal with than 16 individual 1U servers.

Software provisioning is not currently provided, but Appro will work with customers to set up open source provisioning tools. According to Appro, 99 percent of what it sells runs 64-bit SuSE Linux and 1 percent runs Windows, but Windows 2000 Server installs easily on the blades.

For database clusters, an integrated InfiniBand option is coming, and Appro supports several cluster interconnects, including Myrinet, Dolphin, Quadrics, and InfiniBand, through PCI adapters.

Organizations that need to deploy fewer than 16 servers in a clustered environment should look into this solution. Given the easy expansion to multiple units, it's also a good choice for businesses that must start small and build on a standard platform. Unless you need automated provisioning, it fits almost anywhere.
