Behemoth vs. blade

A farm of blades seems like the best way to scale to meet application demands, but only if the software cooperates

Generally, you can expect a giant to triumph over a platoon of munchkins. Maybe that’s why enterprises continue to favor hulking, eight-way servers over blades.

Blade servers never really took off the way everyone expected. The first products were plagued by heat and storage access problems, but those issues were eventually resolved -- or at least were brought under control. The new problem blade vendors face is defining market segments that can truly leverage clusters of small servers to perform a single task, particularly those running Windows. Blade server sales also suffer due to the ubiquitous presence of high-power, 1U rack-mount servers. The 1U servers may not match the ease of expansion provided by blades, but they certainly are cheaper, up to a point.

No one, however, can ignore the long-standing, sensible tradition of putting critical services on one enormous server and reaping the performance and reliability rewards. Of course, that choice may have already been made by your software vendor. Ultimately, the target application is the defining factor in this decision -- and that’s where most blade initiatives fall short of the mark.

If we’re talking about serving Web pages or providing Terminal Server or Citrix sessions, the answer is simple: Blades or 1U servers are simply better suited to the task. Less risk of central failure, software design that lends itself to the farm concept, and solid per-node metrics on load and service levels make resource planning easy -- and the individual servers are inexpensive. If it’s a compile farm you’re after, a collection of smaller servers will generally provide more bang for the buck than a few huge servers, and clustering compile farms is relatively simple.
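To put the resource-planning point in concrete terms, here is a minimal sketch in Python, assuming you have already measured how many sessions or requests a single node sustains at your target service level. The function name and figures are illustrative, not drawn from any vendor’s sizing tool.

import math

def nodes_needed(peak_load, per_node_capacity, spare_nodes=1):
    """Farm sizing: enough nodes to carry peak load, plus spares
    so losing a node doesn't breach the service level."""
    return math.ceil(peak_load / per_node_capacity) + spare_nodes

# Example: 4,500 concurrent Citrix sessions at 300 sessions per blade.
print(nodes_needed(4500, 300))  # 16 nodes: 15 for the load, 1 spare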

What’s missing here is true CPU virtualization. All the little servers in the farm model are usually running a clusterable application locally, communicating with a central file store to work with common data. True virtualization requires that a single application run across a group of small nodes, not that each node run its own copy of the same application concurrently.
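For illustration, here is a toy Python sketch of the farm model just described: each node runs its own copy of the application and coordinates only through a shared store, stood in for here by a work queue. Nothing below is a real clustering API; it simply shows the shape of the pattern -- and why it falls short of a single application spanning the nodes.

from multiprocessing import Process, Queue

def node(node_id, work_queue):
    """One 'blade': an independent copy of the same application."""
    while True:
        item = work_queue.get()
        if item is None:          # poison pill: shut this node down
            break
        print(f"node {node_id} handled item {item}")

if __name__ == "__main__":
    queue = Queue()               # stands in for the central file store
    farm = [Process(target=node, args=(i, queue)) for i in range(4)]
    for p in farm:
        p.start()
    for item in range(12):        # work arrives; any free node takes it
        queue.put(item)
    for _ in farm:
        queue.put(None)           # one shutdown signal per node
    for p in farm:
        p.join()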

Too bad, given that the price advantage clearly goes to blades. An eight-way, 7U server configured with eight 3GHz Xeon CPUs, 36GB drives, and 32GB of RAM goes for roughly $100,000, not including software licensing. The same CPU and RAM specs in eight two-CPU blade servers, including a full rack and blade enclosure, will only set you back about $65,000 for all the hardware. Of course, if you’re running a nonclusterable application, savings remain a dream -- you’re forced to shell out the big bucks for the big guns.
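A quick back-of-the-envelope check of those figures, in Python; the prices are the ballpark numbers cited above, not vendor quotes.

# One eight-way, 7U server vs. eight two-CPU blades with rack and enclosure.
big_iron = 100_000
blade_farm = 65_000

savings = big_iron - blade_farm
print(f"Blade farm saves ${savings:,} ({savings / big_iron:.0%})")
# Blade farm saves $35,000 (35%)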

Therein lies the biggest problem with the adoption of blade servers for core infrastructure tasks: lack of software support at the high end. The vision of the datacenter as a malleable entity, where computing resources are delegated from a pool of available processors to the applications that require them, is a great one. The hardware is ready, but the software simply isn’t -- and won’t be until the major application vendors and even the smaller ISVs break the habit of recommending behemoth servers to run their applications, or until truly transparent clustering solutions reach the market with application vendors’ support.

When clustering becomes the norm for applications, and abstraction of computing resources is as old hat as the CD-ROM, the proponents of the cluster approach will have won. And blades, or whatever they’ll be called by then, will triumph.
