“We go through a lot of presales rigor with the customer,” says Victor Mashayekhi, senior engineering manager for high-performance clustering at Dell. “We’ll run their codes and make the performance results available, we’ll install the images, and we’ll merge all the hardware pieces into racks, cable them up, and ship the racks to them.”
Hardware vendors also often integrate a parallel cluster file system, such as those from HP, Ibrix, and PolyServe, or the open source Lustre, which HPC workloads need in order to feed data to many nodes at once.
Because getting started is relatively difficult, however, HPC remains concentrated in the more technical departments even as the number of enterprise applications grows. You're also much more likely to find clusters numbering in the tens of servers, or fewer than 10, than in the hundreds or thousands.
“You’ll find HPC in financial services departments running actuarial workloads and trading analysis, or in engineering design and manufacturing,” says Gillett.
“We see HPC doing things like airline route scheduling to fill seats, and in the trucking industry to maximize the use of their fleets,” adds Dave Turek, vice president of Deep Computing at IBM, noting that industrial design, digital content creation, and gaming are also strong markets.
Trickling Into the Mainstream
Two recent developments hold some promise for pushing HPC further into the mainstream, however. The first is Microsoft’s entry into the HPC market in the first half of 2006 with Windows Compute Cluster Server 2003.
Microsoft is aiming squarely at the applications that now rely on Linux HPC solutions, by partnering with classic HPC application vendors such as Accelrys, MathWorks, Schlumberger, and Wolfram Research, who plan to build Windows versions of their HPC applications.
“It wouldn’t be too difficult for a biologist to set up a small Windows Compute Cluster of servers in his office rather than having to go to the organization’s ‘high priest of clustering’,” says Jeff Price, senior director for the Windows server group at Microsoft.
Northrop Grumman has already been testing Windows Compute Cluster Server 2003 on an 18-node cluster of dual Opteron servers to analyze huge volumes of satellite data simulating the detection of ballistic missile launches. “It integrates easily with our current Windows infrastructure,” says Andrew Kaperonis, a systems/simulation engineer at Grumman.
The second exciting development is the movement toward SOA (service-oriented architecture). Because SOA is inherently componentized, SOA application workloads are easier to distribute across a clustered environment.
“SOA is all about abstracting away the fundamental plumbing (messaging, multithreading, the execution environment) in a container, done once, so the application developer can just focus on writing the application logic,” says Platform Computing’s Songnian Zhou. “SOA will make grid computing easier, and grids will be a must for successful SOA.”
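Zhou's point is that once the container handles the plumbing, each service call becomes an independent unit of work that a grid scheduler can place on any node. A minimal Python sketch illustrates the idea; the service name and pricing logic are invented for illustration, with a local thread pool standing in for cluster nodes:

```python
# Hypothetical sketch (not any vendor's actual API): because each SOA
# service call is a self-contained unit of work, a scheduler can fan
# calls out across cluster nodes; a local thread pool stands in for
# those nodes here.
from concurrent.futures import ThreadPoolExecutor

def price_portfolio(portfolio_id: int) -> tuple[int, float]:
    """Stand-in for a stateless service operation, e.g. risk pricing."""
    # The "plumbing" (transport, threading, placement) lives in the
    # container/grid layer; the developer writes only this logic.
    value = sum(i * 0.01 for i in range(portfolio_id * 1000))
    return portfolio_id, value

# Fan eight independent requests out across four workers, gather results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(price_portfolio, range(1, 9)))

print(f"priced {len(results)} portfolios")
```

Because `price_portfolio` keeps no shared state, nothing in the application code changes whether the workers are threads in one process or servers across a cluster, which is exactly why componentized SOA workloads distribute so readily.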
Today, however, significant challenges remain in building and managing a viable high-performance computing implementation, and particularly in finding or modifying software to run on it effectively. HPC is still best suited to highly technical, processing-intensive applications with specific characteristics (see “A first look at Windows Compute Cluster Server”), and to organizations that can get extensive help from software and hardware vendors able to deliver a complete solution.
As a growing number of enterprises begin to see the advantages of cluster and grid computing, however, these technologies will undoubtedly work their way into other mainstream areas.
“We’re seeing more and more instances of clusters and grids acquired for something like bioinformatics or financial calculations but then partitioned off for payroll and logistics,” says IBM’s Dave Turek. The combination of more widespread use, easier Windows-based clustering, and SOA may indeed one day make high-performance clustering and grid computing a fairly mainstream enterprise application.