A modern, global enterprise is incredibly complex. Balancing materials availability forecasts with predicted sales trends and seasonal marketing strategies can seem like pure wizardry. But what if you had some help, in the form of a massive electronic brain that could handle the number-crunching for you?
Until recently, supercomputers were the exclusive domain of large universities and government research labs. Massive, arcane, and impossibly expensive, they required operational and maintenance skills far beyond the capabilities of your average enterprise IT department. But new developments in HPC (high-performance computing) technology are putting supercomputer-level performance within the enterprise's reach. The only question is whether the enterprise has any use for it.
The HPC field has changed dramatically over the last decade. Today, distributed-processing software allows even desktop PCs to join compute clusters and crunch numbers in their idle moments. Networked parallel processing technology makes it possible to build supercomputer-class systems from mainstream, off-the-shelf hardware and open source software. And in the past few years, companies such as IBM and Sun Microsystems have begun offering time-shared HPC services at affordable rates.
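To see how little code the basic divide-and-conquer pattern behind those clusters requires, consider the following minimal sketch. It is a toy illustration of my own, not any vendor's cluster software, and it scales the idea down to the cores of a single desktop using nothing but Python's standard library: the job is split into slices, each slice is crunched in parallel, and the partial results are combined.

```python
# A toy stand-in for cluster-style parallelism: split a numeric job
# into slices and farm them out to worker processes on one machine.
import multiprocessing as mp

def partial_sum(bounds):
    """Crunch one slice of the job: sum of squares over [start, end)."""
    start, end = bounds
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    n, workers = 10_000_000, 4          # illustrative sizes, not tuned
    step = n // workers
    slices = [(i * step, (i + 1) * step) for i in range(workers)]
    with mp.Pool(workers) as pool:       # one process per "node"
        total = sum(pool.map(partial_sum, slices))
    print(f"Sum of squares below {n}: {total}")
```

Real cluster middleware layers scheduling, fault tolerance, and network transport on top of this same pattern, but the core idea, independent slices computed in parallel and then reduced, is no more exotic than this.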
This is great news for the oil and gas, finance, and insurance industries, which have long relied on HPC for intensive calculations and complex mathematical modeling. But for more typical enterprises, supercomputing technology remains a tough sell. The promise is enticing, but the hurdles involved raise the question of how many businesses realistically need to perform calculations on the scale of those used to predict global weather patterns or model the stock market.
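For a sense of what "model the stock market" means computationally, here is a hedged sketch of one classic finance workload: a Monte Carlo simulation of a stock price under geometric Brownian motion. The parameters below are illustrative assumptions, not market data, and a production run would simulate millions of paths over whole portfolios, which is precisely why these industries buy HPC.

```python
# A small Monte Carlo simulation: random stock-price paths under
# geometric Brownian motion, averaged to estimate the price a year out.
import math
import random

def simulate_terminal_price(s0, mu, sigma, years, steps):
    """Walk one random price path; return the final price."""
    dt = years / steps
    price = s0
    for _ in range(steps):
        z = random.gauss(0.0, 1.0)  # one random market shock per step
        price *= math.exp((mu - 0.5 * sigma**2) * dt
                          + sigma * math.sqrt(dt) * z)
    return price

random.seed(42)
# Assumed inputs: $100 stock, 7% drift, 20% volatility, 252 trading days.
paths = [simulate_terminal_price(100.0, 0.07, 0.20, 1.0, 252)
         for _ in range(10_000)]
print(f"Mean simulated price after one year: {sum(paths) / len(paths):.2f}")
```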
And cost is not the only barrier to entry for HPC. Before any massively parallel supercomputing application can run, it first needs a data set to process. As any IT manager can attest, enterprise data is too often scattered across multiple, disparate systems, each with its own interfaces and data formats. As the growing market for data integration and SOA (service-oriented architecture) technology demonstrates, unifying that data is no easy task; until it happens, relying on enterprise data for serious computational modeling is out of the question.
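A contrived example makes the pain concrete. Suppose two in-house systems describe the same customer, one as JSON and one as semicolon-delimited CSV with revenue in thousands. The field names below are invented for illustration, but even this trivial case demands per-system mapping code before the records can feed any model.

```python
# Two systems, one customer, two incompatible representations.
import csv, io, json

crm_json = '{"customer_id": 1042, "name": "Acme Corp", "revenue_usd": 1250000}'
erp_csv = "CUST_NO;COMPANY;REV_KUSD\n1042;ACME CORP;1250"

def from_crm(raw):
    """Map the CRM's JSON record onto a common schema."""
    rec = json.loads(raw)
    return {"id": rec["customer_id"], "name": rec["name"].title(),
            "revenue_usd": float(rec["revenue_usd"])}

def from_erp(raw):
    """Map the ERP's CSV record onto the same schema."""
    row = next(csv.DictReader(io.StringIO(raw), delimiter=";"))
    return {"id": int(row["CUST_NO"]), "name": row["COMPANY"].title(),
            "revenue_usd": float(row["REV_KUSD"]) * 1000}  # kUSD -> USD

print(from_crm(crm_json) == from_erp(erp_csv))  # True only after mapping
```

Multiply that mapping effort by hundreds of systems and decades of accumulated conventions, and the data-integration hurdle starts to look taller than the compute one.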
So, while raw processing power may be more available and affordable than ever before, don't expect HPC to become a line item on your budget anytime soon. For most enterprise IT departments, those dollars will be better spent on traditional expenditures such as middleware and data warehousing, leaving mass-market supercomputing relegated to the category of the possible but impractical.
-- Neil McAllister