Of grids and clouds

Ian Foster (one of the founding intellects behind the concept of grids in HPC) has an interesting post on his blog this week in which he looks at trends in grid and cloud computing, and hazards a few predictions about the future.

The Big Idea behind grid computing is delivery of computing power on demand — in the sense that electricity is delivered on demand — to consumers.

So is “cloud computing” just a new name for grid? In information technology, where technology scales by an order of magnitude, and in the process reinvents itself, every five years, there is no straightforward answer to such questions.

Yes: the vision is the same—to reduce the cost of computing, increase reliability, and increase flexibility by transforming computers from something that we buy and operate ourselves to something that is operated by a third party.

But no: things are different now than they were 10 years ago. We have a new need to analyze massive data, thus motivating greatly increased demand for computing. Having realized the benefits of moving from mainframes to commodity clusters, we find that those clusters are darn expensive to operate. We have low-cost virtualization. And, above all, we have multiple billions of dollars being spent by the likes of Amazon, Google, and Microsoft to create real commercial grids containing hundreds of thousands of computers.

This speaks directly to the idea (obviously not original to me) that I raised on my own blog a few weeks ago about future alternative delivery models for HPC; Chris Aycock's recent essay there also addresses these issues.

I do believe that much of HPC will indeed become totally commoditized. Just as some commercial consumers of electricity have mission requirements that necessitate generating their own power (Walt Disney World, for example), some consumers of computational resources will continue to need to provision their own supers. But I am coming to believe that this will eventually be as rare as organizations that generate their own electricity today, even among institutions that currently consider themselves the unassailable elite of the HPC business.

Professor Foster goes on to predict what this future might look like: mixed large-scale provision by dedicated specialist providers alongside microproduction by local resources, tools for managing ginormo-scale resources, and protocols for payment, service discovery, job management, and interoperation. And, of course, more robust tools for the expression of parallel work.

It's a good post; I recommend you read it.
