If you've read this blog for a while, it's no secret that I believe one key aspect of cloud computing is a dramatic drop in the cost of computing. While many frame cloud computing's cost advantage in terms of better utilization through resource pooling and rapid elasticity, I believe a more fundamental shift is under way as data centers are redesigned around scale, efficiency, and commodity components.
Put another way, the former cost advantage (utilization and the like) comes from using existing data center design patterns more efficiently, while the latter transforms the cost basis of data centers by creating new design patterns altogether.
I wrote about this topic a few months ago in a post entitled "Are you making your data centers cloud-friendly?" In it I discussed trends evident at the San Francisco DatacenterDynamics conference: energy efficiency, raised operating temperatures, and "chicken coop" data center building designs.
A couple of developments this past week reinforced the perspective that data centers are rapidly evolving into mass-scale computing environments. Over the past decade, data center design has meant plugging together a collection of standardized components, each optimized for its own efficiency, on the assumption that the whole would thereby achieve optimum efficiency. That view is shifting to one in which the entire data center is treated as an integrated system designed to run at the highest possible efficiency, which requires custom-designing subcomponents to ensure each contributes to the overall efficiency goal.
I identified aspects of this in the previous post. The "chicken coop" data center is a long rectangle oriented with one long side facing the prevailing wind, allowing natural cooling. Facebook, in its Open Compute design, places air intakes and exhausts on the second floor so that cool air enters the building and drops onto the machines, while hot air rises and is evacuated by large fans.
The two things that caught my eye this week relate to server design and network equipment cost. The server design item concerns Facebook's custom server design and its implications for the economics of today's standardized blade and pizza-box servers. The network equipment item is Brocade's announcement that it will rent equipment for placement in cloud computing environments. Both align with the continuing shift of data centers toward low-cost, high-scale environments, and both call into question the viability of established data center designs and economics.