Rackspace learned that lesson a few times in 2009. The cloud provider suffered four high-profile failures throughout the year, adding up to hours of offline time for the company's customers. One blip was bad enough that Rackspace had to pay out nearly $3 million in service credits to its users.
Rackspace called the incidents "painful and very disappointing" and promised afterward to "execute at a high level for a long time." Today, the company continues to focus on uptime, but it also works to help users plan for the inevitable turbulence that comes with life in the cloud.
"If you want to cluster a server or build geographical redundancy, it's easier to do now than it ever was before, but you have to actually take those steps," says Rackspace's Lew Moorman. "The cloud doesn't bring inherent weaknesses that weren't present if you did things in-house before."
All things considered, the biggest lesson here may be that no single server, data center, or service is 100 percent reliable. If you don't build your business with that in mind -- well, my friend, you're just walking around with your head in the cloud.
This article, "The 10 worst cloud outages (and what we can learn from them)," originally appeared at InfoWorld.com. Track the latest developments in cloud computing at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.