I've been pretty cynical about the cloud and the relentless marketing drumbeat behind it. But I have to admit that the migration to the cloud is happening at a pace faster than I thought possible. Microsoft's full-on cloud mentality with Windows 8 provides only the most dramatic recent evidence.
But I'm not sure that we've spent enough time thinking through the implications.
The upside of using a public cloud service is easy to understand. No need for expensive local storage, no need for local servers, a reduction in power and cooling expenses ... and a reduction in IT staffing. When all you need to do is click around on a self-service portal to spin up new server instances with your provider, you don't need to worry about racking boxes or even managing your own VMs. You let "them" handle all of that. What could be better?
Good question. In large part, the answer depends on data speeds, latency, and availability. These days, many more urban locations have fiber out the wazoo, and you can get 100Mbps and even gigabit data circuits for less than what a T1 costs. With the expansion and interconnection of these networks, latency between offices and service providers may be only slightly higher than between local LAN segments, making the cloud provider seem to be present in your building, not 500 miles away. That's the core value proposition: A public cloud service has to look and feel like a local resource to succeed.
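If you want to put rough numbers on that "feels local" claim yourself, a minimal sketch is to time a TCP handshake to a LAN host and to your provider and compare. This is my own illustration, not anything from a specific provider; the hostnames in the comments are placeholders you'd swap for your own.

```python
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=2.0):
    """Time a full TCP handshake to host:port, in milliseconds.

    A crude but serviceable proxy for the latency your users will
    feel; run it a few times and compare LAN vs. cloud endpoints.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only wanted the timing
    return (time.perf_counter() - start) * 1000.0

# Hypothetical endpoints -- substitute your own:
# print(f"LAN file server: {tcp_rtt_ms('10.0.0.5', 445):.1f} ms")
# print(f"Cloud portal:    {tcp_rtt_ms('portal.example-cloud.com'):.1f} ms")
```

If the two figures are within a few milliseconds of each other, the provider really does look like another closet down the hall. When they aren't, the value prop starts to wobble.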
But guess what? Those high-end pipes aren't available everywhere, and without them, the value prop begins to erode.
Then there's the vastly more important question of availability. Sure, the latency between your offices and your cloud provider may be just 10ms or so, but what happens when some jackass with a backhoe in the next state makes that latency infinite? Suddenly you may have hundreds of employees with literally nothing to do. Today, loss of Internet connectivity still allows employees to work on local servers, access files on local storage, or in some cases, continue using virtual desktops served by a virtualization cluster in the backroom. If all of those services are on the other end of a severed fiber link, then everything comes crashing to a halt.
So then the case is made for redundant data connections. All well and good, but generally speaking, they're useful only if the routes diverge enough to skirt centralized data connectivity problems. Two separate paths out of the building may allow for continued service if that backhoe is in your parking lot or down the street, but what if it's near the aggregation point of your local fiber loop? And what about the potential ripple effect of natural disasters? We've had a few of those lately.
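The backhoe point can be made concrete with a little availability arithmetic. The back-of-envelope model below is my own illustration, not a provider's SLA math: two redundant links multiply out their failure probabilities only when their failures are independent, and a shared aggregation point puts a hard cap on the benefit.

```python
def combined_availability(a1, a2, shared_down=0.0):
    """Availability of two redundant links whose only overlap is a
    common failure domain (say, the same fiber aggregation point).

    a1, a2:      per-link availability, 0..1
    shared_down: probability the shared segment itself is down

    Illustrative model assuming independent link failures; real
    circuits fail in far messier, more correlated ways.
    """
    either_link_up = 1 - (1 - a1) * (1 - a2)
    return (1 - shared_down) * either_link_up

# Two "three nines" links that truly diverge:
diverse = combined_availability(0.999, 0.999)          # ~ six nines
# Same links funneled through one hut that's down 0.1% of the time:
shared = combined_availability(0.999, 0.999, 0.001)    # worse than one clean link
```

Two genuinely diverse 99.9% circuits get you to roughly 99.9999%; route them both through one vulnerable hut and the pair can end up less available than a single well-placed link. Divergence, not duplication, is what you're paying for.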
Today, the impact of natural disasters tends to be isolated to those immediately affected. If there's a wildfire that consumes a few businesses on one side of the county, businesses on the other side remain operational. If there's a hurricane and flooding, some companies may suffer the loss of buildings -- even data centers -- while other employers are unaffected. In a world where there are no local IT resources, where everything is handled via the cloud, a major disruption to a large cloud facility could take out businesses all over the place.
Just build a better cloud data center, you say? Well, you can outfit a building with all the fire suppression, drainage, and elemental protection devices you want, but if it's hit with an 8.5-magnitude earthquake, it'll be a long, long time before that facility gets back on its feet. In fact, it may never recover. Sure, cloud providers have disaster recovery plans, but rarely do they consider the wholesale destruction of an entire facility, and it's not economically feasible to provide 1-to-1 cold- or warm-site resources for every customer. Whether the downtime is measured in days, weeks, or months is dictated solely by the breadth of the disaster and the recovery plan of the cloud provider.
How many business customers that have ceded their IT infrastructure to a cloud provider could survive a month or more without access to their data, inventory, bookkeeping, and possibly even manufacturing control systems? What good is a call center that can neither answer calls nor look up customer records?
The time will come when a major cloud provider takes it in the shorts -- and that localized disaster ripples down to thousands of customers, wreaking chaos that surpasses the destruction of even the largest hurricane. Imagine companies in Boise going out of business because of a major earthquake in California. Or Virginia.
Cloud computing is enticing -- but it also represents a more radical departure than most people acknowledge. We either do it the right (and more expensive) way, or we risk distributing localized problems far and wide. After all, if you decide to go "all in" with the cloud, you're not just trusting your cloud providers with your data, you're trusting them with the future of your company.
This story, "The cloud hazard no one talks about," was originally published at InfoWorld.com. Read more of Paul Venezia's The Deep End blog at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.