Placing core business applications and data in the cloud leaves you without a suitable backup plan unless you maintain local backups of all that data and can afford to bring the applications and data back online quickly during an outage. But then, what's the point of leveraging the cloud if you have to run all that gear locally anyway, just in case?
These issues aren't limited to failure and data loss; security is at stake, too. Going back to the McAfee example, you might expect McAfee to have very stringent policies and procedures in place to thoroughly test and vet every DAT update it pushes out. You'd expect the company to have labs of hardware running the same operating systems and service packs its customers use, to verify that the updates would do no harm.
You'd also expect that your cloud vendor would have teams of highly trained security professionals guarding your data. You'd expect it to constantly monitor threats, both internal and external, and to employ cutting-edge technology to keep your assets safe from pilfering or destruction. You might be right. You might not. Unless or until there's a problem, you'll never really know.
Mistakes happen. They happen in your IT department, at vendors, at clients, everywhere. But when you have complete control over the assets you manage, you can employ suitable safeguards against inevitable human error. If those safeguards prove insufficient, you didn't plan well enough, but at least you own the problem. If a third-party company falls down on the job and takes your data with it, your only failure was believing you could safely farm out critical data and applications and let someone else worry about them.
Call me paranoid, but that's simply not a risk I'm willing to take -- not yet, and maybe not ever.