To understand the conflict between commercial cloud providers and internal IT, you need look no further than Jiffy Lube and its impact on traditional automobile repair services. What Jiffy Lube did was cherry-pick the types of service that are most frequent and that require the smallest investments in equipment, parts, and training, offering them for much less than their full-service competitors could manage, given their much higher overhead.
Sounds cloudish, doesn't it? Because what cloud providers don't have to concern themselves with is IT's central challenge: integration. The integration coin and its flip side -- maintaining a coherent enterprise technical architecture -- are what correspond to the extra equipment, parts, and training that full-service mechanics have to support with higher prices than Jiffy Lube has to charge for an oil change, Midas has to charge for mufflers, and Tires Plus has to charge for those rubber things that wrap around your wheels.
IT disintegration all over again
The Jiffy-Lubeness of going outside IT isn't new. Long before the cloud raised its foggy head, business departments, frustrated by their inability to get projects approved by the IT steering committee on which they sat, chose the Jiffy Lube alternative by contracting with an outside developer to build or install what they wanted. It wasn't uncommon for the new system to be built with development tools that hadn't appeared in the company before and to run on hardware and an operating system that were new to the data center -- assuming they were placed in the data center and not in someone's work cubicle.
Business departments, lacking IT expertise, often neglected to mention in their requirements -- and therefore in the contracts they signed -- any need to integrate the new system with anything else in the enterprise. Integration occurred to them much later, close to deployment. That's when the developer asked what the procedure was for placing the new servers in the data center and connecting them to the company's network. It's also when the development team started to mention the data flows into and out of the system, and to ask which in-house systems were supposed to provide and accept them.
I'm not being entirely fair to outside developers. In many cases they suggested involving IT much earlier in the process. But between the department head's reluctance to raise any red flags before it was absolutely necessary, and everyone's knowledge that IT had no spare capacity to devote to the project anyway, IT involvement kept being deferred until it could be delayed no longer.
IT did its best: it scraped together just enough staff time to make sure the new technology could physically coexist with production systems without blowing anything up or opening unacceptable security holes; wrote ad hoc batch-file extracts and transaction loaders to handle the new data flows; and for its trouble got the blame for delaying a project that had met all of its other milestones.
Then, the outside developer went away, leaving IT to maintain the mess -- which is why I was able to predict with such confidence last week that when it comes to NoSQL, the exact same thing will happen.