Jeffrey Snover, a 16-year Microsoft veteran and brilliant technologist, clearly identifies with the IT operations side of the house. "That's my tribe," he says. "I'm very optimistic about their future."
Despite that tribal affiliation, Snover may be helping to automate many operations folks out of a job, though I'm sure he would dispute that suggestion.
A distinguished engineer and the lead architect for Microsoft's Enterprise Cloud Group, Snover has been working closely with Azure CTO Mark Russinovich to develop the automated infrastructure behind the Azure public cloud. That sophisticated technology is also winding its way into Windows Server, System Center, and other on-premises Microsoft solutions.
I spoke to Snover last week about Microsoft's approach to the software-defined data center, which he contrasts with the OpenStack approach -- one that requires expensive system integrators to deploy:
Nobody can deny that [Microsoft's] core competence has always been taking very high-end computing and democratizing it, making it available to the masses, and we do that by driving high volume and simplicity. We are doing that again with software-defined data centers.
Here, Snover is talking about private cloud deployments using Microsoft's Azure Stack solution. I get his point: Microsoft knows better than anyone how to productize the cloud, whether public, private, or hybrid. InfoWorld's own cloud reviews support that assertion. To go a step further, among the top three public cloud plays -- AWS, Azure, and Google Cloud -- Microsoft is the only one to offer a hybrid cloud solution.
But as bullish as Snover purports to be about the future of operations, he keeps undermining his own argument. First, he makes a historical analogy between advanced cloud automation and the plug-and-play specs Microsoft introduced with Windows 95:
Prior to plug-and-play, operators and administrators used to go around with their bent paperclips, poking DIP switches and working with I/O maps and all that horrible, horrible stuff. With plug-and-play, all of a sudden you just took a device and you plugged it in and it just worked. Did anybody ever lose their job because of plug-and-play? I've never found anyone.
True enough. But while plug-and-play was a welcome advance, cloud computing is a quantum shift, with the end goal of enabling an entire data center to behave much like a single, fungible computer -- an almost infinitely scalable one if you're talking about the public cloud. The latter can never be true of the private cloud, though I have no doubt Microsoft's latest advances will make on-premises admins more productive.
Along those lines, Snover is particularly proud of Storage Spaces Direct, introduced in Windows Server 2016 Technical Preview 2, which enables admins to manage direct-attached storage with high availability and performance. Windows Server 2016 will usher in a host of other advances, too, led by new container technologies and the Service Fabric PaaS.
But again, all this stuff is descending from the Azure public cloud to the on-premises version, and the container stuff will enable developers to do many new and exciting things without the help of ops at all. More and more I wonder about the rationale for investing in the private cloud. Snover puts the decision this way:
The heart of the public cloud is elasticity, but the heart of the private cloud is control. These are two immutable truths. If you really care about control -- like I want to control who accesses these servers, what the bandwidth between components is, what's CPU and what's stepping on the CPU -- that's the kind of control [you get] when you purchase and control your own data center.
I asked Snover whether that sort of attention to detail was a little like admins flipping DIP switches prior to the advent of plug-and-play. He replied that, in the end, whether the majority of people opt for the public or private cloud "will be a lifestyle choice."
Interestingly, Snover doesn't think security will be a part of that choice. In fact, he likens worries about entrusting data to the public cloud to century-old fears about putting money in the bank as opposed to under a mattress.
So what, then, is the argument for going to the trouble and expense of maintaining a private cloud? There are regulatory imperatives regarding the location of data, certainly, and sensible trepidation about cloud lock-in and rising operating costs as opposed to the capital expense of doing it yourself. But ultimately, you can never have the same level of flexibility on premises as you can in the public cloud -- and I'm not just talking about scalability.
For example, 3D NAND could lower the price of flash memory to parity with spinning disk by next year, at which point spinning disk for tier 1 storage could become obsolete. You're talking monster increases in performance. Now imagine you just invested a few million bucks in some hybrid flash/disk storage today. You've just frozen yourself in time -- whereas you know your friendly public cloud provider will be upgrading pretty quickly, in part due to the economies of scale inherent in flash's much lower power consumption.
One of the most powerful arguments for the public cloud I've seen came from a Microsoft whitepaper, "The Economics of the Cloud," published in 2010. A key passage states:
For large agencies with an installed base of approximately 1,000 servers, private clouds are feasible but come with a significant cost premium of about 10 times the cost of a public cloud for the same unit of service, due to the combined effect of scale, demand diversification, and multi-tenancy.
Of course, that's cost, not price -- one would imagine public cloud providers intend to make a profit selling their services. Plus, as the whitepaper states in its conclusion, the transition to the public cloud is "a delicate balancing act." It would be insane to go all in right away.
But increasingly, fancy private cloud solutions seem like a waystation on the road to our public cloud future, even those that vastly reduce the complexity of deployment and maintenance. In the software-defined world, where so much of the virtual infrastructure is automated, I don't see how we can possibly need nearly as many operations people to run things. That, after all, is where much of the cost savings will come from.