Z Systems for your private cloud! Now we're talking.
If that seemed like a laugh line, IBM is taking the idea seriously: It recently launched a series of Power and Z System servers for private and hybrid cloud applications.
This has become a strategy common to old-school IT firms. Want to avoid getting steamrolled by the rush to public cloud? Then build, sell, and support preconfigured, dedicated hardware for on-prem private use that also works as a gateway to the hybrid and public cloud worlds beyond.
Big iron, big outlay
This approach isn't limited to hardware vendors, either. It's become a staple element of the lineup for any "dinosaur" IT firm. HP Enterprise has its Synergy line. Oracle is ginning up its Cloud@customer to debut by the end of the year. Dell and Lenovo are buddying up with Microsoft to deliver Azure Stack in an on-prem form with dedicated hardware, though it won't show up until next year.
For Microsoft, it's part of the long game to let the desktop and the on-prem server become sideshows to the main event of cloud computing and managed services.
In IBM's case, it's not that different: Keep alive the revenue streams from customers who previously invested in Power and Z System architectures, let them plug into IBM Cloud through those systems, and give them ports of common enterprise open source apps (Ubuntu OpenStack, Hortonworks Hadoop, Nginx). But most important, give those customers yet another reason not to migrate their workloads elsewhere.
Under all this is another question that doesn't get asked enough. Why spend all that money -- the setups are costly, no question -- for a local duplication of functionality already available in the public cloud? Wasn't one of the points of the cloud that we wouldn't have to rely on any particular architecture to get work done?
Why keep it close to the vest?
Argument No. 1 is that keeping servers local also keeps data local, typically for security purposes. But on-prem data residency is less of a guarantee of data security than claimed. Cloud providers may well be safer than on-prem datacenters at this point, in large part because they have more to lose from security lapses.
Argument No. 2 is that local big iron is about running huge, nonstop applications where the operational expenses in the public cloud will be brutally high. This is a chunk of IBM's argument for Power and Z -- hence, the touted beefiness of those systems.
But the capital expenses that come with on-prem hardware, especially anything of this caliber, are nothing to sneeze at. If you never recoup that outlay, you're stuck. And now that the tools for decomposing monolithic applications into microservices are becoming commodities, the problem is more that software needs reworking than that hardware needs upgrading.
The other argument is that as cloud services take off, the expenses associated with them will also start to mount, and the cloud will become the very cost center it was trying to replace -- so you're better off staying where you are. But that's less an argument against the cloud than a bet against a market where healthy competition among providers is likely to keep costs down.
Forward into the past
When IBM shed its legacy x86 server business to Lenovo, it was easy to interpret that as a sign the company was leaving hardware behind. In reality, IBM was leaving behind low-margin, commodity server hardware and turning to name-brand, high-end servers with exclusive designs: Power for OpenStack and other open source stacks, and Z Systems for mainframe customers. IBM is also determined to keep the Z line relevant by porting modern-day software tools to it, such as Google's Go language.
This smacks less of a strategy to bring time-tested power to a wider audience, and more of a plan to keep an existing product line alive for those already sunk into it. Given that legacy mainframes and custom processor architectures are precisely the kinds of scenarios the public and hybrid clouds were meant to provide alternatives to, it's ironic for IBM to go this route. Forward, into the past?