9 enterprise tech trends for 2016 and beyond

Enterprise technology development keeps racing ahead, and this year's forecast explains how most of it will be wrapped in the cloud

InfoWorld’s David Linthicum recently suggested it was time to retire the phrase “cloud computing” and simply say “computing.” That’s how essential cloud has become -- and why for the past couple of years cloud has framed my annual attempt to identify the nine key enterprise tech trends going forward.

In 2015, it became a lot clearer what cloud infrastructure in all its scalable, self-service glory will be best for: running applications composed of microservices outfitted with RESTful APIs. Most likely those services will run in containers, which give developers more control than ever in building, testing, and deploying applications. Containers in turn support devops, where ops leverages new automation, instrumentation, and monitoring -- and devs take new responsibility for applications in production.

Sure, that’s one sort of cloud operation, and few have all those pieces in place today. But it's remarkable how quickly this shared vision of the future has coalesced.

Still, several elements of this picture need to be completed before your average enterprise can contemplate adoption. Which brings us to our first trend of 2016:

1. 'Cloud native' shapes the future

Applications built from microservices running in containers have all sorts of advantages over monolithic applications. First and foremost, instead of dealing with obscure internal dependencies that make troubleshooting and updating painful, you can work with decoupled services that are individually monitored and managed.
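To make the microservices idea concrete, here is a minimal sketch of a single-purpose service exposing a RESTful JSON endpoint, using nothing but the Python standard library. The service name, port, and inventory data are hypothetical illustrations; a real deployment would package something like this in a container and let an orchestrator run many of them.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# One service, one responsibility: report inventory. The data is a
# stand-in for whatever backing store a real service would use.
INVENTORY = {"widgets": 42, "gadgets": 7}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/inventory":
            body = json.dumps(INVENTORY).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

def serve(port=8080):
    """Run the service (blocking). The port is arbitrary for the demo."""
    HTTPServer(("127.0.0.1", port), InventoryHandler).serve_forever()
```

Because the service is decoupled behind its API, it can be monitored, scaled, and redeployed independently -- the property that makes troubleshooting and updating less painful than in a monolith.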

But microservices architecture adds complications -- mainly, swarms of containers to keep track of. And who manages literally billions of containers in production every day? Google, the company that in 2007 contributed the Linux kernel’s cgroups container feature on which Docker was later built.

Last year Google introduced the open source Kubernetes project, which distills Google’s container management system into open source bits so mere mortals can wrangle clusters of containers at scale. This summer Craig McLuckie, one of the project’s founders, announced the formation of the CNCF (Cloud Native Computing Foundation), which will take Kubernetes as a starting point to build out an ecosystem for container scheduling, management, and orchestration. Watch this space carefully.

2. Spark 'streaming' accelerates

A funny thing happened to big data in 2015: Spark elbowed Hadoop out of the spotlight. Why? Because rather than processing data in big batches across many disks, as Hadoop does, Spark works its magic with small batches in big memory -- close enough to real time to be indistinguishable from streaming. (Storm, a true streaming solution, has already fallen out of favor.)
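The "small batches in big memory" model is easy to see in miniature. This pure-Python sketch (the event stream and batch size are invented for illustration) chops a stream into micro-batches and emits an updated aggregate after each one -- conceptually what Spark Streaming does with its per-interval RDDs, as opposed to a Hadoop-style job that reports nothing until the whole dataset is processed.

```python
from itertools import islice

def micro_batches(stream, batch_size):
    """Chop a (possibly unbounded) event stream into small in-memory
    batches, the way Spark Streaming chops a stream into interval RDDs."""
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def running_counts(stream, batch_size):
    """Yield an updated event count after every micro-batch --
    near-real-time snapshots instead of one end-of-job answer."""
    counts = {}
    for batch in micro_batches(stream, batch_size):
        for event in batch:
            counts[event] = counts.get(event, 0) + 1
        yield dict(counts)
```

With events `["buy", "sell", "buy", "buy", "sell", "hold"]` and a batch size of 2, the first snapshot is `{"buy": 1, "sell": 1}` and the final one is `{"buy": 3, "sell": 2, "hold": 1}` -- each snapshot arriving as soon as its small batch is done.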

Cloudera and IBM have gone all-in with Spark, while Amazon, Google, and Microsoft offer Spark as a service in their public clouds. But Spark still has major annoyances relating to memory management and resiliency, among other drawbacks. With this kind of momentum, though, you can expect many such problems to be addressed in the coming year.

3. Developers tap into machine learning

Not only do all the major clouds now offer analytics as a service, but they also provide machine learning APIs in the cloud; plus, open source machine learning tools abound. Ubiquitous machine learning capability enables developers to build applications that recognize patterns in gobs of data -- for fraud detection, face recognition, medical diagnoses, infrastructure optimization, Web ad-serving, you name it.

Of course, some commercial software and websites have had machine learning features for years (for anticipating user actions, recommending related products, and so on). The difference today is that machine learning is broken out as a separate capability any developer can exploit, and we now have tons of data and cloud computing capacity to throw at it, including fancy new servers equipped with GPU accelerators to run machine learning algorithms.
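Stripped of the cloud plumbing, the pattern-recognition core of many of these services is surprisingly small. Here is a toy nearest-centroid classifier in stdlib Python; the fraud-detection features (transaction amount, hour of day) and training points are invented for illustration, and real services would use far richer models.

```python
# Toy nearest-centroid classifier: learn the "center" of each labeled
# group, then assign new points to the closest center.

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(labeled):
    """labeled: {label: [feature_vector, ...]} -> {label: centroid}"""
    return {label: centroid(points) for label, points in labeled.items()}

def classify(model, x):
    """Return the label whose centroid is closest to x (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda label: dist2(model[label]))
```

Trained on a handful of (amount, hour) pairs -- small daytime purchases labeled "legit," large 3 a.m. ones labeled "fraud" -- the model flags a new $800 transaction at 2 a.m. as fraud. The point is not the algorithm's sophistication but that this capability is now a callable API any developer can exploit.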

4. Cisco’s ACI reinvigorates SDN

The very idea of software-defined networking (SDN) suggests that, eventually, hardware switches will become commoditized, which is why SDN has been seen as an existential threat to Cisco. So far, however, SDN adoption has largely been confined to telco and cloud service providers and has had little impact on the enterprise.

Now Cisco has leaped ahead and introduced a new SDN scheme dubbed Application Centric Infrastructure (ACI) that includes a new operations control protocol, OpFlex, to replace OpenFlow. Designed for large-scale deployments, ACI pushes SDN in a new direction, distributing some of the control over configuration to the network and giving admins the ability to adjust settings at a high level based on application requirements.

Most surprising of all may be the degree of openness. ACI uses RESTful APIs, and Cisco has posted an open source SDK along with various ACI tools on GitHub. Also, Cisco has proposed OpFlex as both an IETF standard and an OpenDaylight project, and OpFlex already has the support of Microsoft, IBM, F5, Citrix, Red Hat, Canonical, and others. Considering Cisco’s huge enterprise market share, this could be the jumpstart SDN needs.

5. PaaS gets a second chance

As Andrew Oliver’s classic 2012 article “Which freaking PaaS should I use?” made clear, the first generation of PaaS was bedeviled by arbitrary limitations. As a result, enterprise PaaS adoption has been relatively weak. Lots has happened recently, though, including a rush to support Docker by the two leading on-premises PaaS offerings, Cloud Foundry and OpenShift.

I still believe that, on premises, many enterprises can benefit from running PaaS as a modern, scale-out, polyglot version of the good old application server. According to Martin Heller’s recent InfoWorld review, OpenShift Enterprise 3 has incorporated Docker containers without skipping a beat: “For both developers and operators, OpenShift fulfills the promise of PaaS.”

6. SSDs take a bigger bite out of the data center

Already flash beats spinning disk in price-performance for IOPS-intensive applications such as VDI or high-performance databases, because spinning disk needs so many spindles to deliver equivalent performance. All-flash arrays and SSD-packed servers are no longer rare. Plus, everyone is excited about 3D NAND, which will offer much higher SSD capacities and performance.
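The spindle arithmetic is worth spelling out. All figures below are rough, assumed numbers for illustration (not vendor specs): call it roughly 200 random IOPS per 15K spinning disk versus tens of thousands per enterprise SSD.

```python
# Back-of-the-envelope price-performance math behind the spindle-count
# argument. Every figure here is an assumed, illustrative round number.
HDD_IOPS, HDD_PRICE = 200, 150        # per 15K spinning disk, dollars
SSD_IOPS, SSD_PRICE = 50_000, 400     # per enterprise SSD, dollars

def drives_needed(target_iops, iops_per_drive):
    """Ceiling division: you can't buy a fraction of a drive."""
    return -(-target_iops // iops_per_drive)

def cost(target_iops, iops_per_drive, price):
    """Hardware cost to hit an IOPS target with one drive type."""
    return drives_needed(target_iops, iops_per_drive) * price
```

Under these assumptions, hitting 100,000 IOPS takes 500 spinning disks at $75,000 versus two SSDs at $800 -- which is why, for IOPS-bound workloads, flash wins on price-performance even while losing badly on price per gigabyte.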

A number of Internet giants have filled their data centers with SSDs for power conservation as well as performance reasons. Nonetheless, performance aside, the notion that flash will reach price-per-gigabyte parity with spinning disk will remain a pipe dream for years. Spinning disk has made its own advances, including helium drives and shingled magnetic recording, so SSDs won’t fully replace hard drives anytime soon.

7. The hybrid cloud gets real

I’ve had trouble with the phrase “hybrid cloud.” Was it supposed to mean integration between on-premises infrastructure and the public cloud? The integration of any two clouds? If it required a private cloud set up like a public cloud, well, so few of the former exist that the hybrid cloud might as well be mythical.

But Microsoft is changing that with its Azure Stack for Windows Server, which enables customers to at least partially duplicate Azure public cloud infrastructure locally. When Windows Server and System Center 2016 ship next year, integrating them with Azure promises to yield a true hybrid IaaS environment. Through its professional services, IBM appears to be working in the same direction with hybrid public and private cloud OpenStack deployments.

Amazon lacks a hybrid play (unless you consider some customers’ efforts to duplicate their local environments on AWS “hybrid”). Google doesn’t have a hybrid play either, but thanks to Kubernetes and the CNCF (see No. 1) it's only a matter of time. With the recent hire of VMware co-founder Diane Greene to head up Google Cloud, I’m finally convinced that Google is serious about serving enterprise cloud customers, and a hybrid Kubernetes (and more) scheme will be part of the deal.

8. Machine learning amps up security

You’ve probably heard about financial services companies using machine learning to detect fraud. But other security possibilities for machine learning abound, such as flagging network anomalies, tracking user behavior, or detecting zero-day malware.

Cylance, which recently partnered with Dell, provides a high-profile example: The company uses deep learning algorithms to detect a claimed 99 percent of malware. Yet buyer beware, because machine learning algorithms have been used in security applications for years with varying success and many false positives. Advances will keep unfolding thanks to big data analytics in the cloud, but expect incremental gains, not miracles.
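Underneath the deep-learning marketing, many of these products rest on statistical baselining: learn what normal looks like, flag what deviates. Here is a minimal z-score-style detector in stdlib Python; the traffic numbers and the three-sigma threshold are illustrative assumptions, not any vendor's method.

```python
from statistics import mean, stdev

def anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from
    the baseline mean -- candidate network anomalies. Assumes the
    baseline is roughly normal, which real traffic often is not
    (one source of the false positives noted above)."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]
```

Given a baseline of requests-per-minute like `[100, 102, 98, 101, 99, 100]`, an observed burst of 450 is flagged while 99 and 101 pass -- the easy case. The hard cases, where attack traffic hides inside normal variance, are exactly where the false positives and incremental gains come from.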

9. Blockchain breaks out

Bitcoin has been tarnished many times over. But blockchain, the mathematical magic behind bitcoin, is on the verge of becoming a viable way to ensure the integrity of all kinds of transactions. In a recent feature article, InfoWorld’s Peter Wayner counted more than 100 companies exploring ways to extend blockchain for trading platforms, ID cards, contracts, secure storage, and more. Yes, even banks are testing it out -- an indication we’ll see blockchain go mainstream in 2016.
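That "mathematical magic" fits in a few lines. In this sketch, each block stores the hash of the previous block, so altering any past transaction invalidates every later link; real blockchains add proof-of-work, digital signatures, and distributed consensus on top. The transaction strings are hypothetical.

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Build a block whose hash covers its transactions AND the previous
    block's hash -- the chaining that makes history tamper-evident."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash and check every link; any edit to an earlier
    block breaks the recomputed hash or the next block's prev_hash."""
    for i, block in enumerate(chain):
        body = {"transactions": block["transactions"],
                "prev_hash": block["prev_hash"]}
        payload = json.dumps(body, sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

Chain two blocks together, then change a single transaction in the first: validation fails, because the stored hash no longer matches the recomputed one. That tamper-evidence -- without trusting any single party -- is what those 100-plus companies are trying to extend to trading, contracts, and identity.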