Microsoft Nano Server and the future of devops

Lightweight operating systems like Nano Server and CoreOS provide an important piece of the platform for the fully programmable cloud infrastructure of tomorrow

If you're a programmer, why worry about infrastructure? After all, we're accustomed to simply writing code, moving it to a staging environment, and letting the operations team push out the code as a user-facing application.

But now things are different. We’re writing applications that need to be delivered quickly, operate flexibly, and take advantage of the capabilities of tomorrow’s software-defined data centers.

Microsoft’s recently announced Nano Server is an interesting reflection of this trend. Designed to offer a lightweight, easy-to-deploy base OS for hypervisors and containers, Nano Server sits at the point where programmable infrastructure and the operating system meet. Think of it as the Windows equivalent of CoreOS, the lightweight, stripped-down version of Linux that has become the preferred OS on which to run Docker containers.

While Nano Server might look like a thin Windows Server with all the extraneous UI removed, it's actually a platform for programmatic delivery of features, where you can use PowerShell’s Desired State Configuration (DSC) tools to deploy those features as needed.

There’s a synergy here with configuration management tools like Chef, whose "recipes" include details of the services needed by an application. With Nano Server, you'll be able to not only deploy the OS, but also configure and deploy the services a given application needs simply by triggering a DSC operation.
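
To make that concrete, here’s a minimal DSC sketch, assuming a conventional Windows Server resource set; the node, feature, and service names are invented for the example, and Nano Server’s exact set of DSC resources may differ:

```powershell
# Declare the features and services an app needs, compile the
# configuration to a MOF file, then push it to the target node.
Configuration AppServices {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'nano-01' {                      # hypothetical target node
        WindowsFeature WebServer {
            Name   = 'Web-Server'         # make sure IIS is present
            Ensure = 'Present'
        }
        Service AppQueue {
            Name  = 'MyAppQueue'          # hypothetical app service
            State = 'Running'
        }
    }
}

AppServices -OutputPath .\Mof             # generates .\Mof\nano-01.mof
Start-DscConfiguration -Path .\Mof -Wait -Verbose
```

Because the configuration is just code, it can live in source control alongside the application, and a tool like Chef can trigger it as part of the same run that deploys the app itself.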

A combination of Nano Server and Chef offers an interesting option for operations teams because it pushes much of the responsibility for managing an application’s infrastructure to devops. Instead of having to manage multiple server images, an operations team needs a single base OS image that can be managed and patched, while devops teams handle the services and features that their apps -- and only their apps -- need.

If we’re going to deliver highly scalable, highly flexible cloud services to support applications, we’ll need to start thinking about how we manage this shift in responsibilities. Containerization is allowing us to abstract our applications from the operating system, while all the various virtualization techniques we’re using let us do the same for our infrastructure.

Most of the time we design applications for static infrastructures, where defined server instances host defined elements of our applications. That model works well for the familiar MVC and MVVM patterns, where we’re building applications that take data from endpoints, then process, format, and deliver it to another endpoint. But what of future Internet of things applications, where we need to process information from many thousands, if not millions, of endpoints? None of those endpoints will be synchronized with any other, let alone with our servers.

It turns out that one of the oldest design patterns works well here: the actor. Intended to process and manage asynchronous messages, the actor pattern is at the heart of functional languages like Erlang and is used by many large-scale cloud AI systems -- as well as by distributed NoSQL databases such as Basho’s Riak (many of which are written in Erlang). Actors are powerful tools, and they’re inherently scalable: As you add new actors to a service, all you need to do is update any address books to deliver messages appropriately.
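
To see the shape of the pattern, here’s a toy, single-threaded actor in PowerShell: a mailbox that’s drained one message at a time. It’s a sketch of the mailbox semantics only -- a real actor runtime adds concurrency, supervision, and distribution -- and all the message fields are invented:

```powershell
# A toy actor: one mailbox, messages processed strictly in arrival order.
$mailbox = [System.Collections.Generic.Queue[hashtable]]::new()

function Send-ToActor($box, $msg) { $box.Enqueue($msg) }

function Invoke-Actor($box) {
    while ($box.Count -gt 0) {
        $msg = $box.Dequeue()
        switch ($msg.Type) {
            'reading' { "stored $($msg.Value) from $($msg.Device)" }
            'alert'   { "alert raised for $($msg.Device)" }
            default   { "ignored unknown message '$($msg.Type)'" }
        }
    }
}

Send-ToActor $mailbox @{ Type = 'reading'; Device = 'sensor-17'; Value = 21.4 }
Send-ToActor $mailbox @{ Type = 'alert';   Device = 'sensor-17' }
Invoke-Actor $mailbox
```

The important property is that senders never share state with the actor; they only enqueue messages, which is what makes it safe to add more actors behind the same address book.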

The actor pattern can be applied pretty broadly. You can easily view a Node.js switching element in a Node-RED flow as an actor, for example -- the same goes for a service triggered by AWS Lambda or an Azure Event Hub. (It’s the way tomorrow’s developers will think, because it’s also how Minecraft environments work!)

Actors are an important tool for the Internet of things, where you need to work with messages from any device at any time, converting events into streams or into actions, or simply storing them as data. Actors are also key to scaling IoT applications: as demand grows, you simply add more actor instances. It’s worth thinking of an actor as a microservice, with its messages forming a thin, lightweight version of an enterprise service bus.
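
Treating each actor as a microservice makes that scaling story concrete: an address book maps each device to a mailbox, and adding capacity means adding entries rather than rewiring senders. Another hedged sketch, reusing the toy mailbox above; the routing scheme is invented for illustration:

```powershell
# A toy address book: one actor mailbox per device, created on demand.
$addressBook = @{}

function Route-Message($book, $msg) {
    if (-not $book.ContainsKey($msg.Device)) {
        # New device: register a fresh actor mailbox for it.
        $book[$msg.Device] = [System.Collections.Generic.Queue[hashtable]]::new()
    }
    $book[$msg.Device].Enqueue($msg)
}

Route-Message $addressBook @{ Type = 'reading'; Device = 'sensor-17'; Value = 19.8 }
Route-Message $addressBook @{ Type = 'reading'; Device = 'sensor-99'; Value = 22.1 }
"{0} actors registered" -f $addressBook.Keys.Count
```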

Tying an actor to a container, then to a thin OS, makes it easy to scale applications rapidly -- simplifying OS deployment, configuration, and operation. If you need a new service instance, you simply start a fresh container. While you can’t yet spawn new service instances in the milliseconds it takes AWS Lambda to trigger an application from an event, you can front-load infrastructure, triggering new deployments as services run. Suspended virtual machines are computationally cheap, and it takes very little time to reconfigure a software-defined network to support new VMs, especially if Nano Server or CoreOS is the underlying OS.
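
In practice, "start a fresh container" really is a one-line operation. As a sketch, assuming a hypothetical myorg/actor-service image and the standard Docker CLI, invoked here from PowerShell:

```powershell
# Add capacity: run another instance of the actor service in a new container.
docker run -d --name actor-svc-2 -p 8082:8080 myorg/actor-service

# Releasing that capacity again is just as cheap.
docker stop actor-svc-2
docker rm actor-svc-2
```

On a minimal base OS like Nano Server or CoreOS, the marginal cost of each new instance is mostly the container itself, which is what makes front-loading infrastructure this way practical.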

Working with programmable infrastructure is, as agile development principles suggest, best thought of as a first principle -- not an item to retrofit after you’ve delivered your code and users are already working with an application. Understanding how your application and its infrastructure interact has to be part of any development process, so you’re ready to roll as soon as you press the deploy button.

Bringing development and operations together is key to successful delivery of microservices based around scalable design patterns, especially on a new generation of OSes optimized for working with containers. That means development teams need to have a deeply embedded devops presence, so the infrastructure and the tools used to manage it are part of everything you do.
