Managing containerized applications at scale is a new kind of challenge, especially if you’re planning on automating as much of the operations as possible. There is a fundamental disconnect between containers and the underlying infrastructure of our datacenters, one that makes it difficult to map containers onto the available physical and virtual resources. That’s where datacenter-scale tools such as Kubernetes come into play, providing an essential new management layer to control how and where our containers run.
Originally developed and open sourced by a team at Google, the Kubernetes project is now managed by the independent Cloud Native Computing Foundation. Kubernetes is available on all of the major public cloud platforms including Azure. Perhaps best thought of as a datacenter operating system, Kubernetes monitors the resources used by a containerized application and deploys its elements on the underlying infrastructure to ensure that services operate correctly, managing the mapping between the requirements of the containers and the capabilities of the underlying infrastructure.
Microsoft’s Kubernetes implementation is now part of Azure Container Service, accessed through the latest release of the Azure CLI. Using the Azure command line makes sense, as much of Kubernetes (and Docker) is driven by familiar command line tooling. Although the Azure CLI runs on a desktop PC, there are other options. Like Bash on Windows, Azure’s Cloud Shell gives you a Bash prompt, in this case in your browser or in the Azure client running on iOS and Android devices.
Azure Container Service offers three different approaches to container management and orchestration. In addition to Kubernetes, Azure supports Mesosphere’s DC/OS and Docker’s Swarm and Compose, working with both Linux and Windows containers. It also provides standard API endpoints for those tools, so you can integrate them with existing continuous integration toolchains and with other container management tools.
Start with an Azure resource group
Building a Kubernetes-managed cluster is straightforward. First, set up an Azure resource group for your cluster. This defines where your cluster is hosted in Azure and gives you a namespace for all your commands. Creating the cluster in that resource group entails choosing an orchestrator (in this case Kubernetes), giving the cluster a name, and generating the SSH keys used to connect securely to Kubernetes to manage your containers.
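As a sketch of those steps with the Azure CLI (the group, cluster, and region names here are placeholders, and the commands assume an authenticated az session against your own subscription):

```shell
# Create a resource group to name and locate the cluster
az group create --name=k8sDemo --location=westus2

# Create a Kubernetes-orchestrated cluster in that group;
# --generate-ssh-keys creates the key pair if one doesn't already exist
az acs create --orchestrator-type=kubernetes \
  --resource-group=k8sDemo --name=k8sCluster \
  --generate-ssh-keys
```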
Once a cluster is up and running on Azure, you can manage it with kubectl, the Kubernetes command line client, from your desktop or from the Azure Cloud Shell. Use it to check the current state of your Kubernetes cluster before deploying any containers – either from the Docker Hub registry or from your own systems. Once an image is in place, additional kubectl commands set up networking services before exposing your application to the outside world.
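Carrying on with the placeholder names from above, the first few steps might look like this; the get-credentials subcommand merges the cluster’s credentials into your local kubectl configuration:

```shell
# Fetch the cluster's credentials for the local kubectl config
az acs kubernetes get-credentials --resource-group=k8sDemo --name=k8sCluster

# Check the state of the cluster before deploying anything
kubectl get nodes
kubectl cluster-info
```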
Kubernetes uses manifest files to deploy and manage objects. These are written in YAML, so they’re easy to edit in your usual text editor. One useful aspect of the Azure implementation of Kubernetes is that Azure-specific plug-ins are already installed, making it easy to manage Azure resources such as storage, with YAML descriptions for the services you’re using in standard Kubernetes manifests.
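For example, a manifest can ask Azure for managed disk storage through the built-in azure-disk provisioner. A sketch of such a storage class (the class name and storage account type here are illustrative choices, not requirements):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-standard
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
```

Pods can then request Azure-backed volumes through an ordinary PersistentVolumeClaim that names this class.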
Other manifest files contain details of the service containers you’re deploying, and, once in place, a single command line creates the Kubernetes pod that hosts your application. The Kubernetes command line shows what’s running, and manages networking for you.
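In practice, that sequence is a handful of kubectl commands; the manifest and deployment names here are placeholders:

```shell
# Create the pod (and any other objects) described in the manifest
kubectl create -f my-app.yaml

# See what's running
kubectl get pods

# Expose the deployment to the outside world through a load balancer
kubectl expose deployment my-app --type=LoadBalancer --port=80
kubectl get service my-app --watch   # wait for an external IP to appear
```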
Datacenter automation tools such as Kubernetes are an important part of building modern applications, as they can handle scaling up and down as required. Scaling your pods is relatively straightforward. You can use the command line to manually change the number of replicas in use, or set rules that automatically add pods based on CPU utilization. For example, you can set CPU requests and limits in your deployment YAML, then use the Kubernetes command line to autoscale the deployment against a target CPU percentage, with a minimum and maximum number of deployed pods.
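That CPU rule lives in the container spec of the deployment manifest; a sketch, with arbitrary figures:

```yaml
resources:
  requests:
    cpu: 250m     # guaranteed share: a quarter of a core
  limits:
    cpu: 500m     # hard ceiling: half a core
```

With requests in place, `kubectl scale deployment my-app --replicas=5` handles manual scaling, while `kubectl autoscale deployment my-app --cpu-percent=50 --min=3 --max=10` adds and removes pods as average CPU use crosses the target.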
Kubernetes isn’t hard to use, but getting the most out of it requires more than just setting up a cluster and deploying and scaling pods. You need to think about the resources you want to use and how you want them to drive scaling. It’s also a good idea to connect your Kubernetes deployment to other Azure services, such as the Operations Management Suite, with OMS agents running on your Kubernetes cluster, delivering container diagnostics to your Azure dashboard.
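Microsoft’s OMS documentation has the exact manifest, but in outline the agent deploys as a DaemonSet that runs one container per node, fed with your workspace credentials. A rough, illustrative sketch only – the image name is assumed, and the placeholder values must come from your own OMS workspace:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: omsagent
spec:
  template:
    metadata:
      labels:
        app: omsagent
    spec:
      containers:
      - name: omsagent
        image: microsoft/oms
        env:
        - name: WSID
          value: <workspace-id>
        - name: KEY
          value: <workspace-key>
```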
Kubernetes companions for developers
Microsoft recently acquired the Deis team from Engine Yard to add more Kubernetes tooling to Azure, while still supporting open source efforts. Deis’ offerings – Workflow, Helm, and Steward – have now been joined by Draft, a tool to speed up application development on Kubernetes. Instead of focusing on the deployment of complete apps, Draft provides developers with a way to quickly construct a Kubernetes-ready set of containers, with a sandbox deployment and a link to familiar version control systems.
Built to work with Deis’ other Kubernetes tools and with Docker, Draft supports familiar programming languages and environments, including microservice favorites Python and Node.js. Creating a Draft description of an app is easy enough: it takes your existing assets and packages them in Docker with an easy-to-edit set of configuration files. The resulting set of packages is uploaded to a Kubernetes host for testing, and any changes you make on your development system can be deployed to the host as you make them. It’s an approach that makes getting started with Kubernetes simple, and it’s ready to use with Azure’s Container Service.
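The core Draft workflow is just two commands, run from the root of an existing application and assuming a Draft installation already pointed at your cluster:

```shell
# Inspect the source tree, pick a language pack, and write a Dockerfile
# plus deployment configuration alongside the code
draft create

# Build the container image and deploy it to the sandbox cluster
draft up
```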
Brendan Burns, one of the founders of the Kubernetes project, now works on the Azure team. In a recent conference presentation he described tools like Kubernetes as the foundation of a new generation of PaaS. It’s a way of thinking that makes sense. By using containers as a way of hosting microservices, with Kubernetes as an adaptive foundation to handle scaling, developers are freed from having to know about the underlying infrastructure. All they need is their code and a few rules to set limits on the resources their application can use.
We can see elements of this approach in the latest versions of Azure’s Service Fabric PaaS, which mixes a hosted microservice development environment with support for Docker containers. All we need to think about is the code; we don’t need to consider the underlying infrastructure at all. With datacenter operating systems like Kubernetes, infrastructure should be irrelevant.
Microsoft’s support for Kubernetes in Azure is a sign of the direction the company is taking its cloud platform, and a reason for us all to rethink how we’re building and deploying virtual infrastructures and applications. There is a lot to be said for letting tools like Kubernetes do much of that work for you, handling the servers that host your code and managing scaling for you. All you’ll need to do is take your code and use tools such as Draft to quickly build out your Kubernetes infrastructure, tying it to your existing development toolchain and processes.