What Kubernetes can do for container orchestration

With Microsoft and Amazon joining the Cloud Native Computing Foundation at the highest membership level, Kubernetes, the open source container orchestration project, is becoming the de facto standard for managing containerized environments.

[Photo: shipping containers at a port. Credit: Jim Bahn (CC BY 2.0)]

Kubernetes is an open source container orchestration system. It runs Docker containers and makes sure they keep running: if a container or its host crashes, Kubernetes restarts the container, similar in spirit to VMware HA but implemented differently. Kubernetes supports many cloud providers as well as bare-metal deployments, where a bare-bones Linux OS runs directly on the hardware and containers run on top.

Why Kubernetes?

In short: container management at scale. Kubernetes is Greek for the pilot or helmsman of a ship, and its purpose matches the name: making sure you can manage all your containers at scale. Containers make packaging and deploying applications much easier, but managing those containers at high scale requires significant effort. The basic function of Kubernetes is to let you manage a cluster of containers as a single system, which simplifies both development and the operational work of running all those containers. Development teams can package an application into a container and hand it over to Ops, and Ops can run those containers.

A container is essentially a way to encapsulate an application so that it is easy to run anywhere. However, operations teams need management systems to run those containers, and handling large container environments is exactly what Kubernetes does.

The core components

A container is application software sealed into a standard unit for development, shipment, and deployment. A container image contains the code, runtime, libraries, dependencies, and configuration: everything needed to run the application as a lightweight package. Developers pack their code and its dependencies into a container that can then run anywhere, in any environment, and because containers are usually small, many of them can be packed onto a single machine. Docker is the most popular container format in today's world, though it is only a little more than four years old; the underlying container technology dates back to the early Unix era.

Pods are small groups of containers working together. A pod is the smallest deployable unit that can be created, scheduled, and managed in Kubernetes.
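As a concrete illustration, here is a minimal sketch of a single-container pod manifest; the names and the nginx image are placeholders, not anything the article prescribes:

```yaml
# A minimal single-container pod (names and image are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web           # labels let controllers and services find this pod
spec:
  containers:
  - name: web
    image: nginx:1.21  # any container image would do here
    ports:
    - containerPort: 80
```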

If you want to run multiple containers, say four copies of a specific web server plus a database, Kubernetes has a technology to support this called a replication controller, which makes sure the exact number of pods you requested is running for a given service.

One fundamental premise behind Kubernetes is something called "desired state management": you feed the cluster services a specific configuration, and it is up to the cluster services to go out and run that configuration across the infrastructure.

If you configure k8s (short for Kubernetes) to keep four pods running and one of them crashes, k8s automatically starts a new pod on any available cluster node so that the desired state of four copies is maintained.
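The desired-state idea can be sketched as a replication controller manifest: declare replicas: 4 and the cluster services keep four copies of the pod running. The names and image below are illustrative:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 4          # desired state: four copies of the pod
  selector:
    app: web           # manage every pod carrying this label
  template:            # pod template used to create replacements
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21
```

If a pod or its host dies, the controller notices the observed count has dropped below four and schedules a replacement on another node.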

The API is one of the main components enforcing desired state management, as it sits in front of the cluster services. This is one key building block of the system. The second key building block is the worker, which is really just a container host. Each worker runs a kubelet agent that communicates with the k8s cluster services.

Services are sets of pods that work together: a logical construct grouping pods with containers inside. Services tie pods together; for instance, several web server/content repository pods make up one service, and load-balancer back ends are another example. Usually you will notice that a single independent service requires more than one container: a group of co-located containers, one for the core functionality and others supporting activities like analytics, monitoring, database access, and logging.
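A service is typically tied to its pods by a label selector. Here is a minimal sketch that spreads traffic on port 80 across every pod labeled app: web; the names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web         # the service targets all pods with this label
  ports:
  - port: 80         # port the service exposes
    targetPort: 80   # port the containers listen on
```

Because the selector matches labels rather than specific pods, the service automatically picks up replacement pods as they come and go.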

Labels are key/value pairs used to logically organize groups of objects and to search for objects across different logical or physical boundaries.
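For example, a pod can carry several labels at once, so different selectors can slice the same set of objects along different dimensions; the key/value pairs here are purely illustrative:

```yaml
metadata:
  name: web-prod-1
  labels:
    app: web        # what the pod runs
    tier: frontend  # its role in the architecture
    env: prod       # which environment it belongs to
```

A selector such as env: prod then matches every production pod regardless of tier, while combining tier: frontend with env: prod narrows it to production front ends only.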

Basic constructs in a containerized environment:

  • Containers that run on clusters
  • Pods: containers that work together
  • Services: pods that work together
  • Labels: used to organize services

Kubernetes promises to bring the fundamental components of cloud-native computing together locally and provides the capability to scale applications to enterprise level. With its broad container support, Kubernetes gives you more flexibility to choose containers based on what your application requires.

With support for a wide spectrum of programming languages and frameworks such as Java, Go, and .NET, Kubernetes is already getting lots of support from the development community. It can also host a variety of workloads, including databases, ETL systems, and big data analytics.

Clusters can essentially be installed and run anywhere. Kubernetes is completely vendor agnostic and is supported on several platforms, including Google Compute Engine, Vagrant, Rackspace, CoreOS, Fedora, Azure, AWS, and vSphere. It is one of the most active projects on GitHub.

Kubernetes is portable across different clouds and provides the ability to respond to increased customer demand quickly and efficiently. Applications can scale on demand, deployments become quicker and more predictable, and developers gain the flexibility to roll out enhancements and new features without consuming many additional resources in public or private cloud environments.

This article is published as part of the IDG Contributor Network.