Lessons learned securing Kubernetes in the cloud

For any company exploring the potential of the cloud and Kubernetes, adopting infrastructure as code, security as code, and automation will be essential.

Until recently, our global reinsurance company ran a traditional on-prem infrastructure, relying solely on our own hardware in disparate data centers around the world. However, we recognized that this infrastructure could delay some of our initiatives that demand more rapid application development and faster delivery of digital products and services.

This realization led us to pursue a new cloud infrastructure and new deployment processes for several workloads that would increase automation, reduce complexity, and support lean and agile operations. Naturally, security was top of mind as well. As we moved some of our critical workloads from our single large network to the cloud, we needed to ensure our new environment could be continually hardened against potential threats.

Selecting a cloud, open source, and Kubernetes

The goal for my architecture team was to create small network deployments in the cloud whose resources would ultimately be owned by other teams. In this enabler role, we would provide the infrastructural basis for teams to achieve rapid deployments of innovative applications and get to market fast.

Our company is a Microsoft shop, so the choice to establish our new cloud infrastructure in Microsoft Azure was clear. Our next choice was to move to microservices-based applications, eyeing the possibilities of automation and both infrastructure as code and security as code.

While our security officers were initially wary of open source solutions, vetting cloud tools quickly convinced us that the best options out there are all open source. (Security concerns around open source, in my view, are outdated. Robust technologies with strong communities behind them are as secure, if not more so, than proprietary solutions.) The budgets of the projects our cloud infrastructure would support had to be factored in as well, steering us away from proprietary licensing fees and lock-in. This made our commitment to open source a natural choice.

To orchestrate our microservices infrastructure, my team was eager to try out Kubernetes. However, our first project involved work for a team that insisted on using licensed Docker Swarm, a popular option just before Kubernetes’s meteoric rise. We completed the project using Docker Swarm, with the arrangement that we could then experiment with putting Kubernetes to the same task. This comparison clearly proved Kubernetes as the superior choice for our needs. We then used Kubernetes for all subsequent projects.

Our Kubernetes cluster architecture in Azure

The Kubernetes clusters we deploy are accessible using real URLs, protected by security certificates. To accomplish this, our architecture on Azure includes a load balancer and a DNS zone belonging to the project, Azure Key Vault (Azure's secure secrets store), and storage using an Azure-native object store. Our architecture also includes a control plane within the cluster, fully handled by Azure. External access to each of these components is protected by traditional firewalls, strictly limiting access to only certain whitelisted IP addresses. (By default, access is limited to our own network as well.)
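As a sketch of that IP whitelisting at the cluster edge, a Kubernetes Service of type LoadBalancer can carry `loadBalancerSourceRanges`, which Azure translates into load balancer firewall rules. The service name, labels, and CIDR ranges below are placeholders, not our actual configuration:

```yaml
# Illustrative sketch: restrict the cloud load balancer to whitelisted CIDRs.
# The address ranges below are placeholders, not real networks.
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
  namespace: ingress
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24   # example: corporate network range
    - 198.51.100.7/32  # example: single whitelisted address
  selector:
    app: ingress-controller
  ports:
    - name: https
      port: 443
      targetPort: 443
```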

Our booster framework, which we use to kick off new projects, implements several components within the Kubernetes cluster. An ingress controller opens outside access to resources deployed within the cluster, such as project microservices. This includes an OAuth proxy that makes sure all ingress is authorized by Azure AD. An ExternalDNS controller creates DNS records for cluster resources in the project's DNS zone. Our secrets controller fetches secrets from the Azure Key Vault (information which shouldn't be kept in the cluster, and shouldn't be lost if the cluster must be destroyed). An S3-compatible API connects to our data storage sources. A certificate manager creates certificates for TLS access, in our case for free using Let's Encrypt.
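The ingress, OAuth, and certificate pieces come together on a single Ingress object. The sketch below uses real cert-manager and ingress-nginx annotation names, but the host names, issuer, and secret names are illustrative placeholders:

```yaml
# Illustrative sketch: an Ingress that forces OAuth2 authentication and
# obtains a Let's Encrypt certificate via cert-manager.
# Host names, issuer, and secret names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: project-service
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-url: "https://oauth.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth.example.com/oauth2/start?rd=$request_uri"
spec:
  tls:
    - hosts: [project.example.com]
      secretName: project-service-tls
  rules:
    - host: project.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: project-service
                port:
                  number: 80
```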

We also use tools for monitoring, logging, and tracing. For monitoring we leverage the industry standards Prometheus and Grafana. Logging uses Grafana Loki. Tracing uses Jaeger. We also tapped Linkerd as our protective service mesh, which is an optional enhancement for Kubernetes deployments.

Kubernetes security visibility and automation

What’s not optional is having a Kubernetes-specific security solution in place. Here we use NeuVector as a Kubernetes-native container security platform for end-to-end application visibility and automated vulnerability management.

When we first considered our approach to security in the cloud, tools for vulnerability scanning and application workload protection stood out as the last line of defense and the most important to apply correctly. The Kubernetes cluster can face attacks through both ingress and egress exposure and attack chains that escalate within the environment.

To protect application development and deployment, every stage of the CI/CD pipeline needs to be continuously scanned for critical vulnerabilities or misconfigurations (hence NeuVector), from the build phase all the way through to production. Applications need to be protected from container exploits, zero-day attacks, and insider threats. Kubernetes itself is also an attack target, with critical vulnerabilities disclosed in recent years.

An effective Kubernetes security tool must be able to visualize and automatically verify the safety of all connections within the Kubernetes environment, and block all unexpected activities. You also need to be able to define policies to whitelist expected communication within the Kubernetes environment, and to flag or block abnormal behavior. With these run-time protections, even if an attacker breaks into the Kubernetes environment and starts a malicious process, that process will be immediately and automatically blocked before wreaking havoc.
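Kubernetes itself supplies a basic building block for this whitelisting model: a NetworkPolicy that denies all ingress to a set of pods except explicitly allowed flows. A dedicated security platform layers richer behavioral detection on top, but the underlying idea can be sketched like this (namespace, labels, and port are illustrative):

```yaml
# Illustrative whitelist policy: pods labeled app=api accept traffic only
# from pods labeled app=frontend on TCP 8080; all other ingress is denied.
# Namespace and labels are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: project
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```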

The importance of infrastructure as code

Our Kubernetes deployments leverage infrastructure as code (IaC), meaning that every component of our architecture mentioned above can be created and recreated using simple YAML files. IaC enables crucial consistency and reproducibility across our projects and clusters. For example, if a cluster needs to be destroyed for any reason, or you want to introduce a change, you can simply destroy the cluster, apply any changes, and redeploy it. IaC is also useful for getting started with standing up development and production clusters, which use many of the same settings and then require only simple value changes to complete.
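One common way to express "same settings, simple value changes" in plain YAML is a Kustomize base with per-environment overlays. The layout below is an illustrative sketch under that assumption, not our actual repository:

```yaml
# base/kustomization.yaml -- settings shared by dev and prod clusters
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml

---
# overlays/prod/kustomization.yaml -- production-only value changes
# (the replica count is a placeholder)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
replicas:
  - name: project-service
    count: 3
```

Destroying and redeploying a cluster then reduces to reapplying the same files, which is what makes the recreate-and-change workflow described above practical.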

Importantly, IaC also enables auditing of all changes applied to our cluster. Humans are all too prone to errors and misconfigurations. This is why we have automation. Automation makes our secure deployments reproducible.

The importance of security as code

For the same reasons, automation and security as code (SaC) are also crucial to setting up our Kubernetes security protections. Your Kubernetes security tool of choice should make it possible to leverage custom resource definitions (CRDs), objects you upload to the cluster as YAML files to easily implement and control security policies. Just as IaC ensures consistency and reliability for infrastructure, SaC ensures that complex firewalls and security services will be implemented correctly. The ability to introduce and reproduce security protections as code eliminates errors and greatly enhances effectiveness.
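To make the CRD idea concrete: a security policy becomes a versioned YAML object applied to the cluster like any other manifest. The object below is a schematic illustration only; the kind, API group, and field names are hypothetical, not the actual schema of NeuVector or any specific product:

```yaml
# Schematic illustration only -- kind, apiVersion, and fields are
# hypothetical, not a real product's CRD schema. The point is that a
# whitelist policy lives in version control and is applied as YAML.
apiVersion: security.example.com/v1
kind: SecurityRule
metadata:
  name: frontend-to-api-only
spec:
  target: api            # workload the rule protects
  allow:
    - from: frontend     # only this caller is whitelisted
      protocol: TCP
      port: 8080
  unlistedAction: block  # anything not whitelisted is blocked
```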

What’s next

Looking into the future of our Kubernetes infrastructure, we intend to embrace GitOps for deploying our framework, with Flux as a potential deployment agent. We also plan to use Gatekeeper to integrate Open Policy Agent with Kubernetes, offering policy control over authorized container creation, privileged containers, and so on.
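As a taste of that policy control, Gatekeeper lets you define a ConstraintTemplate whose Rego rule rejects privileged containers at admission time. This sketch follows Gatekeeper's documented template structure; the template name and message are our own:

```yaml
# Gatekeeper ConstraintTemplate that denies privileged containers.
# Template name and violation message are illustrative.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenyprivileged
spec:
  crd:
    spec:
      names:
        kind: K8sDenyPrivileged
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyprivileged
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.privileged
          msg := sprintf("privileged container not allowed: %v", [container.name])
        }
```

A matching Constraint object then selects which resources (for example, all Pods) the rule applies to.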

For any organization beginning to explore the potential of the cloud and Kubernetes, I highly recommend investigating similar architecture and security choices to those I’ve outlined here, especially when it comes to automation and implementing infrastructure as code and security as code. Doing so should offer an easier road to successfully leveraging Kubernetes and harnessing its many benefits.

Karl-Heinz Prommer is technical architect at Munich Re.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Copyright © 2021 IDG Communications, Inc.