How to use the Kubernetes Ingress API

You can greatly expand the capabilities of the Ingress resource by using an Ingress controller like Kong for Kubernetes, which uses custom resource definitions and provides many plug-ins.

Kubernetes is seeing adoption across the tech industry and is on the path to becoming the de facto orchestration platform for modern cloud service delivery. Kubernetes not only provides primitives for deploying microservices in the cloud but goes one step further, helping developers define interactions and manage the lifecycle of their APIs.

The Ingress API in Kubernetes allows you to expose your microservices to the outside world and define routing policies for your north-south traffic, i.e., the traffic coming into your virtual data center.

The benefits of managing API lifecycles using continuous integration and continuous delivery (CI/CD) pipelines with Ingress are plentiful, but before we cover this, let’s start with some foundational knowledge.

The design and purpose of the Ingress resource

The simplest description of a Kubernetes cluster would be a set of managed nodes that run applications in containers. In most cases, the nodes in a Kubernetes cluster are not directly exposed to the public internet. This makes sense, as exposing all the services on a node would create an incredible amount of risk. In order to provide public-facing access to selected services, Kubernetes provides the Ingress resource.

The Ingress resource exposes HTTP and HTTPS routes from outside the cluster to selected services within. The Ingress resource also provides rules to control the traffic. This makes the Ingress resource a great solution for handling the various APIs provided by a large number of individual services. It does this by providing a single entry point for all clients and then handling requests to the back-end services. This is commonly known as a fanout configuration.

[Figure: Ingress fanout configuration. Image credit: Kong]
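A fanout configuration like the one described above can be sketched as a single Ingress resource whose path rules point at different back-end services. The service names and paths below are illustrative, not from the article:

```yaml
# Illustrative fanout Ingress: one entry point, with path-based rules
# routing to two different back-end services in the cluster.
# Service names (orders-service, users-service) are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-example
spec:
  rules:
  - http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: orders-service
            port:
              number: 80
      - path: /users
        pathType: Prefix
        backend:
          service:
            name: users-service
            port:
              number: 80
```

With this in place, requests to `/orders` and `/users` enter through the same public endpoint but are fanned out to separate services.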

The Ingress resource can also be set up for name-based virtual hosting, where it will route requests based on the host header:

[Figure: Name-based virtual hosting with Ingress. Image credit: Kong]
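Name-based virtual hosting is expressed by adding a host field to each rule, so routing is decided by the Host header rather than the path. The hostnames and service names here are hypothetical:

```yaml
# Illustrative name-based virtual hosting: two hostnames on one Ingress,
# each routed to its own back-end service based on the Host header.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-host-example
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-service
            port:
              number: 80
  - host: bar.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bar-service
            port:
              number: 80
```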

In order for the Ingress resource to work, an Ingress controller needs to be installed on the Kubernetes cluster. The controller creates the bridge between the Kubernetes cluster and the various public-facing interfaces that exist. For example, most cloud providers hosting Kubernetes provide a unique Ingress controller to interface with their prescribed public-facing methods. The various controllers all operate differently from one another and can provide varying amounts of additional functionality.

The benefits of using Ingress to manage API lifecycles with CI/CD pipelines

The Ingress resource is defined through a declarative configuration file, which is usually described in YAML. This is consistent with all Kubernetes resources and allows for straightforward integration into modern deployment patterns such as the combined practice of CI/CD. What this amounts to is the ability to deploy Ingress changes fast, frequently, and safely. This way, the Ingress resource can be incorporated into the same type of software development lifecycle patterns as the applications themselves.

How developers can accomplish Ingress using Kong for Kubernetes

A popular open source and cloud-agnostic Ingress controller is Kong for Kubernetes. The Kong for Kubernetes Ingress Controller is built as custom resource definitions (CRDs) within Kubernetes. This creates a Kubernetes-native experience for those already accustomed to defining resources within this platform.

Like your apps and services, Kong for Kubernetes can be installed via manifests, Helm, or Kustomize.

The Kong for Kubernetes Ingress Controller expands the capabilities of the Ingress resource by providing an extensive set of plug-ins that covers a wide range of capabilities including authentication, analytics, monitoring, and request and response transformations, just to name a few. By providing these common (and sometimes not so common) requirements on the Ingress controller, Kong for Kubernetes allows developers to focus more on the core requirements of the services. The value of this becomes especially apparent when an organization moves from a handful of monolithic applications to hundreds, if not thousands, of microservices.

For a list of common plug-ins, check out https://docs.konghq.com/hub/.

Kong plug-ins are defined as Kubernetes resources, where a config section holds the individual plug-in's settings.

Below is an example of a rate-limiting plug-in that will limit the traffic to five requests per minute:

[Figure: A rate-limiting plug-in resource. Image credit: Kong]
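A rate-limiting plug-in of this kind is declared with Kong's KongPlugin custom resource. The sketch below follows Kong's documented CRD format; the resource name is illustrative, and the five-requests-per-minute limit matches the example described above:

```yaml
# Illustrative KongPlugin resource limiting clients to 5 requests per minute.
# The metadata name (rate-limit-5-min) is hypothetical.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-min
config:
  minute: 5      # allow at most 5 requests per minute per client
  policy: local  # keep counters locally on each Kong node
plugin: rate-limiting
```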

Adding a Kong plug-in to a Kubernetes resource is done through a simple annotation in the metadata section of the resource. This allows for the plug-ins to be applied to different tiers. For instance, you could apply a plug-in to the whole Ingress resource or apply one in a finer-grained manner to an individual service resource.

Here is an example of the above plug-in being applied to an Ingress resource:

[Figure: Applying a plug-in to an Ingress resource. Image credit: Kong]
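Applying a plug-in works through the konghq.com/plugins annotation, which names one or more KongPlugin resources. In this sketch, the annotation assumes a KongPlugin named rate-limit-5-min already exists in the cluster; the Ingress and service names are hypothetical:

```yaml
# Illustrative Ingress with a Kong plug-in attached via annotation.
# Assumes a KongPlugin resource named rate-limit-5-min exists.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-api
  annotations:
    konghq.com/plugins: rate-limit-5-min
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 80
```

Because the annotation goes in the metadata section, the same mechanism can attach plug-ins at other tiers, such as an individual Service resource.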

Kong for Kubernetes can also be integrated into the full suite of Kong Enterprise products including Kong Studio, Kong Dev Portal, Kong Manager, Kong Brain, and Kong Immunity. This allows for even more advanced Kong plug-ins as well as a full API lifecycle solution. This suite of products covers the authoring and publishing of API specs as well as the management of your Kong resources and even analysis of traffic.

You can take a “spec-first” approach toward developing your APIs using Kong Studio, where you will find tools for writing documentation in the standard OpenAPI specification along with testing tools for immediate feedback. Kong Studio also provides tools for working with GraphQL. Kong Studio syncs directly into Git, which allows your spec files to be integrated into a CI/CD workflow that can automate updates to Kong Dev Portal.

Kong Dev Portal hosts your API documentation (which can be private or public). It is extremely customizable, allowing you to conform it to your organization’s style and branding. Having a well-documented API is important for productivity, and having a well-managed flow between Kong Studio and the Dev Portal can help ensure that the documentation is as up-to-date as possible.

Kong Manager provides a graphical interface to observe and manage the Kong suite of products as a whole. From here, you can observe the relationships between your routes, services, and plug-ins. You can get a real-time eye on traffic and track your consumers.

Kong Brain analyzes traffic coming through the Ingress and creates a visual service map of inter-service dependencies. It also has the ability to auto-generate OpenAPI spec documents based on the maps it generates. This is a valuable feature, as even with the best intentions, deployed services may not be documented properly.

Kong Immunity analyzes all the traffic coming through the Ingress and learns patterns to identify anomalies. These are often subtle requests that don't stand out but could be of interest, such as an unknown parameter that keeps trying to get through. This is also a very valuable feature, as spotting these needles in a haystack of hundreds of thousands of log entries is not easy.

[Figure: The Kong Enterprise suite. Image credit: Kong]

Making the most of Ingress

The Kubernetes Ingress resource provides a single entry point from outside Kubernetes to back-end services within. By leveraging declarative definition files, the Ingress resource can be treated like all other forms of code and be integrated into common software development lifecycles.

In order to bridge communication outside of Kubernetes, an Ingress controller is required. Kong for Kubernetes is an Ingress controller that uses custom resource definitions to greatly expand the capabilities of the Ingress resource by providing a large number of plug-ins, allowing developers to focus on core business value. Kong has a suite of enterprise tools that can greatly enhance productivity and security around your entire API lifecycle.

Marco Palladino, an inventor, software developer, and Internet entrepreneur based in San Francisco, is the CTO and co-founder of Kong Inc.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Copyright © 2020 IDG Communications, Inc.