Azure updates AKS with new Kubernetes technologies

Microsoft continues to evolve Azure’s container orchestration platform, adding proxy and WebAssembly support.

Kubernetes is the foundation of much modern cloud-native software. Although it’s a mature technology that’s important for Azure and other hyperscale clouds, Kubernetes is definitely not standing still. Regular updates add features, while a growing ecosystem builds tools and technologies that integrate with the underlying platform. It’s not surprising, then, that managed Kubernetes platforms like Azure Kubernetes Service (AKS) adopt new technologies more quickly than other cloud services, even by the rapid standards of cloud development.

Recent updates to AKS have improved application security by adding preview support for HTTP and HTTPS proxies, along with bringing the Krustlet project more into the mainstream with WebAssembly System Interface node pools. Both are currently available as opt-in previews, best considered for prototypes and experiments. However, both are intended to reach production status, with service-level agreement support, in the next few months (though probably longer for WebAssembly), and it’s well worth giving them a look to see if they meet your requirements.

Adding proxy support to Kubernetes

Support for HTTP and HTTPS proxies is likely to be the most useful feature in the near term, as it allows you to run Kubernetes clusters behind proxies, making it possible to run them in isolated networks. This is helpful if you want to use an Azure VNet to protect your services when working with sensitive data or using a hybrid cloud to extend on-premises Kubernetes applications into Azure. It’s also a likely approach if you’re using AKS in Azure Arc on your own infrastructure via Azure Stack hyperconverged infrastructure (HCI) or if you’re using a managed Azure Stack appliance on the network edge.

AKS’s HTTP proxy support allows you to bring required network services and traffic into your private network, chaining connections through proxies. Along with networking capabilities, it includes tools to manage certificates to ensure that your isolated nodes and clusters are still part of a full chain of trust.

Getting started is simple enough. Taking a cue from modern application development best practices, AKS’s proxy support is kept behind a feature flag. Open the Azure CLI and use the feature register command to register the HTTPProxyConfigService in Azure’s Container Service. This will take some time to enable, which you can check using the feature list command. Once it’s enabled, reregister the Container Service to use the new feature.
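The steps above can be sketched in the Azure CLI. The feature flag was named HTTPProxyConfigPreview in the Microsoft.ContainerService namespace in the preview documentation; preview flag names can change, so treat the name here as an assumption and check the current AKS docs.

```shell
# Register the HTTP proxy preview feature flag (name per the preview docs at
# the time of writing; verify against current AKS documentation)
az feature register --namespace Microsoft.ContainerService --name HTTPProxyConfigPreview

# Registration takes a while; poll until the state shows "Registered"
az feature list -o table \
  --query "[?contains(name, 'Microsoft.ContainerService/HTTPProxyConfigPreview')].{Name:name,State:properties.state}"

# Re-register the Container Service resource provider to pick up the feature
az provider register --namespace Microsoft.ContainerService
```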

Once the feature has been enabled, you can start to use it in your Kubernetes clusters. However, you can’t enable it on existing clusters at present; proxy support has to be added during cluster creation. Here you will need to use the aks create command, with a JSON or YAML configuration file. This contains the URLs of both HTTP and HTTPS proxies, with a list of domains that are excluded from the proxy service. Finally, if you’re using a certificate authority, you’ll need to include a base64-encoded subject alternative names certificate in PEM format. The same proxy details can be used in an ARM template.
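As a sketch of what that configuration file looks like, the preview documentation uses an `--http-proxy-config` switch on `az aks create` pointing at a JSON file with `httpProxy`, `httpsProxy`, `noProxy`, and `trustedCa` fields. The proxy URLs, resource group, and cluster name below are hypothetical placeholders.

```shell
# Hypothetical proxy endpoints and names -- substitute your own values.
# trustedCa is the base64-encoded PEM certificate described above.
cat > aks-proxy-config.json <<'EOF'
{
  "httpProxy": "http://proxy.internal.example.com:3128/",
  "httpsProxy": "https://proxy.internal.example.com:3129/",
  "noProxy": ["localhost", "127.0.0.1", "10.0.0.0/8"],
  "trustedCa": "<base64-encoded PEM CA certificate>"
}
EOF

az aks create \
  --resource-group myResourceGroup \
  --name myProxiedCluster \
  --http-proxy-config aks-proxy-config.json
```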

Once configured, a cluster’s proxy settings can’t be changed without creating a whole new cluster. The only part of the configuration that can be changed is the certificate authority certificate, in order to support rollover (especially if you’re using short-lived certificates from a service such as Let’s Encrypt).
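The certificate rollover described above can, per the AKS documentation, be done with `az aks update` and the same `--http-proxy-config` switch; only the `trustedCa` field is honored on an existing cluster. This is a hedged sketch with placeholder names, and the switch may require a recent CLI version.

```shell
# Rotate the CA certificate on an existing cluster: supply a config file
# whose trustedCa field holds the new base64-encoded PEM certificate.
# Other proxy fields cannot be changed after cluster creation.
az aks update \
  --resource-group myResourceGroup \
  --name myProxiedCluster \
  --http-proxy-config updated-proxy-config.json
```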

While proxies are easy to set up and use, AKS support is clearly very early. For one thing, some important scenarios aren’t supported. Currently, proxy support is limited to Linux-based clusters in which all node pools share the same proxy configuration, and it doesn’t work with Virtual Machine Availability Sets. However, these are relatively minor issues, and workarounds are possible.

Using Krustlets in AKS

The other big new release is preview support for WebAssembly-powered nodes as an alternative to running containers. This approach is especially interesting when it comes to resource-constrained environments running relatively simple services. A container may appear small, but it requires significant support resources to host an application userland. WebAssembly (WASM), particularly the browserless WebAssembly System Interface (WASI) environment, requires minimal system resources beyond a lightweight WebAssembly runtime, automatically sandboxing all your code.

A while back I wrote about the experiments Microsoft’s Deis Labs was doing with Krustlets, a way of using WASI in Kubernetes nodes. It was an intriguing alternative to heavyweight containers, offering a way to run Kubernetes on small edge devices. Bringing Krustlets into AKS as WASM/WASI node pools is an interesting way of extending it, both in the Azure cloud and on Azure Arc AKS instances on edge hardware.

Running WASM node pools requires some prerequisites, as it’s not as mature as other AKS previews. Getting it out early is an interesting step for Microsoft. There’s significant interest in WASM and Kubernetes, as shown by the popularity of the Cloud Native WASM Day at KubeCon North America 2021, so it’s good to see Azure getting ahead of the curve here, rolling out the tools developers will need to build and test WebAssembly distributed applications at scale. There’s an interesting crossover, too, with Deis Labs’ work on the WebAssembly Gateway Interface (WAGI) and tools like the Hippo development environment, which should help developers design and build Krustlet-based microservices.

Like working with the HTTPS proxy tools, you need to enable the WasmNodePoolPreview feature flag in ContainerService via the Azure CLI. Once it’s enabled, refresh the container service to ensure it’s fully registered. You will next need to install a preview release of the AKS Azure CLI extension. If you’re already using it, make sure it’s up to date.
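The equivalent CLI steps look roughly like this. The WasmNodePoolPreview flag name comes from the text above; the aks-preview extension commands are the standard Azure CLI extension workflow.

```shell
# Register the WASM node pool preview flag in the Container Service namespace
az feature register --namespace Microsoft.ContainerService --name WasmNodePoolPreview

# Check registration status, then refresh the resource provider
az feature list -o table \
  --query "[?contains(name, 'Microsoft.ContainerService/WasmNodePoolPreview')].{Name:name,State:properties.state}"
az provider register --namespace Microsoft.ContainerService

# Install the preview AKS CLI extension, or update an existing install
az extension add --name aks-preview
az extension update --name aks-preview
```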

You can now add a WebAssembly node pool to an AKS cluster. This must be running on Linux and needs to be separate from any container-based nodes. Once up and running, you can deploy WebAssembly workloads to your node pool. You’ll need to ensure that it’s set up to only run wasm32-wagi pods, so that AKS won’t schedule containers on your WebAssembly nodes, while at the same time preventing your WebAssembly pods from being scheduled on standard container nodes. This is one area that will need automation in future releases, so be careful to keep the two technologies separate in your prototypes.
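Adding the node pool is a single command in the preview docs, which use a `--workload-runtime WasmWasi` switch on `az aks nodepool add` to select the Krustlet runtime instead of containerd. The resource group, cluster, and pool names here are hypothetical.

```shell
# Add a WASI node pool to an existing Linux-based cluster; WasmWasi selects
# the Krustlet-based WebAssembly runtime rather than the container runtime
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myCluster \
  --name wasipool \
  --node-count 1 \
  --workload-runtime WasmWasi
```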

Microsoft provides a set of sample WebAssembly modules that you can load using kubectl, along with a preconfigured YAML file to configure your test application. This can help you with future applications, giving you a structure that can be customized to work with your own code. Finally, you can set up a reverse proxy to test your WebAssembly application, using a Helm chart to load the nginx ingress controller as a load balancer to give it an external IP address.
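As a sketch of the deployment and exposure steps above: the pod spec uses a nodeSelector and tolerations against the `kubernetes.io/arch: wasm32-wagi` label and taints that Krustlet nodes register with, which is what keeps WASI pods and containers apart. The image reference is a hypothetical placeholder for one of the sample WAGI modules; the ingress-nginx Helm repository is the project's official one.

```shell
# Illustrative manifest: nodeSelector steers the pod onto wasm32-wagi Krustlet
# nodes; the tolerations let it past the taints those nodes carry
cat > wasi-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wasi-demo
spec:
  nodeSelector:
    kubernetes.io/arch: wasm32-wagi
  tolerations:
    - key: kubernetes.io/arch
      operator: Equal
      value: wasm32-wagi
      effect: NoSchedule
    - key: kubernetes.io/arch
      operator: Equal
      value: wasm32-wagi
      effect: NoExecute
  containers:
    - name: demo
      image: <registry>/<wagi-module>:<tag>   # hypothetical sample module image
EOF
kubectl apply -f wasi-demo.yaml

# Expose the application for testing through the nginx ingress controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
```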

This is another step that will likely be automated in future releases, as Microsoft moves to bake WebAssembly support into AKS. Even so, it’s good to see experimental support arriving in preview. WASI and WAGI are still very new technologies, not ready for prime-time use, and although Microsoft clearly sees a long-term future for them in Kubernetes, you’re unlikely to be using them in production for at least another year.

Even in preview, these features are ready for you to start seeing what they can do for you. WebAssembly node pools are well worth exploring, and there’s a synergy with support for HTTPS proxies as a way of gatewaying WAGI-based microservices outside of VNets and private clouds. With AKS part of Azure Arc, there’s a lot of scope for delivering these services to devices outside the Azure cloud, while still using Azure and the Azure CLI as a management layer.

Microsoft has made a big commitment to Kubernetes, both for its own services and to support your code. Removing its dependency on containers via Krustlets should make it easier to start new instances of your code as needed while using fewer resources. The result should be faster, more lightweight services, and lower compute costs. It won’t happen overnight, but it’s a development you should be getting ready for now.

Copyright © 2021 IDG Communications, Inc.