Kubernetes autoscaling for event-driven workloads

A Microsoft and Red Hat open source collaboration, KEDA, brings event-driven autoscaling to any Kubernetes cluster


Kubernetes, in all its many forms, is a powerful tool for building distributed systems. There’s one big problem, though: Out of the box it offers only resource-based scaling. Given its history (it grew out of Google’s internal Borg service, launched in part as a response to AWS), that decision isn’t surprising. Most of the applications and services it was designed to run were resource-bound, working with large amounts of data and dependent on memory and CPU.

Not all distributed applications are like that. Many, especially those that work with IoT (Internet of Things) systems, need to respond rapidly to events. Here it’s I/O that matters most, delivering the events and messages that trigger processes on demand. It’s a model that works well with what we’ve come to call serverless compute. Much of the serverless model depends on rapidly spinning up new compute containers on demand, something that works well on dedicated virtual infrastructures with their own controllers but isn’t particularly compatible with Kubernetes’ resource-driven scaling.

Introducing KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat have been collaborating on a means of adding event-driven scaling to Kubernetes, announcing their open source KEDA project at Microsoft’s Build conference back in May 2019. That initial KEDA code quickly got a lot of traction, and the project recently unveiled its 1.0 release, with the intent of having the project adopted by the Cloud Native Computing Foundation.

KEDA can be run on any Kubernetes cluster, adding support for a new set of metrics that can be used to drive scaling. Instead of responding only to CPU and memory load, you’re now able to scale based on the rate of incoming events, reducing the risk of queuing delays and lost event data. Since message volumes and CPU demands aren’t directly linked, a KEDA-enabled cluster can spawn new instances as messages arrive, well before traditional Kubernetes metrics would have responded. It can also scale deployments down to zero when queues are empty, keeping costs to a minimum and allowing Kubernetes clusters to behave like Azure Functions.
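As a rough sketch of what this looks like in practice, KEDA is configured through a custom resource called a ScaledObject, which links a deployment to an event-source trigger. The example below assumes a hypothetical deployment named `queue-consumer` and an Azure Storage queue named `orders`; the exact schema and field names may differ between KEDA releases, so treat this as illustrative rather than definitive.

```yaml
# Illustrative KEDA ScaledObject (1.0-era schema; field names may vary by version).
# Scales the hypothetical "queue-consumer" deployment between 0 and 10 replicas
# based on the depth of an Azure Storage queue, rather than CPU or memory load.
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler
spec:
  scaleTargetRef:
    deploymentName: queue-consumer   # the workload KEDA will scale
  minReplicaCount: 0                 # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders            # hypothetical queue name
        queueLength: "5"             # target messages per replica
```

With a configuration like this, KEDA watches the queue on the deployment’s behalf: when messages arrive it activates the workload and feeds queue-depth metrics to the Horizontal Pod Autoscaler, and when the queue drains it can remove every replica, which is the scale-to-zero behavior described above.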

