Call it “serverless,” call it “event-driven compute,” or call it “functions as a service (FaaS),” the idea is the same: dynamically allocate resources to run individual functions, essentially microservices, that are invoked in response to events. Serverless compute platforms allow application developers to focus on the app, not the underlying infrastructure and all of its management details.
Most cloud providers offer some kind of serverless platform, but you can build one yourself with only two ingredients. One is Kubernetes, the container orchestration system that has become a standard platform for building componentized, resilient applications. The second is any of a number of systems used to build serverless application patterns in Kubernetes.
Most of the serverless frameworks for Kubernetes have these features in common:
- Deployment to any environment that supports Kubernetes, locally or remotely, including environments like OpenShift.
- Support for running code written in any language, with some common runtimes prepackaged with the framework.
- Triggering code execution by many kinds of events, such as an HTTP endpoint, a queue message, or some other hook.
One major advantage of building serverless on Kubernetes is gaining far greater control over the underlying platform. Many serverless offerings restrict the behaviors of the functions they run, sometimes making certain classes of applications impractical. With Kubernetes, you can create a serverless platform that matches your needs, leaving infrastructure to your Kubernetes operators and letting your developers focus on writing essential code.
Here are five of the major projects bringing serverless functionality to Kubernetes.
Fission
Fission is created and maintained by the managed-Kubernetes company Platform9. Its main claim to fame is that it lets you create FaaS applications without having to build containers, just by supplying definition files.
Fission can be installed with or without a Helm chart, and can be installed in either of two editions. There’s a full-blown version with message queue and InfluxDB support for logging, and a stripped-down edition with basic function serving. The former is designed for production deployments, and the latter for getting your feet wet.
To add code to a Fission deployment, you use YAML-based spec files. Fission’s command-line tooling lets you create YAML files for your functions and the routes used to trigger their entry points. The spec file also lets you provide environment variables, auxiliary containers, volumes, and Kubernetes taint/toleration controls for the code.
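The basic flow looks something like this (a hypothetical Python function; command names follow Fission's CLI, though flags may vary between versions, and the `fission spec` subcommand can generate the equivalent YAML spec files):

```shell
# Create an environment from one of the prepackaged runtime images
fission environment create --name python --image fission/python-env

# Register a function from a source file -- no container build needed
fission function create --name hello --env python --code hello.py

# Expose the function at an HTTP route that triggers its entry point
fission route create --method GET --url /hello --function hello

# Invoke it for a quick sanity check
fission function test --name hello
```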
Fission also provides “workflows.” Installed by way of a Helm chart, the workflow system passes the output of one function to another function; the functions don’t even have to be written in the same language. Note that this comes at a performance cost, because each function’s output is rendered into an interchange format, although the workflow system supports many common primitive binary types (e.g., an integer, or a generic byte stream) to keep the overhead down.
One of the downsides originally associated with FaaS was that the first time a function was invoked, there was a perceptible delay to launch the container associated with it. Fission keeps containers pre-warmed to minimize latency the first time a function runs.
Fission offers other conveniences for both developers and admins. The service can be deployed into a cluster that has no external internet access, and code can be hot-reloaded into the cluster on demand. Function activity can also be recorded and replayed to aid with debugging.
The Fission project is available under the highly liberal Apache license, so it can be freely reworked as needed.
Knative
Originally created by Google to run serverless apps on Kubernetes, Knative focuses on patterns common to serverless deployments in production. Using Knative effectively, though, requires direct expertise with managing many Kubernetes components.
In addition to Kubernetes, Knative requires a routing system or service mesh such as Istio, although other options like Ambassador and Gloo can be used too. This means a little more setup work, but the project has detailed guides for using each option in a variety of cloud services and Kubernetes environments, including vanilla Kubernetes.
Knative works mainly by leveraging or extending existing Kubernetes tooling and functionality. Apps, or functions, are configured by way of YAML files and delivered as Docker containers that you build. Adding, modifying, or deleting definitions is done through the kubectl command-line app. For metrics on Knative apps, use Grafana. Scaling can be done with Knative’s own autoscaler, or with any other Kubernetes-compatible scaler, including a custom-written one.
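A minimal Knative Service manifest might look like the following sketch; note that the `serving.knative.dev` API version has changed across Knative releases (early versions used `v1alpha1`), and the image name here is a placeholder for a container you build and push yourself:

```yaml
apiVersion: serving.knative.dev/v1   # check the version your install expects
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello:latest  # placeholder image
          env:
            - name: TARGET
              value: "world"
```

Applying the file with `kubectl apply -f hello.yaml` creates the service, and Knative handles routing and scaling from there.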
Knative is under heavy development, and many of its dedicated tools are still in a rough state. These include knctl, a CLI specifically for Knative that spares you the hassle of using Kubernetes’s other tools to manage Knative if you just want to focus on Knative; and ko, a tool for building Go apps on Knative that eliminates the container build step.
Kubeless
Kubeless was created by Bitnami, the developers of easy installers for common web application stacks. Kubeless uses Kubernetes’s native Custom Resource Definitions to handle functions, so there’s slightly less abstraction between Kubernetes metaphors and Kubeless functionality.
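Because functions are Custom Resource Definitions, a Kubeless function is just another Kubernetes object. A sketch of a Function resource (field names follow the `kubeless.io/v1beta1` API; inlining the code in the manifest is one of several ways to supply the function body):

```yaml
apiVersion: kubeless.io/v1beta1
kind: Function
metadata:
  name: hello
spec:
  runtime: python3.7        # one of the prepackaged runtimes
  handler: handler.hello    # <file>.<function> entry point
  function: |
    def hello(event, context):
        return "Hello from Kubeless"
```

Once applied, `kubectl get functions` lists it alongside your other Kubernetes resources.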
Most common language runtimes come with the platform: .NET, Java, Python, Node.js, PHP, Ruby, Go, and even the new Ballerina language for cloud-native development. Runtimes are just Docker images, although Kubeless has a specific packaging format for using Dockerfiles to build custom runtimes.
Another handy Kubeless feature is its CLI, which is command-identical to the AWS Lambda CLI. This is tremendously convenient if you want to migrate away from AWS Lambda but preserve some of your existing management scripting, or if you just don’t want to learn a whole new command set.
Kubeless also works as a plug-in for the Serverless Framework, a system for building serverless applications on a variety of architectures. If you already use the Serverless Framework or Kubeless, you’ll have an easier time adding the other than adopting something else entirely.
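As an illustration of the Serverless Framework integration (assuming the serverless-kubeless plug-in; the service and handler names here are made up):

```yaml
# serverless.yml
service: hello
provider:
  name: kubeless
  runtime: python3.7
plugins:
  - serverless-kubeless
functions:
  hello:
    handler: handler.hello   # handler.py, function hello
```

Running `serverless deploy` then pushes the function into the cluster much as it would push a Lambda function to AWS.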
OpenFaaS
The pitch for OpenFaaS is “serverless functions made simple.” By simple, the developers mean “not much more difficult than deploying a Docker container.”
OpenFaaS can be deployed either to Kubernetes or to a Docker Swarm cluster (for local testing or low-demand use). You use the OpenFaaS CLI to build, push, and deploy Docker images into the cluster to run functions. Existing templates provide pre-made ways to deploy apps written in Go, Python, Node.js, .NET, Ruby, Java, or PHP 7, although you can always roll your own. The OpenFaaS CLI also provides you with ways to manage secrets in your cluster, while the built-in web UI allows you to create new functions and manage them.
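The basic loop looks something like this (the function name is hypothetical, and flags may vary by faas-cli version):

```shell
# Scaffold a function from one of the prebuilt language templates
faas-cli new hello --lang python3

# Build the Docker image, push it to a registry, and deploy it
# to the cluster, all driven by the generated hello.yml
faas-cli build -f hello.yml
faas-cli push -f hello.yml
faas-cli deploy -f hello.yml

# Or collapse build, push, and deploy into a single step
faas-cli up -f hello.yml

# Invoke the deployed function through the gateway
echo "world" | faas-cli invoke hello
```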
Another version of OpenFaaS, OpenFaaS Cloud, repackages OpenFaaS with features for teams of developers, including integration with Git (GitHub and self-hosted editions of GitLab), CI/CD, secrets management, HTTPS, and the ability to feed events to Slack and other sinks. OpenFaaS Cloud is available as a free open source product, and in a hosted version that is currently free to use.
OpenWhisk
Apache OpenWhisk is billed as a generic serverless platform. Kubernetes is only one of several options for running containers in OpenWhisk; Mesos and Docker Compose are also supported. Nevertheless, Kubernetes is preferred because of its tooling for app deployment, especially Helm charts. IBM Cloud Functions is based on the OpenWhisk project, so it works with OpenWhisk CLI commands as well.
Unlike most of the other serverless frameworks for Kubernetes, OpenWhisk is written in Scala, not Go (the language of both Kubernetes and Docker). This is likely to be an issue only if you want to hack on OpenWhisk itself and have experience only with Go.
Most of the popular application runtime options come prepackaged with OpenWhisk: Java, Node.js, Python, Ruby, PHP, and .NET. Plus, many esoteric and cutting-edge options are also included: Scala, Ballerina, Swift, and Rust. Runtimes are just Docker containers, so it’s easy to provide your own.
One convenient OpenWhisk deployment feature is “zip actions.” Point a .zip archive of code and auxiliary files to OpenWhisk using the manifest file for a code package, and OpenWhisk will create an action from it. The OpenWhisk CLI also includes tools to transform a directory tree of code into such an archive. And a catalog of service packages makes it easy to plug your application into common third-party offerings like GitHub, Slack, Apache Kafka, or Jira.
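A zip action sketch (the file names and runtime kind are illustrative; check `wsk action create --help` for the runtime kinds your deployment supports):

```shell
# Bundle the code and its auxiliary files into one archive
zip -r action.zip index.js package.json lib/

# Create an action from the archive; --kind selects the runtime
wsk action create hello action.zip --kind nodejs:10

# Invoke the action and print just the result
wsk action invoke hello --result
```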