What is serverless? Serverless computing explained

Simple functions in isolation make development easier, while event-driven execution makes operations cheaper

Developers spend countless hours solving business problems with code. Then it’s the ops team’s turn to spend countless hours, first figuring out how to get the code that developers write up and running on whatever computers are available, and second making sure those computers operate smoothly. The second part truly is a never-ending task. Why not leave that part to someone else?

A lot of innovation in IT over the past two decades—virtual machines, cloud computing, containers—has been focused on making sure you don’t have to think much about the underlying physical machine that your code runs on. Serverless computing is an increasingly popular paradigm that takes this desire to its logical conclusion: With serverless computing, you don’t have to know anything about the hardware or OS your code runs on, as it’s all taken care of for you by a service provider.

What is serverless computing?

Serverless computing is an execution model for the cloud in which a cloud provider dynamically allocates—and then charges the user for—only the compute resources and storage needed to execute a particular piece of code. Naturally, there are still servers involved, but their provisioning and maintenance are entirely taken care of by the provider. Chris Munns, Amazon’s advocate for serverless, said at a 2017 conference that, from the perspective of the team writing and deploying the code, “there’s no servers to manage or provision at all. This includes nothing that would be bare metal, nothing that’s virtual, nothing that’s a container—anything that involves you managing a host, patching a host, or dealing with anything on an operating system level, is not something you should have to do in the serverless world.” 

As developer Mike Roberts explains, the term was once used for so-called back-end-as-a-service scenarios, where a mobile app would connect to a back-end server hosted entirely in the cloud. But today when people talk about serverless computing, or a serverless architecture, they mean function-as-a-service offerings, in which a customer writes code that only tackles business logic and uploads it to a provider. That provider takes care of all hardware provisioning, virtual machine and container management, and even tasks like multithreading that often are built into application code.

Serverless functions are event-driven, meaning the code is invoked only when triggered by a request. The provider charges only for compute time used by that execution, rather than a flat monthly fee for maintaining a physical or virtual server. These functions can be connected together to create a processing pipeline, or they can serve as components of a larger application, interacting with other code running in containers or on conventional servers.
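The shape of such a function is simple: the provider calls it with an event payload, it does its work, and it returns a response. A minimal sketch in the style of an AWS Lambda Python handler (the event fields here are made up for illustration):

```python
import json

def handler(event, context):
    """A minimal event-driven function: it runs only when the provider
    invokes it with an event payload, returns a response, and holds no
    state between calls."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Normally the platform supplies the event; here we invoke it directly:
print(handler({"name": "serverless"}, None))
```

In production, the event would come from a trigger such as an HTTP request or a queue message, and the provider would tear the function down once it returns.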

Benefits and drawbacks of serverless computing

From that description, two of the biggest benefits of serverless computing should be clear: developers can focus on the business goals of the code they write, rather than on infrastructural questions; and organizations only pay for the compute resources they actually use in a very granular fashion, rather than buying physical hardware or renting cloud instances that mostly sit idle.

As Bernard Golden points out, that latter point is of particular benefit to event-driven applications. For instance, you might have an application that is idle much of the time but under certain conditions must handle many event requests at once. Or you might have an application that processes data sent from IoT devices with limited or intermittent Internet connectivity. In both cases, the traditional approach would require provisioning a beefy server that could handle peak work capacities—but that server would be underused most of the time. With a serverless architecture, you’d only pay for the server resources you actually use. Serverless computing would also be good for specific kinds of batch processing. One of the canonical examples of a serverless architecture use case is a service that uploads and processes a series of individual image files and sends them along to another part of the application.
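That image-pipeline example typically starts with a function triggered when files land in object storage. A sketch of the first stage, assuming an AWS S3-style notification event (the actual image processing is out of scope here):

```python
def process_image_event(event):
    """Extract bucket/key pairs from an S3-style notification event so
    each uploaded file can be handed to the next stage of the pipeline."""
    files = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        files.append({
            "bucket": s3.get("bucket", {}).get("name"),
            "key": s3.get("object", {}).get("key"),
        })
    return files

# A sample event in the shape the storage trigger would deliver:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "cat.jpg"}}},
        {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "dog.jpg"}}},
    ]
}
```

Because each upload fires its own invocation, a burst of a thousand images simply means a thousand short-lived function runs, with no idle server in between.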

Perhaps the most obvious downside of serverless functions is that they’re intentionally ephemeral and, as AlexSoft puts it, “unsuitable for long-term tasks.” Most serverless providers won’t let your code execute for more than a few minutes, and when you spin up a function, it doesn’t retain any stateful data from previously run instances. A related problem is that serverless code can take as long as several seconds to spin up—not a problem for many use cases, but if your application requires low latency, be warned.
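The statelessness caveat is worth seeing concretely. Module-level state survives only while a given function instance stays "warm"; a cold start recreates everything. A local simulation of that behavior (the `cold_start` helper is a stand-in for the provider discarding the instance):

```python
# Module-level state is rebuilt on every cold start; a warm instance
# *may* reuse it, but the platform makes no guarantee.
call_count = 0

def counter_handler(event, context):
    global call_count
    call_count += 1          # survives only while this instance is warm
    return {"invocation": call_count}

def cold_start():
    """Simulate the provider discarding the instance: state resets."""
    global call_count
    call_count = 0
```

Code that relies on `call_count` climbing forever would silently break in production, which is why persistent data belongs in an external store instead.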

Many of the other downsides, as pointed out by Rohit Akiwatkar and Gary Arora, have to do with vendor lock-in. Although there are open source options available, the serverless market is dominated by the big commercial cloud providers, as we’ll discuss in a moment. That means developers often end up using tooling from their vendors, which makes it hard to switch if they grow dissatisfied. And because so much of serverless computing takes place, by definition, on the vendor’s infrastructure, it can be difficult to integrate serverless code into in-house development and testing pipelines.

Serverless vendors: AWS Lambda, Azure Functions, and Google Cloud Functions

The modern age of serverless computing began with the launch of AWS Lambda, a platform based on Amazon’s cloud service, in 2014. Microsoft followed suit with Azure Functions in 2016. Google Cloud Functions, which had been in beta since 2017, finally reached production status in July 2018. The three services have slightly different limitations, advantages, supported languages, and ways of doing things. Rohit Akiwatkar has a good and detailed rundown on the distinctions among the three. Also in the running is IBM Cloud Functions, which is based on the open source Apache OpenWhisk platform.

Among all of the serverless computing platforms, AWS Lambda is the most prominent, and obviously has had the most time to evolve and mature. InfoWorld has coverage of updates and new features added to AWS Lambda over the past year.

Serverless stacks

As is the case in many software realms, the serverless world has seen the evolution of stacks of software, which bring together different components needed to build a serverless application. Each stack consists of a programming language that you’re going to write the code in, an application framework that provides a structure for your code, and a set of triggers that the platform will understand and use to initiate code execution.

While you can mix and match different specific offerings in each of these categories, there are limitations depending on which vendor you use, with some overlap. For instance, for languages, you can use Node.js, Java, Go, C#, and Python on AWS Lambda, but only JavaScript, C#, and F# work natively on Azure Functions. When it comes to triggers, AWS Lambda has the longest list, but many of them are specific to the AWS platform, like Amazon Simple Email Service and AWS CodeCommit; Google Cloud Functions, meanwhile, can be triggered by generic HTTP requests. Paul Jaworski has an in-depth look at the stacks for each of the big three offerings.
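The generic HTTP trigger is the easiest to picture. A sketch in the style of a Google Cloud Functions Python HTTP function, where the platform passes in a request object and the return value becomes the response body (on the real platform that object is a Flask request; a simple stand-in with an `args` dict is enough to show the shape):

```python
def http_echo(request):
    """HTTP-triggered function: read a query parameter from the request
    object and return a string, which the platform serves as the response."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"

class FakeRequest:
    """Local stand-in so the function can be exercised without a server."""
    def __init__(self, args):
        self.args = args
```

A platform-specific trigger like an S3 upload or a CodeCommit push works the same way; only the shape of the incoming event changes.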

Serverless frameworks

It’s worth lingering a bit on the framework part of the equation, since that will define much about how you end up building your application. Amazon has its own native offering, the open source Serverless Application Model (SAM), but there are others as well, most of which are cross-platform and also open source. One of the most popular is called, rather generically, Serverless, and emphasizes that it provides the same experience on each supported platform: AWS Lambda, Azure Functions, Google Cloud Functions, and IBM OpenWhisk. Another popular offering is Apex, which can help bring some languages otherwise unavailable on certain providers into the fray.
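With the Serverless framework, for example, the function, runtime, and trigger are declared together in a single `serverless.yml` file. A sketch of what that looks like for an AWS deployment; the service, handler, and bucket names here are illustrative, not prescribed:

```yaml
# serverless.yml -- illustrative sketch; names are made up
service: image-pipeline

provider:
  name: aws          # the framework also targets azure, google, and openwhisk
  runtime: python3.9

functions:
  resize:
    handler: handler.process   # module.function containing the code
    events:
      - s3:
          bucket: uploads
          event: s3:ObjectCreated:*
```

A single command then packages the code and wires up the trigger, which is most of what the framework buys you over raw vendor tooling.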

Serverless databases

As we noted above, one quirk of working with serverless code is that it has no persistent state, which means that the values of local variables don’t persist across instantiations. Any persistent data your code needs to access must be stored elsewhere, and the triggers available in the stacks for the major vendors all include databases that your functions can interact with.
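The pattern is always the same: read the previous state from the external store at the start of the invocation, write it back before returning. A sketch of that round trip, with a plain dict standing in for a real key-value database such as DynamoDB or Cloud Firestore:

```python
# A dict stands in for an external key-value store; in production this
# would be a database client, since function instances keep no state.
store = {}

def record_visit(event, context):
    """Count visits per user by persisting the tally outside the function."""
    user = event["user"]
    count = store.get(user, 0) + 1   # read previous state from the store
    store[user] = count              # write it back before returning
    return {"user": user, "visits": count}
```

Swapping the dict for a database client changes the two marked lines and nothing else, which is why serverless databases pair so naturally with function-as-a-service code.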

Some of these databases are themselves referred to as serverless. This means that they behave much like other serverless functions we’ve discussed in this article, with the obvious exception that data is stored indefinitely. But much of the management overhead involved in provisioning and maintaining a database is cast aside. As developer Jeremy Daly puts it, “All you need to do is configure a cluster, and then all the maintenance, patching, backups, replication, and scaling are handled automatically for you.” As with function-as-a-service offerings, you only pay for the compute time you actually use, and resources are spun up and down as needed to match demand.

The big three serverless providers each offer their own serverless databases: Amazon has Aurora Serverless and DynamoDB, Microsoft has Azure Cosmos DB, and Google has Cloud Firestore. These aren’t the only databases available, though. Nemanja Novkovic has information on more offerings.

Serverless computing and Kubernetes

Containers help power serverless technology under the hood, but the overhead of managing them is taken care of by the vendor and thus invisible to the user. Many see serverless computing as a way to get many of the advantages of containerized microservices without having to deal with their complexity, and are even beginning to talk about a post-container world.

In truth, containers and serverless computing will almost certainly coexist for many years to come, and in fact serverless functions can exist in the same application as containerized microservices. Kubernetes, the most popular container orchestration platform, can manage serverless infrastructure as well. Indeed, with Kubernetes, you can integrate different types of services on a single cluster.  

Serverless offline

You might find the prospect of getting started with serverless computing a little intimidating, since it seems like you’d need to sign up with a vendor to play around and see how it works. But fear not: There are ways to run serverless code offline on your own local hardware. For instance, the AWS SAM provides a Local feature that allows you to test Lambda code offline.  And if you’re using the Serverless application framework, check out serverless-offline, a plug-in that lets you run code locally. Happy experimenting!