How AWS Lambda’s serverless functions work

A first-hand, step-by-step look at the ease and simplicity of Amazon's "function as a service" platform


Why would a developer use AWS Lambda? In a word, simplicity. AWS Lambda—and other event-driven, “function-as-a-service” platforms such as Microsoft Azure Functions, Google Cloud Functions, and IBM OpenWhisk—simplify development by abstracting away everything in the stack below the code. Developers write functions that respond to certain events (a form submission, a webhook, a row added to a database, etc.), upload their code, and pay only when that code executes.

In “How serverless changes application development” I covered the nuts and bolts of how a function-as-a-service (FaaS) runtime works and how that enables a serverless software architecture. Here, I take a more hands-on approach by walking through the creation of a simple function in AWS Lambda and then explaining some common design patterns that make this technology so powerful.

AWS Lambda, the original FaaS runtime, was first announced in 2014. The most common example used to explain how the event-driven, compute-on-demand platform works remains this one, the resizing of an image uploaded to Amazon S3:

[Screenshot: aws lambda 01 (Amazon)]

A picture gets uploaded to an S3 bucket, triggering an event that executes a Lambda function. Prior to the event being triggered, the function sits in a file on disk; no CPU resources are used (or billed) until the work arrives. Once the trigger fires, the function is loaded into the AWS Lambda runtime and passed information about the event. In this example, the function reads the image file from S3 into memory and creates thumbnails of varying sizes, which it then writes out to a second S3 bucket.
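In code, that flow amounts to roughly the following sketch. The bucket names and the `resize` function are stand-ins, and the S3 client is injected so the sketch can be exercised locally; a real implementation would use the aws-sdk and an image library:

```javascript
'use strict';

// Sketch of the canonical resize flow. The S3 client and resize function
// are injected so the handler can run without AWS; in a real deployment
// they would come from the aws-sdk and an image library. The thumbnail
// bucket name and widths are made up for illustration.
const THUMBNAIL_WIDTHS = [100, 200, 400];

function makeHandler(s3, resize) {
  return function handler(event, context, callback) {
    // The trigger passes the bucket and object key inside event.Records.
    const record = event.Records[0].s3;
    const srcBucket = record.bucket.name;
    const srcKey = decodeURIComponent(record.object.key.replace(/\+/g, ' '));

    // Read the uploaded image into memory...
    s3.getObject({ Bucket: srcBucket, Key: srcKey }, (err, data) => {
      if (err) return callback(err);

      // ...create thumbnails of varying sizes...
      const writes = THUMBNAIL_WIDTHS.map((width) => {
        const thumb = resize(data.Body, width);
        // ...and write each one out to a second bucket.
        return new Promise((resolve, reject) => {
          s3.putObject(
            {
              Bucket: srcBucket + '-thumbnails',
              Key: 'w' + width + '/' + srcKey,
              Body: thumb,
            },
            (e) => (e ? reject(e) : resolve())
          );
        });
      });

      Promise.all(writes).then(() => callback(null, 'done'), callback);
    });
  };
}
```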

Let’s take a closer look. I won’t go to the trouble of implementing the image resizing code, but I’ll create the skeleton of the Lambda code needed to implement this example, set up the trigger, and test my code. I’ll also dig into the CloudWatch logs to debug a little permissions issue I ran into.

Creating an AWS Lambda function and trigger

There are many ways to create a Lambda function, including plug-ins for IDEs like Eclipse and tools like the Serverless Framework. But the easiest way to start is to use one of the blueprints provided by AWS. If you go to the AWS Lambda console and click Create New Function, you get the following:

[Screenshot: aws lambda 02 (IDG)]

I’ll use Node.js to create a function that reacts to an S3 event, so I’ll choose Node.js 6.10 from the Select Runtime menu and enter S3 into the Filter dialog:

[Screenshot: aws lambda 03 (IDG)]

Clicking the s3-get-object blueprint takes you to the Configure Triggers page:

[Screenshot: aws lambda 04 (IDG)]

Here, I’ll set the bucket I’ll use to generate the events (infoworld.walkthrough) and set the event type to trigger whenever a new object is created in that bucket. I could further filter the events to fire only when certain prefixes or suffixes in object names are present, but I’ll skip that and click the check box to enable the trigger before pushing the Next button.
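Under the hood, enabling that trigger amounts to attaching a notification configuration to the bucket. Roughly, it looks like this (the account ID and region in the ARN are placeholders, and I've included a suffix filter of the kind I skipped, just to show its shape):

```javascript
// Approximate bucket notification configuration behind the trigger.
// The ARN's region and account ID are placeholders; the Filter block
// is optional and shows the prefix/suffix filtering skipped above.
const notificationConfiguration = {
  LambdaFunctionConfigurations: [
    {
      LambdaFunctionArn:
        'arn:aws:lambda:us-east-1:123456789012:function:infoworldWalkthrough',
      // Fire on any object-creation event in the bucket.
      Events: ['s3:ObjectCreated:*'],
      // Optional: restrict the trigger to, say, PNG uploads.
      Filter: { Key: { FilterRules: [{ Name: 'suffix', Value: '.png' }] } },
    },
  ],
};
```

With the AWS SDK for JavaScript, this object would be applied via `S3.putBucketNotificationConfiguration` with `Bucket: 'infoworld.walkthrough'`.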

That generates the skeleton of a function based on the blueprint:

[Screenshot: aws lambda 05 (IDG)]

I’ve given my function the name infoworldWalkthrough. Although I’ll look at the code more closely in a moment, you can see that it automatically retrieves information about the object that caused the trigger.

Further down that same configuration page, I need to set some permissions:

[Screenshot: aws lambda 06 (IDG)]

Every function must have an IAM role assigned to it so you can control its access to AWS resources. Here, I’ve asked the system to create a new role called infoworldRole and given that role read-only permissions to S3. If I were going to implement the full canonical example and generate the thumbnails, I’d also want to add S3 write permissions. However, because I will only be reading information about the triggered S3 object, the read-only permission should be sufficient.
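For reference, the combination I asked for amounts to a policy document along these lines (expressed here as a JavaScript object; this is an approximation of S3 read-only access plus the CloudWatch Logs permissions every Lambda role needs, not the exact document AWS generates):

```javascript
// Approximation of the permissions intended for infoworldRole: S3
// read-only access, plus the CloudWatch Logs actions Lambda needs in
// order to write its own logs. Not the exact document AWS generates.
const rolePolicy = {
  Version: '2012-10-17',
  Statement: [
    {
      // Read-only access to S3 objects and bucket listings.
      Effect: 'Allow',
      Action: ['s3:Get*', 's3:List*'],
      Resource: '*',
    },
    {
      // Allow the function to create and write its CloudWatch logs.
      Effect: 'Allow',
      Action: [
        'logs:CreateLogGroup',
        'logs:CreateLogStream',
        'logs:PutLogEvents',
      ],
      Resource: 'arn:aws:logs:*:*:*',
    },
  ],
};
```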

Finally, I need to pay close attention to some Advanced Settings:

[Screenshot: aws lambda 07 (IDG)]

The most important items here are in the top section, where I set the amount of memory and the execution timeout. Remember that the Lambda runtime draws on an assembly line of containers, which are preloaded with the various language runtimes. When an event triggers, Lambda loads your code into one of these containers and executes your function. The memory and timeout settings dictate how big that container will be and how much time the function will have to execute. For this tutorial, the defaults of 128MB and 3 seconds are fine. For other use cases, these settings are commonly changed.

Clicking Next takes me to a screen where I can review all of the settings I’ve entered so far:

[Screenshot: aws lambda 08 (IDG)]

Pressing the Create Function button takes my input and creates the function in AWS Lambda.

Examining AWS Lambda code

Here’s the default code that is created by the blueprint:

[Screenshot: aws lambda 09 (IDG)]

On lines 14 and 15, the Lambda function extracts the name of the bucket and the object name (also called the key) that caused the trigger. It then uses the S3 API to get more information about the object and (if that goes smoothly) outputs its content type. I haven’t done so here, but I could easily include the code that then reads in the object and generates the thumbnails accordingly.
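For readers who can't make out the screenshot, the blueprint's handler is approximately the following. This is a reconstruction, not verbatim; I've made the S3 client a parameter so the snippet can be exercised without AWS credentials (the blueprint itself creates one from the aws-sdk):

```javascript
'use strict';

// Approximate reconstruction of the s3-get-object blueprint. The S3
// client is injected for local testing; the real blueprint creates it:
//   const s3 = new (require('aws-sdk').S3)({ apiVersion: '2006-03-01' });
function makeGetObjectHandler(s3) {
  return (event, context, callback) => {
    // Extract the bucket name and object name (the "key") that caused
    // the trigger.
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(
      event.Records[0].s3.object.key.replace(/\+/g, ' ')
    );

    // Ask S3 for the object and report its content type.
    s3.getObject({ Bucket: bucket, Key: key }, (err, data) => {
      if (err) {
        console.log(err);
        callback(
          new Error('Error getting object ' + key + ' from bucket ' + bucket + '.')
        );
      } else {
        console.log('CONTENT TYPE:', data.ContentType);
        callback(null, data.ContentType);
      }
    });
  };
}
```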

Testing AWS Lambda code

Now I’ll go to the S3 console for the bucket in question, which in this case starts out completely empty:

[Screenshot: aws lambda 10 (IDG)]

And I’ll upload a PNG of the InfoWorld logo to that bucket:

[Screenshot: aws lambda 11 (IDG)]

And then … what exactly?

It’s not clear from the S3 console whether the function has executed, and if you go to the Lambda console, you’ll find a similar lack of information. However, every Lambda function logs information via CloudWatch, and checking CloudWatch reveals a new log group for my function:

[Screenshot: aws lambda 12 (IDG)]

And examining this log reveals that access to the S3 bucket was denied:

[Screenshot: aws lambda 13 (IDG)]

For some mysterious reason, when my code tried to read information about the S3 object, it was denied access to that data. But why? Didn’t I set up the IAM role so that my function had read-only permissions on my S3 buckets? Let’s double-check that in the IAM console:

[Screenshot: aws lambda 14 (IDG)]

Yes, in fact the role has a policy. So let’s take a look at that policy:

[Screenshot: aws lambda 15 (IDG)]

Oddly, I have permissions to create logs in CloudWatch, but there’s no mention of S3 anywhere. Somehow, my S3 read-only permissions policy didn’t take. Let’s fix that.

If I click the Attach Policy button, I’ll see this screen:

[Screenshot: aws lambda 16 (IDG)]

By selecting the AmazonS3FullAccess option and clicking the Attach Policy button, I should be giving my function all the permissions it needs.

Instead of testing the function by manually adding a PNG file to the S3 bucket as I did before, this time I’ll use the test hooks built into Lambda. Back to the homepage for my function:

[Screenshot: aws lambda 17 (IDG)]

Now if I click the Test button, I’ll get a dialog that lets me choose from among many sample events. I want to test an S3 put. I’ll need to edit the values in the S3 key and bucket name fields to correspond to the names of my image file and bucket, respectively:

[Screenshot: aws lambda 18 (IDG)]
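Stripped down to what matters for this walkthrough, the S3 put sample event is a JSON document along these lines (the file name here is a placeholder; the full sample carries many more fields than shown):

```javascript
// Trimmed S3 "put" test event. The real sample event includes many more
// fields (region, timestamps, requester info); this handler reads only
// the bucket name and key. The key value is a placeholder file name.
const testEvent = {
  Records: [
    {
      eventSource: 'aws:s3',
      eventName: 'ObjectCreated:Put',
      s3: {
        bucket: { name: 'infoworld.walkthrough' },
        object: { key: 'infoworld-logo.png' },
      },
    },
  ],
};
```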

There are all kinds of other fields in the event that could be set here, but because I know my code looks only at the key and the bucket name, I can ignore the rest. Clicking the Save and Test button will trigger the event and cause the function to execute. Unlike last time, when I triggered the event through the S3 console, this time I see live feedback. I also get the relevant portion of the CloudWatch log right there in the Lambda UI:

[Screenshot: aws lambda 19 (IDG)]

You can see that the code executed and identified the content type as expected.

IDE integrations and command-line tools like the Serverless Framework accelerate this process dramatically, but this walkthrough has shown the basic steps involved in creating a function with the right permissions, setting up the event, and debugging the code through CloudWatch, along with two ways of triggering the event so the function can be tested.

AWS Lambda design patterns

Let me wrap up by looking at some common Lambda design patterns.

Several design patterns have emerged for serverless application architectures. A session at Amazon's re:Invent conference titled Serverless Architectural Patterns and Best Practices highlighted four such patterns. Here, I’ll introduce my two favorites because they represent low-hanging fruit for any organization wanting to get started with serverless architectures.

First, it is easy to build web applications that use S3 and CloudFront for static content and API Gateway backed by Lambda and DynamoDB for dynamic needs:

[Screenshot: aws lambda 20 (Amazon)]

That basic pattern can be locked down tightly with security at multiple levels:

[Screenshot: aws lambda 21 (Amazon)]

The bulk of the content for a web application tends to be read-only for all users, and in this model it can be served cheaply from S3 and CloudFront. Access to authorized data can take advantage of IAM hooks into API Gateway, along with IAM roles for the individual AWS Lambda functions that interact with DynamoDB.
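The dynamic half of this pattern can be sketched as a Lambda handler behind API Gateway's proxy integration, reading from DynamoDB. This is a minimal sketch under stated assumptions: the `Items` table and path parameter are hypothetical, and the DocumentClient is injected so the handler can be exercised locally:

```javascript
'use strict';

// Sketch of the dynamic path: API Gateway (proxy integration) -> Lambda
// -> DynamoDB. The DocumentClient is injected for local testing; the
// table name and path parameter are invented for illustration.
function makeApiHandler(docClient) {
  return (event, context, callback) => {
    // The proxy integration passes URL path parameters through.
    const id = event.pathParameters.id;

    docClient.get({ TableName: 'Items', Key: { id } }, (err, data) => {
      if (err) {
        return callback(null, {
          statusCode: 500,
          body: JSON.stringify({ error: 'lookup failed' }),
        });
      }
      if (!data.Item) {
        return callback(null, {
          statusCode: 404,
          body: JSON.stringify({ error: 'not found' }),
        });
      }
      // The proxy integration expects { statusCode, headers, body }.
      callback(null, {
        statusCode: 200,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(data.Item),
      });
    });
  };
}
```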

My second favorite use case—one implemented by Capital One for its Cloud Custodian project—is to set up automation hooks using Lambda. In Capital One’s implementation, CloudWatch log events trigger Lambda functions that run checks against compliance and policy rules specific to Capital One. When potential issues are found, the function generates notifications through Amazon SNS, which can be configured to send SMS messages, emails, and other alerts to the right people so that policy violations get the attention they require.

[Screenshot: aws lambda 22 (Amazon)]

I like this automation pattern because it adds enormous value to an existing process without disturbing that process in any way. System compliance is automated without touching the systems being monitored. And like the previous pattern, it offers an easy way for an organization to get its feet wet with serverless.
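A stripped-down version of this automation pattern looks like the following: an event describing an infrastructure change comes in, the function checks it against a policy rule, and any violations are published to an SNS topic. The rule, event shape, and topic ARN are all invented for illustration; Cloud Custodian's real policies are far richer:

```javascript
'use strict';

// Sketch of the compliance-automation pattern. The event shape, policy
// rule, and topic ARN are invented for illustration; the SNS client is
// injected so the handler can be exercised locally.
const TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:compliance-alerts';

// Example rule: security groups must not open SSH to the world.
function violatesPolicy(change) {
  return change.port === 22 && change.cidr === '0.0.0.0/0';
}

function makeComplianceHandler(sns) {
  return (event, context, callback) => {
    const violations = event.changes.filter(violatesPolicy);
    if (violations.length === 0) return callback(null, 'compliant');

    // Publish violations to SNS, which fans out to SMS, email, etc.
    sns.publish(
      {
        TopicArn: TOPIC_ARN,
        Subject: 'Policy violation detected',
        Message: JSON.stringify(violations),
      },
      (err) => callback(err, 'notified')
    );
  };
}
```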

Thinking outside the server

As I’ve shown, setting up a Lambda function, configuring an event, applying security policies, and testing the results is a snap—even without an IDE or command-line tools. Microsoft, Google, and IBM offer similarly easy onboarding for their FaaS runtimes. Plus design patterns are emerging that will undoubtedly pave the way to even higher orders of tooling and reuse.

Serverless application architectures represent a very different mindset. The pieces of code are smaller, they execute only when triggered to reduce cost, and they are tied together through loosely coupled events instead of statically defined APIs. Serverless enables far more rapid development cycles than were possible previously, and with simple automation and web application design patterns to draw on, it is easy to get started with low risk.

Copyright © 2017 IDG Communications, Inc.