Getting Started with Lambda Container Images

Lambda Container Images were announced at re:Invent 2020, providing a new way to build and deploy Lambda functions. They arrived just in time to solve an annoying build problem for me, so they got my attention. And there weren’t any tutorials floating around when I first Googled, so I figured it was worth writing one.

But first, let’s get one thing out of the way: Lambda Container Images are not a way to run arbitrary Docker images within the Lambda execution environment. Well, that’s not strictly true: I uploaded the Docker hello-world image to my ECR registry and ran it, although Lambda complained that it exited without a valid return code. But if you just want to run Docker images without managing servers, look at ECS or AWS Batch; don’t try to shoehorn them into Lambda.

To operate “as a Lambda,” your container must interact with the Lambda Runtime API: an HTTP endpoint that your container’s code polls to be notified of invocation events. With normal Lambdas you don’t see this polling loop: it’s managed by the framework, which calls your deployed code when it finds an event. As long as your code interacts with the Runtime API, you’ve been able to deploy arbitrary code into a Lambda runtime since at least early 2018; there’s an awslabs project that shows you how to develop a C++ Lambda. I’m sure that solved some class of problems, but I never ran into one of them.
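That polling loop is simple enough to sketch. Here’s a minimal, illustrative version of what the framework does for you, using the Runtime API’s documented paths; `poll_once` and the handler wiring are my own names, not part of any AWS library:

```python
import json
import urllib.request

# API version from the Lambda Runtime API documentation; the host:port is
# provided to the container via the AWS_LAMBDA_RUNTIME_API environment variable
RUNTIME_API_VERSION = "2018-06-01"

def next_invocation_url(api_host):
    return f"http://{api_host}/{RUNTIME_API_VERSION}/runtime/invocation/next"

def response_url(api_host, request_id):
    return f"http://{api_host}/{RUNTIME_API_VERSION}/runtime/invocation/{request_id}/response"

def poll_once(api_host, handler):
    # Block until the Runtime API hands us an event, dispatch it to the
    # handler, then POST the result back
    with urllib.request.urlopen(next_invocation_url(api_host)) as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())
    result = handler(event, None)
    req = urllib.request.Request(
        response_url(api_host, request_id),
        data=json.dumps(result).encode("utf-8"),
        method="POST",
    )
    urllib.request.urlopen(req)
```

A real runtime wraps `poll_once` in an infinite loop, builds a proper context object, and reports failures to a separate error endpoint; I’ve omitted all of that here.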

Lambda Container Images let you deploy arbitrary code into a Lambda runtime, again with the caveat that it has to interact with the Runtime API. It’s a little easier, and if you use one of the pre-built images you can continue focusing on your application and not the polling loop. But I don’t think I’d welcome it quite so warmly if all it let me do was write a Lambda in C++.

For me, the big benefit of Lambda Container Images is that it lets me package dependencies with my Lambda code. This is best explained with an example.

If you use Python and Postgres, chances are good that you use the psycopg2 library. This is a wrapper around the Postgres native libraries, but unfortunately, those native libraries aren’t available in the default Lambda environment. One work-around is to depend on the psycopg2-binary package, which includes the native libraries, but the binary you get depends on the version of Linux that you’re using. If you build on Ubuntu, you’ll get a version that’s incompatible with the Amazon Linux that runs your Lambdas.

There are, of course, hoops that you can jump through to make this work. But with a Lambda Container Image, it becomes a simple matter of installing the library from within your Dockerfile. The rest of this post walks you through the steps. If you’d like to do it yourself, the code is on GitHub.

The Lambda Function

Here’s my example Lambda function:

import psycopg2

def lambda_handler(event, context):
    print("lambda handler running")

That’s right, the function itself does nothing other than to report the fact that it was invoked. But as part of its startup, it tries to import the psycopg2 module. If you just paste this into the Lambda console, this is the error you’ll see in the logs:

[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'psycopg2'

OK, that’s expected: psycopg2 is a third-party module, not part of the standard Lambda distribution. Like any third-party module, we have to package it into our deployment ZIP. Here’s how I would do that running on Ubuntu (actually, I’d use a requirements.txt, but this conveys the process better):

pip install --system --target build psycopg2-binary
cp -r src/* build
cd build ; zip -r ../lambda.zip .    # archive name is arbitrary

So, everything should be good. I upload to my Lambda function, run it, and …

[ERROR] Runtime.ImportModuleError: Unable to import module 'handler': No module named 'psycopg2._psycopg'

This message looks like the earlier one, but is actually different: it references psycopg2._psycopg as the missing module. If you look in the build folder, you can see that this module does exist, implemented as a Linux shared object (library). However, on Lambda, this version of the module isn’t usable (because, I believe, it has a missing dependency).

OK, that was disappointing. At this point, before Lambda Container Images, I would have had several options:

  • I could spin up an AWS Linux instance on EC2 and run my builds there. That would at least get me the right version of the psycopg2 binary, but it’s inconvenient, especially if I already have a CI/CD pipeline running somewhere else.
  • I could, instead, run pip once on that EC2 instance, and ZIP up the results, and store it as a Lambda layer. This is also inconvenient, since I have to do it every time the library changes, and update all of my Lambdas to use the latest version of the layer. But it does have the benefit that my actual deployment bundles are much smaller.
  • I could use the lambci Docker image, and run my build within it. This has, apparently, become a very popular option, but comes with its own issues (accidentally making root the owner of your development environment being one of them).

Now, with Lambda Container Images, I can instead create a Docker image that includes all of my dependencies.

Building a Lambda Container Image

AWS provides pre-built images for each of the standard runtimes. These images include all of the libraries that you’ll find in a real Lambda environment, as well as the code to interact with the Lambda Runtime API. To use one of these images, you simply move your build commands into the Dockerfile:

FROM amazon/aws-lambda-python:3.8

COPY requirements.txt /tmp
RUN pip install -r /tmp/requirements.txt

COPY src/ /var/task

CMD [ "handler.lambda_handler" ]

To build an image from this Dockerfile:

docker build -t lambda_container_example:latest .

Before moving on to actually using this image, I want to talk a little bit about how the AWS images are structured, and why CMD has the value that it does.

When you build a Docker image, you have two ways to specify what happens when it runs: CMD and ENTRYPOINT. If you read the documentation, these seem interchangeable: both can specify a program to run when the container starts. However, there is a difference: if you specify ENTRYPOINT, then that program is run on startup and CMD (if present) provides it with command-line arguments.

In the case of Lambda Container Images, ENTRYPOINT is defined by the base image and specifies the “bootstrap” program; CMD must be defined by your Dockerfile, and specifies the fully-qualified name of your handler function.


Remember how I kept mentioning that your Lambda has to interact with the Runtime API? The pre-built containers come with an emulator for that API, so that you can run the container locally:

docker run -it --rm -p 9000:8080 \
       lambda_container_example:latest

This starts a web-server inside the container, exposed on port 9000 (pick a different port if you like). To invoke the Lambda, switch to a different window and use curl (or your tool of choice) to POST a request to that server:

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" \
     -d '{"test": "value"}'

The URL is fixed; as you can see, it doesn’t involve your Lambda’s name. The -d option provides the invocation event, which must be a JSON string. My example doesn’t use the invocation event, but I wanted to show it for completeness. In the real world, your Lambdas will probably expect a more complex event; in this case, use -d @filename to load the event data from a file.
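If you’d rather script this than remember the curl incantation, the same request is easy to make from Python; `invoke_local` is just a hypothetical helper name:

```python
import json
import urllib.request

def emulator_url(port=9000):
    # The path is fixed by the emulator; only the port depends on how you
    # ran `docker run`
    return f"http://localhost:{port}/2015-03-31/functions/function/invocations"

def invoke_local(event, port=9000):
    # POST the invocation event (any JSON-serializable object) and return
    # the handler's response body
    req = urllib.request.Request(
        emulator_url(port),
        data=json.dumps(event).encode("utf-8"),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

With the container running, `invoke_local({"test": "value"})` is equivalent to the curl command above.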

Switching back to the window where the container is running, you’ll see the familiar Lambda log messages, along with the message that my handler wrote:

START RequestId: f0c58cc7-9e91-4f00-86a8-c728ced724b5 Version: $LATEST
lambda handler running
END RequestId: f0c58cc7-9e91-4f00-86a8-c728ced724b5
REPORT RequestId: f0c58cc7-9e91-4f00-86a8-c728ced724b5	Init Duration: 0.38 ms	Duration: 118.23 ms	Billed Duration: 200 ms	Memory Size: 3008 MB	Max Memory Used: 3008 MB	

While this is a useful way to verify that your Lambda properly handles its invocation event, the turnaround time from building a Docker file is still much too long to make this a viable method for testing during development. Unit tests, with or without mock objects, remain the best way to test your Lambda’s logic.
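For example, here’s a sketch of such a unit test. I’ve inlined a copy of the example handler so the snippet is self-contained; in a real project you’d import it from src/, with psycopg2 stubbed out so the test runs without the Postgres native libraries:

```python
import sys
import unittest
from unittest import mock

# Inline copy of the example handler; in practice this lives in src/
def lambda_handler(event, context):
    print("lambda handler running")

class LambdaHandlerTest(unittest.TestCase):
    def test_handler_accepts_event(self):
        # Stub psycopg2 so that importing the real handler module wouldn't
        # require the native Postgres libraries on the build machine
        with mock.patch.dict(sys.modules, {"psycopg2": mock.MagicMock()}):
            self.assertIsNone(lambda_handler({"test": "value"}, None))
```

Run it with `python -m unittest`; no Docker build required.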


To deploy your containerized Lambda, you first need to upload it to a Docker registry. And not just any registry: as of this writing, you must upload to an ECR registry in the same account where the Lambda will run.

This means that if you have a multi-account deployment strategy (dev/qa/prod), you’ll need to tag and push the image for each of those accounts, or use private image replication. This may cause some agita if you’re in a regulated environment. Hopefully it will be sufficient to show that the images in those registries all have the same SHA (but I am not a lawyer, so don’t point your auditors to this blog post without consulting one first).

When you configure your ECR repository, you must add a repository policy that grants the Lambda service principal (lambda.amazonaws.com) the following IAM permissions:

  • ecr:BatchGetImage
  • ecr:DeleteRepositoryPolicy
  • ecr:GetDownloadUrlForLayer
  • ecr:GetRepositoryPolicy
  • ecr:SetRepositoryPolicy

If you create your Lambda via the Console wizard, it will create this policy for you (named LambdaECRImageRetrievalPolicy). If you use CloudFormation or Terraform, you must do it yourself.
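Here’s a sketch of that policy document as a Python dict: the actions mirror the permission list above, and I’m assuming the same Sid the Console uses; serialize it with json.dumps when passing it to ECR’s set-repository-policy API.

```python
import json

def lambda_ecr_repository_policy():
    # Grants the Lambda service principal the permissions listed above
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LambdaECRImageRetrievalPolicy",
                "Effect": "Allow",
                "Principal": {"Service": "lambda.amazonaws.com"},
                "Action": [
                    "ecr:BatchGetImage",
                    "ecr:DeleteRepositoryPolicy",
                    "ecr:GetDownloadUrlForLayer",
                    "ecr:GetRepositoryPolicy",
                    "ecr:SetRepositoryPolicy",
                ],
            }
        ],
    }

policy_json = json.dumps(lambda_ecr_repository_policy(), indent=2)
```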

The Console will also show you the commands needed to tag and push your image. Or you can use the commands below, replacing the account number (123456789012), region (us-east-1), and repository name (lambda_container_example) with those corresponding to your repository. Note that you will need to execute these commands with the credentials of an AWS user that has permission to push to the repository.

aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

docker tag lambda_container_example:latest \
    123456789012.dkr.ecr.us-east-1.amazonaws.com/lambda_container_example:latest

docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/lambda_container_example:latest

Once you’ve pushed the container to a registry, you can configure a Lambda to use it. If you’re using the Console to create your Lambda, this is simple: choose the “Container image” wizard, and it will ask you for the information that you need. However, if you use CloudFormation or Terraform, you’ll need to use some different attributes than for a “traditional” Lambda:

  • There’s a new attribute, PackageType, that must be set to “Image” for Lambdas running a container. It may be omitted or set to “Zip” for “traditional” Lambdas.
  • The Runtime and Handler attributes are no longer used.
  • There’s a new sub-attribute of Code, ImageUri, which you use instead of the S3 and ZipFile attributes.
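The same attributes apply if you create the function programmatically. Here’s a sketch of the boto3 arguments; the helper name, the memory and timeout values, and the role ARN are mine, and the ImageUri is a placeholder:

```python
def container_function_config(name, image_uri, role_arn):
    # Arguments for lambda.create_function() when deploying a container
    # image: PackageType is "Image", Code holds ImageUri, and there is no
    # Runtime or Handler attribute
    return {
        "FunctionName": name,
        "Role": role_arn,
        "PackageType": "Image",
        "Code": {"ImageUri": image_uri},
        "MemorySize": 256,
        "Timeout": 30,
    }

config = container_function_config(
    "container-example",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/lambda_container_example:latest",
    "arn:aws:iam::123456789012:role/ExampleLambdaRole",
)
```

You’d pass this to boto3 as `boto3.client("lambda").create_function(**config)`.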

Once you’ve created the Lambda, you can create a dummy test event in the console and invoke it. Assuming that you’re using the example, you’ll see messages like these in your Lambda’s CloudWatch log group:

START RequestId: c20db924-e7d6-4cab-b373-f42e3a92be09 Version: $LATEST
lambda handler running
END RequestId: c20db924-e7d6-4cab-b373-f42e3a92be09
REPORT RequestId: c20db924-e7d6-4cab-b373-f42e3a92be09	Duration: 1.19 ms	Billed Duration: 1056 ms	Memory Size: 256 MB	Max Memory Used: 56 MB	Init Duration: 1054.52 ms	

Closing Thoughts

To wrap up, I want to reiterate that I believe Lambda Container Images have some very limited use cases. They don’t let you run arbitrary Docker containers, and they don’t lift the 15 minute execution timeout that Lambda imposes. While they can be used to let you write Lambdas in alternate languages, remember that you have to interact with the Lambda Runtime API; you’re no longer able to focus on your business logic and ignore the execution environment.

One other problem with Lambda Container Images is their performance. If you look at the log message above, you’ll see that it took my container over a second to initialize, which is far too long for latency-sensitive applications like web API servers. Unlike Java Lambda initialization times, this does not appear to be affected by the amount of memory configured: I saw similar initialization times with allocations up to 4096 MB. By comparison, a “traditional” Python Lambda takes just over 100 ms to initialize, even with only 128 MB of memory (which provides roughly 1/16th of a virtual CPU).

So, on balance, probably not something that I’d turn to on a regular basis.

There is, however, one very interesting use-case that the pre-built AWS images open up: using the image to build a Lambda layer. I’ll show an example of that in my next post.