Lambda Four Ways, a Rosetta Stone for AWS


When I write Lambdas professionally, Python is my preferred language. It offers decent performance, a straightforward syntax, and high developer productivity. I’ve also used Java, both in demonstration apps and actual client work. But while I have some familiarity with the other languages supported by the platform, I’ve never used them for a Lambda. So, with some downtime, I decided to implement the same Lambda in four different languages (Python, Java, JavaScript, and Go) to get a better sense of their strengths and weaknesses.

The Lambda

The Lambda is simple: it provides access to individual DynamoDB records. It’s intended to be invoked from an API Gateway HTTP API, using the “proxy” invocation pattern: GET retrieves a record, POST creates a new record, and PUT updates an existing record (or creates a new record with a specific ID). The record’s ID, if needed, is provided as part of the path (e.g., https://endpoint/id); for PUT and POST, the request body must contain a JSON object.

In pseudocode, it looks something like this:

if invoked via POST:
    generate an ID
else:
    attempt to extract ID from URI, returning 400 error if unable

if invoked via GET:
    retrieve item
    convert to JSON and return as response body
else if invoked via PUT or POST:
    extract data from request body
    insert ID into data (needed for POST, superfluous for PUT)
    convert to DynamoDB format and write
    return item (with added ID) as a convenience to caller
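In Python, that routing might be sketched like this. This is only a sketch: an in-memory dict stands in for the DynamoDB table (the real handler would call boto3), while the event fields follow the API Gateway HTTP API v2 payload format.

```python
import json
import uuid

# In-memory stand-in for the DynamoDB table; the real handler would
# call get_item/put_item on a boto3 Table resource instead.
TABLE = {}

def handler(event, context=None):
    """Route an API Gateway "proxy" invocation event by HTTP method."""
    method = event["requestContext"]["http"]["method"]

    if method == "POST":
        item_id = str(uuid.uuid4())
    else:
        item_id = event.get("pathParameters", {}).get("id")
        if not item_id:
            return {"statusCode": 400, "body": "missing ID"}

    if method == "GET":
        item = TABLE.get(item_id)
        if item is None:
            return {"statusCode": 404, "body": "not found"}
        return {"statusCode": 200, "body": json.dumps(item)}
    elif method in ("PUT", "POST"):
        item = json.loads(event["body"])
        item["id"] = item_id       # needed for POST, superfluous for PUT
        TABLE[item_id] = item      # real code: convert to item format and put
        return {"statusCode": 200, "body": json.dumps(item)}
    else:
        return {"statusCode": 405, "body": "method not allowed"}
```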

For the real code, go here.

Quirks of the various languages

The pseudo-code is simple, and the DynamoDB APIs are the same regardless of the language that calls them. But the calling code has quirks that will trip you up, even in a Lambda as simple as this one.

Python

My first implementation was in Python. I think that Python is a very developer-friendly language, due to being interactive and having a “batteries included” philosophy: most of the time, you’ll find everything you need in the standard library. I typically develop Python code in a Jupyter notebook, where I can write and execute individual pieces of the code and then build it up into a full program.

Boto3 is the official SDK for Python. This library provides a very thin wrapper over the APIs for all services, and a high-level “resource” interface for DynamoDB and a few other services. For DynamoDB, the resource API provides a Table abstraction, with methods to get and put items; it’s easier to use than the low-level API, especially since it converts to and from the internal “item” format.

DynamoDB is a schemaless, document-style NoSQL database, in which each field in an item (record) keeps track of its own data type. When you work with the low-level API you have to deal with this yourself: each item is a dict, keyed by the attribute name, and each attribute is itself a dict, where the key is the type and the value is the attribute’s value.
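To illustrate, here is a hypothetical converter (names are mine, not boto3’s) that turns a low-level item into a plain dict, handling just the string and number type codes:

```python
from decimal import Decimal

# DynamoDB low-level "item" format: attribute name -> {type_code: value}
low_level_item = {
    "id":    {"S": "123"},        # S = string
    "name":  {"S": "example"},
    "count": {"N": "42"},         # N = number, transmitted as a string
}

def simplify(item):
    """Convert a low-level DynamoDB item into a plain dict."""
    result = {}
    for name, attr in item.items():
        (type_code, value), = attr.items()   # each attribute is a one-entry dict
        if type_code == "S":
            result[name] = value
        elif type_code == "N":
            result[name] = Decimal(value)    # boto3 also uses Decimal for numbers
        else:
            raise ValueError(f"unhandled type code: {type_code}")
    return result
```

The resource API does this for you (for all attribute types), via boto3’s TypeSerializer and TypeDeserializer classes.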

Unfortunately, while boto3 provides functions to translate between “item” format and a “normal” Python dict, the built-in json module can’t stringify those dicts because they contain unsupported types (for example, numbers are stored as Python Decimal values). I looked into alternative JSON libraries, but ended up writing a custom encoder for the built-in module. I don’t think that a real-world service endpoint would need to do this: it would use the data in the DynamoDB item, rather than simply converting it to or from JSON.
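The encoder itself is only a few lines. Here’s a sketch: it converts whole-number Decimal values to ints and everything else to floats (a production version might emit strings instead, to avoid losing precision):

```python
import json
from decimal import Decimal

class DecimalEncoder(json.JSONEncoder):
    """JSON encoder that handles the Decimal values found in DynamoDB items."""
    def default(self, o):
        if isinstance(o, Decimal):
            # Whole numbers become ints; everything else becomes a float,
            # which may lose precision for very large or very precise values.
            return int(o) if o == o.to_integral_value() else float(o)
        return super().default(o)

item = {"id": "123", "count": Decimal("42"), "price": Decimal("19.99")}
body = json.dumps(item, cls=DecimalEncoder)
```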

Java

I’ve been working with Java since 1999 but don’t often turn to it when developing a Lambda. Partly, this is because of cold start times (which I’ll delve into below), but it’s more about developer time. While Python is easy to develop interactively, and you can edit the Lambda’s source code in the AWS Console (to fix a bug or add a debug-time print()), Java requires a compile/deploy step. This takes maybe 15 seconds, but it’s enough time to break flow.

Unit tests, of course, are one way to minimize these interruptions: you create a test that exercises the handler, with a mock implementation of the AWS SDK, and you can then quickly edit and run those tests without leaving your IDE. But you’ll probably spend more time implementing the mock than the mainline code. Don’t get me wrong: I’m a fan of automated tests, and believe that mocks have a role in testing, but I want my tests to focus on business logic rather than plumbing. And there’s not much business logic in this example.

The aws-lambda-java-events library seems like a good reason to write your Lambdas in Java: this library has predefined classes representing typical invocation events, so you don’t need to dig through a Map<String,Object>. However, I tried using the APIGatewayV2HTTPEvent class, and quickly realized that it bore little relation to the actual invocation event (to the point that I’m not sure if it was even the correct class, name notwithstanding, but filed an issue nonetheless).

With that said, my Java implementation most closely resembles the pseudo-code written above.

Go

A decade or so ago, faced with seeming stagnation in the Java language, I gave some thought to which language I might want to specialize in next. Go seemed like the likely candidate, so I dove into learning it. I think I was a little too early: nobody was hiring Go developers at the time (at least not in Philadelphia). I have worked with it professionally since then, and while I don’t see a lot of teams using it, there are definitely more now than then.

One of the things that I don’t like about Go in general is its approach to error handling: it just seems so 1970s to return error codes. And I find the if err … blocks distracting. On the positive side, I’ve found that it leads to shallower call trees, because it’s just too painful to keep passing that error up the stack. And my program is more intentional about how it handles errors: in the other versions I let exceptions propagate, relying on Lambda to return a 500 response to the caller. In the Go implementation, I return my own response for every function that could error-out (even if, in practice, they never will).

I think that the AWS Go SDK is a little clumsy, and could definitely use some better documentation. One thing that I stumbled over was the attributevalue module, which provides methods to convert between DynamoDB items and Go maps. This module is referenced by the code snippets in the DynamoDB Developer Guide, but they don’t show the import statement, and it lives under the top-level “features” package, not in the DynamoDB client package. I’m sure that as I became more familiar with the Go SDK I wouldn’t be bothered by this, but it’s a definite barrier to entry.

Go also suffers from the same edit-compile-deploy cycle as Java. However, unlike Java, you’re deploying a statically linked executable, which means that performance should be (and is!) much better, especially when it comes to cold starts.

JavaScript

I think everybody knows a little JavaScript. I’m no different: I started playing around with it in the early 2000s, did some front-end code with jQuery, and worked on an early Node.js project. I’m definitely no expert, and I dreaded this implementation. So I was pleasantly surprised at just how smoothly it went.

There have been three versions of the JavaScript SDK, so searching the web for examples can be challenging. Worse is that some examples use the ES6 module conventions, while others use CommonJS conventions; if you’re not aware of these conventions, you might wonder why your code doesn’t work. Fortunately, the code snippets in the Developer Guide are complete and show both forms (and make it clear that AWS prefers ES6 conventions for version 3 of the SDK).

One thing that I found interesting about the v3 SDK was the separation between “bare-bones clients” and “aggregated clients” (doc). The latter appear to mimic the v2 calls, with a client that has discrete function calls for all operations, while the former uses a generic send() operation. The docs subtly push developers toward the bare-bones approach, with phrases like “less code imported” and “more performant,” and my timings certainly bear that out.

My one stumbling block with the JavaScript implementation is that there’s no built-in UUID generator. StackOverflow shows a bunch of roll-your-own implementations, and lots of comments saying that those implementations should use a cryptographic random number generator rather than Math.random() (they should!). Fortunately, I also found a reference to the uuid library, which seems very popular (100 million weekly downloads) and well supported. Experienced JavaScript programmers may be saying “well d’oh!” here.

Effect on Cold Starts

Cold starts – the first invocation of a Lambda in a new execution environment – are the bane of low-latency services. They are a particular issue for languages that dynamically load modules, which includes Java, Python, JavaScript, and .NET (I assume). Languages that build static executables, such as Go, are less affected by cold starts, but still take time to establish connections to network-based resources such as databases or AWS services.

The reason that cold starts are a problem is that they can happen at any time: whenever there are more concurrent requests than existing execution environments, Lambda spins up a new environment. And, unfortunately, your users will see that cold start as increased latency in their requests. This is different than a traditional server in an auto-scaling group, which isn’t placed into service until it passes health checks (and, we assume, has performed all of its initialization).

There are things that you can do to minimize cold start times. The easiest is giving extra memory to your Lambda, because the amount of provisioned memory directly controls the amount of CPU it receives: one virtual CPU per 1769 MB of provisioned RAM. You can minimize the amount of work that happens at startup, such as using the “bare-bones” clients in a JavaScript Lambda. You can perform initialization tasks concurrently, as long as your language has a robust threading model. Or you can pick a more performant language.
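A quick back-of-the-envelope calculation for the memory sizes used in my tests:

```python
MB_PER_VCPU = 1769  # Lambda grants one full vCPU per 1,769 MB of configured memory

for memory_mb in (1024, 2048):
    print(f"{memory_mb} MB -> {memory_mb / MB_PER_VCPU:.2f} vCPU")
```

This prints 0.58 vCPU for 1024 MB and 1.16 vCPU for 2048 MB, the fractions quoted below.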

To that end, the table below contains the results of my performance testing for each language. The numbers represent the average of PUT, POST, and GET requests; the difference between those requests was not enough to warrant breaking them out. And to show the effect of more CPU, I ran each test with 1024 MB and 2048 MB of provisioned memory (the former slightly more than half a vCPU, the latter slightly more than a whole vCPU). All timings are taken from the Lambda logs; real world numbers measured at the browser will be higher.

             1024 MB                   2048 MB
             Cold Start   Warm Start   Cold Start   Warm Start
Go             225.52 ms     4.95 ms     164.45 ms     4.93 ms
Java         2,047.99 ms    14.55 ms   1,658.38 ms    18.62 ms
JavaScript     439.64 ms    13.47 ms     435.94 ms    14.26 ms
Python         490.68 ms     7.13 ms     436.00 ms     6.90 ms

One of these things is definitely not like the others. Not only is Java an order of magnitude slower than Go for cold starts, its warm-start performance isn’t great either. As far as I can tell the cold start time is almost entirely due to classloading: creating the client and making the request loads over 5,000 classes. SnapStart is supposed to mitigate the cost of classloading, by saving the state of an initialized JVM. However, I found it to have minimal benefit, probably because nearly half of the classloading happened while making the request. Against that, SnapStart significantly increased deployment time.

Go was also somewhat disappointing, even though it was almost twice as fast as Python and JavaScript. I didn’t dig into its timings to see whether they were primarily program execution time (which includes opening a TLS connection to the DynamoDB endpoint), or base Lambda startup time. I suspect the former: in my experience, a “do-nothing” Lambda takes around 50 milliseconds to run.


Whether cold starts are a problem largely depends on your deployment. If you have high scale (thousands of requests per minute), then they’ll disappear into background noise. Similarly, cold starts are a non-issue for long-running requests (tens of seconds or minutes per request), such as those in many data pipelines. It’s when you have low scale and picky clients that cold starts become noticeable (in other words, the beta users for your new SaaS product).

Python remains my go-to, simply because of familiarity. But JavaScript surprised me: not only was it quite performant, it was easy to implement; no more callback hell. However, I’m still not ready to implement an app-server with any of them.