Managing Your AWS Credentials

After my last post, a colleague pinged me with “I thought you used environment variables to manage credentials, so why didn’t you show that?” The short answer is that it would detract from the points I was trying to make. The long answer is rooted in history and not-quite-implemented features, so rates its own post.

It starts with a review of AWS access keys. Each key has two parts: the “access key ID” is a user ID and may be shared (for example, when you create a signed S3 URL), while the “secret access key” is a password and should never leave your control. You only need access keys if you’re using the AWS SDK, either in your own programs or when running any of the AWS command-line tools. If you just use the AWS Console, you don’t need them.

When writing a program, you might think to hardcode access keys, but this is a really bad idea: anyone with those access keys has the same level of access that you have. If you check those credentials into a publicly-available source repository, someone will find them and use them. Even if you use private repositories, you’re one data breach away from becoming a cryptocurrency miner. And if you’re checking in your root access keys, you may lose your business.

Have I scared you? No? Click on those links, or Google for “AWS access key breach.” Keep reading until you’re scared. Or better, change any access keys that you’ve already committed to source control.

So, given that hardcoding is a bad idea, how should you manage your keys? AWS gives you three options, which are checked in order:

  • Environment variables, specifically AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (along with AWS_SESSION_TOKEN for assumed-role keys, and AWS_DEFAULT_REGION for the region).
  • The $HOME/.aws/config and $HOME/.aws/credentials files, managed by aws configure.
  • An “execution role” associated with an EC2 instance, ECS task, or Lambda function.
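
Whichever source ends up supplying the keys, you can check which identity they resolve to with a quick CLI call:

aws sts get-caller-identity

This prints the account number and the ARN of the user (or assumed role) behind the current credentials, which is handy when you’re juggling several of the options above.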

For a deployed application, the third option is the way to go. You can specify exactly the permissions that the application needs, without creating an application-specific user and figuring out how to make the credentials available. It’s also more secure: while you can log into the EC2 instance and retrieve credentials from the role, they will quickly expire.
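
If you’re curious, you can see those short-lived credentials from inside an EC2 instance by querying the instance metadata service (the role name in the second command is a placeholder for whatever role is attached to your instance):

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ExampleInstanceRole

The first call returns the name of the attached role; the second returns a JSON document containing an access key, secret key, session token, and, importantly, an expiration timestamp.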

But we’re not talking about deployments; we’re talking about users, who have long-lived credentials and need some place to store them. The AWS CLI documentation instructs you to use aws configure, so we’ll start there. As mentioned above, this produces two output files (locations are for Linux/macOS; check the docs for Windows):

  • $HOME/.aws/config stores configuration information as profiles. Each profile is named, and can specify a range of configuration information (including the region where resources will be created, the default output format for the CLI, and even credentials).
  • $HOME/.aws/credentials just stores credentials: access and secret keys. Each set of credentials is named, and must be associated with a profile in the config file. In practice, you can specify more information in this file, including assumed role configurations, but doing so is explicitly disallowed by the docs.
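
For reference, here’s roughly what a matching pair of these files looks like, using the “alfie” profile that appears in the examples below (keys elided):

# $HOME/.aws/config
[default]
region = us-east-1
output = json

[profile alfie]
region = us-west-2

# $HOME/.aws/credentials
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

[alfie]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Note that the config file prefixes non-default profile names with “profile ”, while the credentials file does not; keep that quirk in mind, because it comes back when we get to the Java SDK.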

There are two ways to reference a profile. The first is to provide it when invoking the CLI:

aws s3 ls --profile alfie

The problem with this is that it won’t work with any applications that you write, unless you’ve added a similar invocation argument. A better solution, assuming that you’re not just using the CLI, is the AWS_PROFILE environment variable:

export AWS_PROFILE=alfie

aws s3 ls

This works for the CLI, as well as for programs written in Python or Node.js. However, it doesn’t work with programs written in Java:

2019-09-17 11:53:28,091 [main] WARN  BasicProfileConfigLoader - Your profile name includes a 'profile ' prefix. This is considered part of the profile name in the Java SDK, so you will need to include this prefix in your profile name when you reference this profile from your Java code.

You’ll get this warning whenever you have a config file that contains a non-default profile, whether or not you’re using that profile. Worse, I haven’t been able to figure out how to make the Java SDK actually respect the profile name, and have seen a wide range of resulting error messages.

Since I work extensively with Java, this has kept me from using the configuration files. Instead, I define environment variables in a “sourceable” shell script (and since I work with multiple clients, I have several such scripts, each named after the client). These scripts all have 0600 permissions, so they’re readable only by me (and the superuser!). Note that I define both AWS_DEFAULT_REGION, which is used by the CLI and most of the SDKs, and AWS_REGION, which the Java SDK needs.

 export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
 export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

 export AWS_DEFAULT_REGION=us-east-1
 export AWS_REGION=$AWS_DEFAULT_REGION
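
Using one of these scripts is just a matter of locking down its permissions and sourcing it into the current shell (the filename here is made up; as noted above, I name each script after the client):

chmod 600 client-acme.sh
source client-acme.sh
aws s3 ls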

In addition to playing nicely with the Java SDK, managing credentials as environment variables can be more secure. As I hinted at earlier, neither of these methods, as described, is terribly secure, because both rely on plain-text files. And while I’m the only user on my laptop, and its disk is encrypted, those files are still readable by any application that I run.

In the case of the AWS config files, there’s no way around that: the SDK expects those files in a certain location, and it needs them to be plain text. With the environment variable scripts, I have more options. At the basic level, there’s security by obscurity: I can store those files somewhere outside of my home directory.

However, security by obscurity only stops a naive attacker. When I’m truly concerned about security, I encrypt the files. On Linux, GPG is a common tool: once I’ve encrypted a file, I can decrypt it to the console and copy-paste the export statements into my shell. To prevent those commands from being stored in the shell’s history (and therefore readable), I use export HISTCONTROL=ignorespace (for zsh, setopt histignorespace) and prefix each pasted line with a space.
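
For the curious, the GPG round-trip is nothing fancy; it looks something like this, using symmetric (passphrase-based) encryption and the same made-up filename as above:

# encrypt the credentials script, then get rid of the plain-text original
gpg --symmetric --output client-acme.sh.gpg client-acme.sh
shred -u client-acme.sh

# later: decrypt to the console, then copy-paste the export lines into the shell
gpg --decrypt client-acme.sh.gpg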

The main drawback to using environment variables is in switching roles. AWS doesn’t give you a simple way to do this; instead, you have to run the sts assume-role command:

aws sts assume-role \
    --role-arn arn:aws:iam::012345678901:role/ExampleAssumableRole \
    --role-session-name kgregory-$$

{
    "Credentials": {
        "AccessKeyId": "ASIA3QHRCQV7F9LBDSVC",
        "SecretAccessKey": "...",
        "SessionToken": "...",
        "Expiration": "2019-09-23T17:31:26Z"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "AROA3QHRCQV7E5XBZPPW5:kgregory-8505",
        "Arn": "arn:aws:sts::012345678901:assumed-role/ExampleAssumableRole/kgregory-4582"
    }
}

You’d then manually copy and paste the access key, secret key, and session token into export statements for the relevant environment variables. This is onerous, but the CLI’s query feature and a little shell scripting can make it easier (alternatively, I have a script that will assume a role and start a new shell with its credentials; a rough sketch of that approach follows the pipeline below):

aws sts assume-role \
    --role-arn arn:aws:iam::012345678901:role/ExampleAssumableRole \
    --role-session-name kgregory-$$ \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text | \
    sed -e 's/^/export AWS_ACCESS_KEY_ID=/' | \
    sed -e 's/\t/\nexport AWS_SECRET_ACCESS_KEY=/' | \
    sed -e 's/\t/\nexport AWS_SESSION_TOKEN=/'
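
And here’s the general shape of that “start a new shell” wrapper; this is a simplified sketch rather than the exact script I use:

#!/bin/bash
# assume the role whose ARN is passed as the first argument, then start a
# subshell whose environment contains the temporary credentials
ROLE_ARN=$1

CREDS=$(aws sts assume-role \
            --role-arn "$ROLE_ARN" \
            --role-session-name "$(whoami)-$$" \
            --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
            --output text)

# the query output is tab-separated, so cut can pick the fields apart
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

exec "${SHELL:-/bin/bash}"

Exiting that subshell discards the assumed-role credentials and drops you back into your original environment.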

To wrap up: as with many things in the AWS ecosystem, there are multiple ways to manage your user credentials. My preferred choice is environment variables, due to the added security options and the historical inconsistencies in how the various AWS SDKs handle configuration files.