LocalStack + Terraform + CircleCI for Lambda without AWS

On my current project, we’re using LocalStack in lieu of AWS for development and test. In this post, I’ll walk through how we’ve set it up and what we can do with it.

LocalStack?

LocalStack is a collection of AWS-compatible services you can run yourself. It’s meant for development and testing, not as a target to host real production instances of your application.

It doesn’t cover 100% of the AWS alphabet soup, but it implements many of the most popular services.

Getting Started

The easiest way to fire up LocalStack is to run its public Docker image:

docker run -t -i localstack/localstack

While this is technically running a bunch of services, it’s not very useful without a little configuration. Create a docker-compose.yml based on the one in the LocalStack repo, and adjust it to your needs. For example, my project currently uses s3, iam, and lambda on LocalStack 0.10.5:


version: "2.1"

services:
  localstack:
    image: localstack/localstack:0.10.5
    ports:
      - "4567-4597:4567-4597"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=s3,iam,lambda
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=docker-reuse
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

Fire this up with:

TMPDIR=/private$TMPDIR docker-compose up

(The TMPDIR prefix works around a macOS quirk: $TMPDIR lives under /var, which is a symlink to /private/var, and Docker for Mac can only mount the resolved /private path.)

To access services with the AWS CLI, configure bogus credentials (they aren’t checked) and specify a custom endpoint with the appropriate port.

> aws configure
AWS Access Key ID [None]: foo
AWS Secret Access Key [None]: bar
Default region name [us-east-1]:
Default output format [None]:
> aws --endpoint-url=http://localhost:4572 s3 mb s3://local-bucket
make_bucket: local-bucket
> aws --endpoint-url=http://localhost:4572 s3 ls
2006-02-03 11:45:09 local-bucket
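
Alternatively, you can skip aws configure and export the same bogus credentials as environment variables; the CircleCI configuration later in this post takes this approach:

export AWS_ACCESS_KEY_ID=foo
export AWS_SECRET_ACCESS_KEY=bar
export AWS_DEFAULT_REGION=us-east-1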

Terraform

We’re using Terraform (a cross-platform Infrastructure-as-Code tool) to provision resources in AWS. We use it for LocalStack too!

Terraform documents how to target LocalStack, which entails customizing the AWS provider with custom endpoints and a few other options.

To target both LocalStack and AWS with the same Terraform code, despite their different provider configurations and state backends, we keep two separate Terraform projects (i.e. entry points) that reference a shared module:

# remote

terraform {
  backend "s3" {
    bucket = "foobar-terraform"
    key    = "state"
    region = "us-east-1"
  }
}

provider "aws" {
  region = "us-east-1"
}

module "foo-bar" {
  source   = "../foo-bar"
  env_name = terraform.workspace
}

# local

terraform {
  backend "local" {}
}

provider "aws" {
  access_key                  = "mock_access_key"
  region                      = "us-east-1"
  s3_force_path_style         = true
  secret_key                  = "mock_secret_key"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    s3     = "http://0.0.0.0:4572"
    lambda = "http://0.0.0.0:4574"
    iam    = "http://0.0.0.0:4593"
  }
}

module "foo-bar" {
  source   = "../foo-bar"
  env_name = "local"
}
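
The shared foo-bar module itself isn’t shown here; its resources are specific to each project. As a rough sketch (the bucket, role, and lambda.zip artifact are hypothetical, though the function name matches the ExampleLambda invoked later in this post), it might contain something like:

# foo-bar/main.tf (hypothetical sketch)

variable "env_name" {}

resource "aws_s3_bucket" "data" {
  bucket = "foo-bar-${var.env_name}"
}

resource "aws_iam_role" "lambda" {
  name = "foo-bar-lambda-${var.env_name}"

  # Standard trust policy allowing Lambda to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_lambda_function" "example" {
  function_name = "ExampleLambda"
  filename      = "${path.module}/lambda.zip" # hypothetical build artifact
  handler       = "index.handler"
  runtime       = "nodejs12.x"
  role          = aws_iam_role.lambda.arn
}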

Now we can terraform apply to create local instances of our project’s AWS resources.
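
With LocalStack up, that looks like this (the directory name is hypothetical; run it from whichever directory holds the local entry point):

cd terraform/local
terraform init -input=false
terraform apply -auto-approve -input=false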

CircleCI

We can run our backend services locally, but we also want to use them in CI. My efforts to get LocalStack running in CircleCI spanned two approaches: Circle’s Docker and Machine executors.

Docker Executor

The Docker executor is faster and cheaper, so I prefer to use it when I can. With it, we can configure multiple Docker containers that our job depends on:


jobs:
  docker-job:
    docker:
      - image: circleci/node:12 # <-- the job steps run in this container
      - image: localstack/localstack:0.10.5 # <-- this is spun up for us by Circle
        environment:
          DEBUG: 1
          SERVICES: s3,iam,lambda

    environment:
      AWS_DEFAULT_REGION: us-east-1
      AWS_SECRET_ACCESS_KEY: foo
      AWS_ACCESS_KEY_ID: bar

    steps:
      - checkout

      - aws-cli/install

      - aws-cli/setup

      - install_terraform

      - run: env

      - run:
          name: Wait for localstack ports
          command: ./wait-for-localstack-ports.sh

      - run:
          name: aws stuff
          command: |
            aws --version
            aws --endpoint-url=http://localhost:4572 s3 mb s3://local-bucket
            aws --endpoint-url=http://localhost:4572 s3 ls

      - run:
          name: terraform
          command: |
            terraform init -input=false
            terraform apply -auto-approve -input=false

      - run:
          name: lambda
          command: |
            aws --endpoint-url=http://localhost:4574 lambda list-functions
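
The wait-for-localstack-ports.sh script referenced above isn’t shown in this post; the idea is simply to block until the LocalStack service ports accept connections. A minimal sketch, assuming the s3/lambda/iam ports from earlier:

#!/usr/bin/env bash
# Block until each LocalStack service port accepts TCP connections
for port in 4572 4574 4593; do # s3, lambda, iam
  until nc -z localhost "$port"; do
    echo "Waiting for port $port..."
    sleep 1
  done
done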

This gets us pretty far, but on my project we hit a wall with Node.js Lambdas in this environment. For a handful of languages, LocalStack currently requires LAMBDA_EXECUTOR=docker, which spins up a separate Docker container for each Lambda invocation. That doesn’t work trivially on Circle’s Docker executor, where the job itself is already running inside a container. It may be possible to get something working with CircleCI’s remote Docker feature, but we instead moved to the Machine Executor.

Machine Executor

Circle’s Machine Executor runs jobs in a conventional Linux VM rather than a Docker container. In this environment we can run docker-compose just as we do locally: LocalStack’s LAMBDA_EXECUTOR=docker works as expected, and we can invoke our Node.js Lambda functions.


  machine-job:
    machine:
      image: ubuntu-1604:201903-01

    environment:
      AWS_DEFAULT_REGION: us-east-1
      AWS_SECRET_ACCESS_KEY: foo
      AWS_ACCESS_KEY_ID: bar

    steps:
      - checkout

      - run:
          name: docker-compose up
          command: docker-compose up
          background: true
          environment:
            TMPDIR: ~/localstack
            DEBUG: 1

      - aws-cli/install

      - aws-cli/setup

      - install_terraform

      - run:
          name: Wait for localstack
          command: |
            ./wait-for-localstack-ports.sh
            sleep 2
            ./wait-for-localstack-ready.sh

      - run:
          name: aws stuff
          command: |
            aws --version
            aws --endpoint-url=http://localhost:4572 s3 mb s3://local-bucket
            aws --endpoint-url=http://localhost:4572 s3 ls

      - run:
          name: terraform
          command: |
            terraform init -input=false
            terraform apply -auto-approve -input=false

      - run:
          name: lambda
          command: |
            aws --endpoint-url=http://localhost:4574 lambda list-functions
            aws --endpoint-url=http://localhost:4574 lambda invoke --function-name ExampleLambda /dev/stdout
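
Likewise, wait-for-localstack-ready.sh isn’t shown; one simple approach is to block until LocalStack’s "Ready." line shows up in the docker-compose logs (the exact log text is an assumption about this LocalStack version):

#!/usr/bin/env bash
# Block until LocalStack reports it has finished starting up
until docker-compose logs localstack | grep -q "Ready\."; do
  echo "Waiting for LocalStack..."
  sleep 2
done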

What’s Next

Now that we can run our own AWS services locally and in CI, what can we do with them? Here are a few highlights from my recent experience:

  • Run your full stack locally (local client targeting local server). Lose your internet connection, smile smugly, and keep working.
  • Iterate on Terraform infrastructure without affecting cloud resources shared by other team members.
  • Write system tests for your Lambda functions, invoking them programmatically with the AWS SDK (as opposed to solely unit-testing their internals); a sketch follows below.
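
For that last point, here’s a minimal sketch in Node.js using the AWS SDK (v2), pointing the Lambda client at LocalStack; the payload and logged assertion are illustrative:

// invoke-example-lambda.js
const AWS = require("aws-sdk");

// Point the SDK at LocalStack's Lambda endpoint; credentials are bogus
const lambda = new AWS.Lambda({
  endpoint: "http://localhost:4574",
  region: "us-east-1",
  accessKeyId: "foo",
  secretAccessKey: "bar",
});

async function main() {
  const result = await lambda
    .invoke({
      FunctionName: "ExampleLambda",
      Payload: JSON.stringify({ hello: "world" }),
    })
    .promise();

  // A real system test would assert on the response payload
  console.log(result.StatusCode, result.Payload.toString());
}

main();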

I hope this post has given you an idea for how you might use LocalStack on your projects. An example repo covering much of the content of this post can be found on GitHub.
