LocalStack: Develop and Test Lambdas Locally

Anthony Rimet & Thibault Barrat
• 7 min read

Lambda functions are great for offloading complex processing from your main application. But developing and testing them directly on AWS can be a pain. SAM and LocalStack can improve the developer experience significantly by allowing you to define your infrastructure as code and run everything locally.

This article shows you how we set up this stack and the problems we encountered (spoiler: Lambda Layers on LocalStack are not free). In our opinion, LocalStack has changed the game: we can now develop and test our Lambdas locally.

What Exactly Is Lambda?

AWS Lambda is a serverless compute service: it runs your code without you having to manage servers. You write the code, AWS executes it when needed, and you only pay for the execution time.

And behind the scenes, it’s basically just a function.

import json

def lambda_handler(event, context):
    # event contains the input data
    # context provides information about the execution environment
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Hello from Lambda!'})
    }

In our case, we use Lambdas to:

  • Communicate with S3
  • Send emails via cron jobs
  • Perform calculations that would take too long in SQL

Lambdas can be created via the AWS web console, or via the aws CLI. In both cases, it’s a pain: you have to define the infrastructure, manage permissions, deploy the code, and so on.
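
To give an idea, creating the same function by hand with the aws CLI looks something like this (a sketch; the IAM role ARN is a placeholder you would have to create first):

zip function.zip app.py

aws lambda create-function \
  --function-name hello-world-function \
  --runtime python3.12 \
  --handler app.lambda_handler \
  --role arn:aws:iam::123456789012:role/my-lambda-role \
  --zip-file fileb://function.zip

And that is before wiring up an API Gateway trigger or any IAM policies.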

This is where SAM becomes indispensable.

SAM: Infrastructure For Non-DevOps

AWS SAM (Serverless Application Model) is a framework that drastically simplifies the creation of Lambdas. Everything is done via a YAML file.

A typical SAM project looks like this:

my-project/
├── template.yaml          # Defines the entire infrastructure
├── samconfig.toml         # Deployment configuration
├── src/handlers/          # Your Python code
└── layers/                # Code shared between Lambdas

The template.yaml file is the core of the project. Here is a minimalist example:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: hello-world-function
      CodeUri: src/handlers/hello_world/
      Handler: app.lambda_handler
      Runtime: python3.12
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get

That’s it! SAM builds and deploys the Lambda with two simple commands.

sam build   # Build the project
sam deploy  # Deploy to AWS

Of course, you can add S3, API Gateways, IAM permissions—everything is managed in YAML. We won’t go into detail here, as the SAM documentation is very comprehensive (although good luck finding the information you need easily).
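
For example, adding a bucket and granting the function read access to it only takes a few more lines (a sketch; MyBucket and the S3ReadPolicy scope are illustrative):

Resources:
  MyBucket:
    Type: AWS::S3::Bucket

  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/handlers/hello_world/
      Handler: app.lambda_handler
      Runtime: python3.12
      Policies:
        - S3ReadPolicy:              # SAM policy template
            BucketName: !Ref MyBucket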

How Does SAM Know Where To Deploy?

SAM does not “guess” the target. It relies on two things:

  • Your AWS credentials/settings (profile/environment variables) to know which account and region to deploy to.
  • A samconfig.toml file (or command line options) to store the stack name, region, artifacts bucket, etc.

Here’s how it works:

  1. The first time, run a guided deployment that will ask the right questions and save the answers.

    sam deploy --guided
  2. SAM saves these choices in samconfig.toml and reuses them for subsequent sam deploy commands.

    Here is a minimal example of samconfig.toml (generated by the guided deployment):

    version = 0.1
    
    [default.deploy.parameters]
    stack_name = "hello-world"
    region = "eu-west-1"
    resolve_s3 = true # creates or selects a bucket for artifacts
    capabilities = "CAPABILITY_IAM"
  3. The account and region come from the aws CLI that SAM uses under the hood: via --profile/--region or the variables AWS_PROFILE, AWS_ACCESS_KEY_ID/SECRET_ACCESS_KEY, AWS_DEFAULT_REGION (see the example after this list).

  4. Deployment is performed by CloudFormation in the selected account/region, with the specified stack name.
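
In practice, switching targets is just a matter of flags or environment variables (a sketch; the staging profile is an illustrative name):

sam deploy --profile staging --region eu-west-1

# or, equivalently, via environment variables
AWS_PROFILE=staging AWS_DEFAULT_REGION=eu-west-1 sam deploy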

Well, that’s cool, but it still doesn’t solve our main problem. Every time we want to test something, we have to deploy it on a real AWS instance. LocalStack solves this problem.

LocalStack: AWS on Your Machine

LocalStack is a cloud service emulator that runs locally in a container or in your continuous integration environment.

  • Free: No skyrocketing AWS bills (though premium features exist, which we’ll discuss later)
  • Fast: Deploy in 2 seconds instead of 2 minutes
  • Safe: You don’t break anything in the real AWS environment

For us, it transforms how we develop. We can test the Lambda invocation directly, rather than just the code contained in the Lambda (there’s a subtle difference!).
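
To make that difference concrete, here is what the two kinds of tests look like (a sketch; it assumes the handler file is importable from the project root):

# test_app.py: calling the handler directly only tests your Python code
from src.handlers.hello_world.app import lambda_handler

def test_direct_call():
    result = lambda_handler({}, None)
    assert result["statusCode"] == 200

Invoking the function through LocalStack (with awslocal lambda invoke, shown below) additionally exercises the packaging, the runtime, and the event plumbing.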

Install with Docker

A simple docker-compose.yml is sufficient.

version: '3.8'

services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4566:4566" # Main LocalStack endpoint
    environment:
      - SERVICES=lambda,apigateway,s3,cloudformation,logs
      - DEBUG=1
      - LAMBDA_EXECUTOR=docker
      - AWS_DEFAULT_REGION=eu-west-1
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"

Start LocalStack

Start a local AWS cloud with:

docker compose up -d

LocalStack now runs on http://localhost:4566.
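
You can check that the emulator is up, and see which services are enabled, with a quick health check:

curl http://localhost:4566/_localstack/health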

Install the CLI Tools

To deploy to LocalStack instead of AWS, we use the awslocal and samlocal command-line tools, which are wrappers around the aws and sam commands that point directly to LocalStack.

pip install awscli-local aws-sam-cli-local
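
A quick smoke test confirms that the wrappers talk to LocalStack rather than the real AWS (demo-bucket is just an illustrative name):

awslocal s3 mb s3://demo-bucket
awslocal s3 ls   # demo-bucket should appear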

LocalStack Development Workflow

Now that both LocalStack and SAM are installed, the development workflow for Lambdas becomes trivial.

  1. Code your Lambda in src/handlers/ and add it to template.yaml

  2. Build

    samlocal build
  3. Deploy on LocalStack

    samlocal deploy
  4. Test

    awslocal lambda invoke \
       --function-name hello-world-function \
       response.json
    
    cat response.json  # See the result

Need to make changes? Modify your code, run samlocal build && samlocal deploy again, and it will be redeployed in seconds. You can iterate quickly.
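
For reference, with the minimal handler shown at the beginning of this article, response.json should contain something like:

{"statusCode": 200, "body": "{\"message\": \"Hello from Lambda!\"}"}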

Layers: Sharing Code Between Lambdas

After a while, you will have several Lambdas. And these Lambdas may share code—utility functions, response formatting, etc.

The problem is that Lambdas are independent. If you need to share a function between two Lambdas, you have to copy and paste it into each Lambda.

Actually, that’s not quite true—Lambda Layers solve this problem. A Layer is a package of reusable code that multiple Lambdas can share.

In our case, we’re going to create a Layer with our response formatting functions, so that all our Lambdas return the same JSON format.

A typical layer is composed of three files:

my-project/
└── layers/
    └── custom_utils/
        ├── __init__.py
        ├── display.py          # Your utility functions
        └── requirements.txt    # Dependencies (if any)

Here are the utility functions we want to share:

# layers/custom_utils/display.py
import json

def format_response(status_code, data):
    return {
        "statusCode": status_code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(data)
    }

def get_greeting(name):
    return f"Hello, {name}!"

To use this layer, we must declare it in template.yaml:

Resources:
  CustomUtilsLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: custom-utils-layer
      ContentUri: layers/custom_utils/
      CompatibleRuntimes:
        - python3.12

  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/handlers/hello_world/
      Handler: app.lambda_handler
      Layers:
        - !Ref CustomUtilsLayer  # To associate the Layer

Then you can import the utility functions directly in your Lambda:

# src/handlers/hello_world/app.py
from custom_utils.display import format_response, get_greeting

def lambda_handler(event, context):
    # queryStringParameters is None when there is no query string
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "World")
    greeting = get_greeting(name)
    return format_response(200, {"message": greeting})

Simple, elegant, reusable. Except that… it doesn’t work on LocalStack 😅

The Problem with Layers on LocalStack

On AWS, when you use a Layer, AWS automatically mounts it in /opt/python (if you’re developing in Python, of course) and everything works. On LocalStack… not so much. You deploy, you test, and boom:

ModuleNotFoundError: No module named 'custom_utils'

LocalStack creates the Layer and associates it with your Lambda, but it doesn’t mount it in the execution container. Why? Because it’s a premium feature of LocalStack Pro.

In our case, upgrading to LocalStack Pro is not an option. The solution? A little Docker workaround. Not very elegant, but it works!

The Workaround: Mounting Layers Manually

The idea is simple: tell LocalStack to mount your layers/ folder directly into the Lambda containers via Docker volumes.

Modify your docker-compose.yml:

services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      - SERVICES=lambda,apigateway,s3,cloudformation,logs
      - DEBUG=1
      - LAMBDA_EXECUTOR=docker
      - LAMBDA_DOCKER_NETWORK=localstack-sam-network
      - AWS_DEFAULT_REGION=eu-west-1
      # 🔧 THE WORKAROUND: Mount the layers in the Lambda containers.
      - LAMBDA_DOCKER_FLAGS=-v /var/www/hackday/localstack-test/layers:/opt/python:ro
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      # 🔧 THE WORKAROUND: Mount the layers in LocalStack
      - "/var/www/hackday/localstack-test/layers:/opt/python:ro"
    networks:
      - localstack-sam-network

networks:
  localstack-sam-network:
    driver: bridge

Important: Replace /var/www/hackday/localstack-test/layers with the absolute path to your layers folder.

How does it work?

  1. LAMBDA_DOCKER_FLAGS tells LocalStack: “When you create a Lambda container, mount this volume in it.”
  2. The volume mounts your layers/ folder in /opt/python of the Lambda container.
  3. Python can now import from /opt/python/custom_utils/.
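
After recreating the LocalStack container with this configuration, a redeploy and an invoke should confirm that the import now resolves:

docker compose up -d --force-recreate
samlocal build && samlocal deploy
awslocal lambda invoke --function-name hello-world-function response.json
cat response.json   # no more ModuleNotFoundError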

Limitations of the workaround

Let’s be honest, this workaround has its flaws:

  1. No versioning: All Lambdas use the same version of the Layer.
  2. Not identical to AWS: on real AWS, a Layer’s archive is extracted under /opt (with Python packages expected in a python/ folder), so the layout differs from our flat mount. You can always pay for the premium version to get full parity.
  3. No hot reload: Even with a mounted volume, LocalStack “freezes” the package at deployment time. So if you modify the Layer code, you have to redeploy the Lambda. The premium version of LocalStack may support hot reload, but we haven’t tested it.

Conclusion

That’s how we set up our Lambda dev environment. It’s not perfect—the Layers workaround is a hack—but it works and saves us a ton of time.

The complete setup:

  1. LocalStack to emulate AWS locally
  2. SAM to define the infrastructure
  3. A Docker workaround for Layers
  4. Makefiles for automation (sketched below)
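
The Makefile is mostly a convenience wrapper around the commands shown above. A minimal sketch (the targets are ours, not a convention; recipe lines must be indented with tabs):

build:
	samlocal build

deploy: build
	samlocal deploy

invoke:
	awslocal lambda invoke --function-name hello-world-function response.json
	cat response.json

.PHONY: build deploy invoke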

What’s changed in our daily routine:

  • End-to-end feature development done locally
  • Unlimited testing without watching the AWS bill skyrocket
  • Everyone has the same development environment (thanks to docker-compose)

For our use case (SQL backend + Hasura + Python Lambdas for complex processing), this is the ideal setup. We keep Hasura for standard CRUD operations, and we bring out the Lambda artillery when we need Python.

If you’re in a similar situation, we strongly encourage you to try LocalStack. Yes, there are a few hacks involved (like the Layers workaround), but the productivity gains are huge.

And special mention to SAM, which makes infrastructure management super simple.

You can find the complete code of our example on GitHub.

Authors

Anthony Rimet

Full-stack web developer at marmelab, Anthony seeks to improve and learn every day. He likes basketball, motorsports, and is a big Harry Potter fan.

Thibault Barrat

Full-stack web developer at marmelab, Thibault also manages a local currency called "Le Florain", used by dozens of French shops around Nancy to encourage local exchanges.
