Hot Swapping

Hot code swapping for Lambda functions using LocalStack’s code mounting

Complexity: ★☆☆☆☆
Time to read: 5 minutes
Edition: community/pro
Platform: any

Quickly iterating over Lambda function code can be quite cumbersome, as you need to deploy your function on every change. With LocalStack you can avoid this hurdle by mounting your code directly from the source folder. This way, any saved change inside your source file directly affects the already deployed Lambda function – without any redeployment!

Covered Topics

  • Application Configuration Examples
  • Deployment Configuration Examples
  • Useful Links

Application Configuration Examples

Code hot-swapping for JVM Lambdas

Since the lifetime of a Lambda container is usually limited, regular hot code swapping techniques are not applicable here.

In our implementation, we watch for file system changes under the project folder, build a FatJar, unzip it, and mount it into the Lambda Docker container.

We assume you already have:

  • watchman installed
  • a JVM project configured to build FatJars with your preferred build tool

First, create a watchman wrapper by using one of our examples.

Don’t forget to adjust permissions:

  $ chmod +x bin/watchman.sh
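
If you do not already have such a wrapper, the following is a minimal sketch of what bin/watchman.sh could look like; it assumes the watchman-wait helper that ships with watchman is on your PATH:

  #!/usr/bin/env bash
  # bin/watchman.sh <folder-to-watch> "<build command>"
  # Re-runs the build command whenever a file under the watched folder changes.
  WATCH_DIR="$1"
  BUILD_CMD="$2"

  $BUILD_CMD   # initial build

  # watchman-wait prints one line per changed file; --max-events 0 keeps it running indefinitely
  watchman-wait --max-events 0 "$WATCH_DIR" | while read -r changed; do
    echo "Change detected in $changed, rebuilding..."
    $BUILD_CMD
  done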

Now configure your build tool to unzip the FatJar into a folder, which will then be mounted into LocalStack. Here we use the Gradle build tool to unpack the FatJar into the build/hot folder:

  // We assume you are using something like `Shadow` plugin that comes with `shadowJar` task
  task buildHot(type: Copy) {
      from zipTree("${project.buildDir}/libs/${project.name}-all.jar")
      into "${project.buildDir}/hot"
  }
  buildHot.dependsOn shadowJar

Now run the following command to start watching your project in a hot-swapping mode:

  $ bin/watchman.sh src "./gradlew buildHot"

Please note that you still need to configure your deployment tool to use local code mounting. See the “Deployment Configuration Examples” section below for more information.

Code hot-swapping for Python Lambdas

We will show you how you can do this with a simple example function, taken directly from the AWS Lambda developer guide.

You can check out that code, or use your own Lambda functions to follow along. To use the example, just run:

  $ cd /tmp
  $ git clone git@github.com:awsdocs/aws-doc-sdk-examples.git

Starting up LocalStack

First, we need to make sure we start LocalStack with the right configuration. This is as simple as setting LAMBDA_REMOTE_DOCKER to 0 (see the Configuration Documentation for more information):

  $ LAMBDA_REMOTE_DOCKER=0 localstack start

Accordingly, if you are launching LocalStack via Docker or Docker Compose:

  # docker-compose.yml
  services:
    localstack:
      ...
      environment:
        ...
        - LAMBDA_REMOTE_DOCKER=false
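
If you start LocalStack with a plain docker run instead, the equivalent is roughly the following (a sketch; adjust ports and volumes to your setup, and note that the Docker socket must be mounted so LocalStack can spawn Lambda containers):

  $ docker run --rm -it \
      -p 4566:4566 \
      -e LAMBDA_REMOTE_DOCKER=0 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      localstack/localstack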

Creating the Lambda Function

To create the Lambda function, you just need to take care of two things:

  1. Deploy via an S3 Bucket. You need to use the magic variable __local__ as the bucket.
  2. Set the S3 key to the path of the directory your lambda function resides in. The handler is then referenced by the filename of your lambda code and the function in that code that needs to be invoked.

So, using the AWS example, this would be:

  $ awslocal lambda create-function --function-name my-cool-local-function \
      --code S3Bucket="__local__",S3Key="/tmp/aws-doc-sdk-examples/python/example_code/lambda/boto_client_examples" \
      --handler lambda_handler_basic.lambda_handler \
      --runtime python3.8 \
      --role cool-stacklifter

You can also check out some of our “Deployment Configuration Examples”.

We can also quickly make sure that it works by invoking it with a simple payload:

  $ awslocal lambda invoke --function-name my-cool-local-function --payload '{"action": "square", "number": 3}' output.txt

The invocation itself returns:

  {
      "StatusCode": 200,
      "LogResult": "",
      "ExecutedVersion": "$LATEST"
  }

and output.txt contains:

  1. {"result":9}

Changing things up

Now that we have everything up and running, the fun begins. Because the function is mounted as a file into the executing container, any change we save to the file is picked up instantly.

For example, we can now make a minor change to the API and replace the response in line 41 with the following:

  response = {'math_result': result}
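
Invoke the function again with the same command as before:

  $ awslocal lambda invoke --function-name my-cool-local-function --payload '{"action": "square", "number": 3}' output.txt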

Without redeploying or updating the function, output.txt now contains:

  1. {"math_result":9}

Cool!

Usage with Virtualenv

For virtualenv-driven projects, all dependencies should be made available to the Python interpreter at runtime. There are different ways to achieve that, including:

  • expanding the Python module search path in your Lambda handler
  • creating a watchman script to copy the libraries

Expanding the module search path in your Lambda handler

The easiest approach is to expand the module search path (sys.path) and add the site-packages folder inside the virtualenv. We can add the following two lines of code at the top of the Lambda handler script:

  import sys, glob
  sys.path.insert(0, glob.glob(".venv/lib/python*/site-packages")[0])
  ...
  import some_lib_from_virtualenv  # import your own modules here

This way you can easily import modules from your virtualenv, without having to change the file system layout.

Note: As an alternative to modifying sys.path, you could also set the PYTHONPATH environment variable when creating your Lambda function, to add the additional path.
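
For example, a sketch of this approach with awslocal; the function name and paths are illustrative, and it assumes the virtualenv lives in a .venv folder inside the mounted code directory (which ends up under /var/task in the Lambda container):

  $ awslocal lambda create-function --function-name my-venv-function \
      --code S3Bucket="__local__",S3Key="$(pwd)" \
      --handler lambda_handler_basic.lambda_handler \
      --runtime python3.8 \
      --role cool-stacklifter \
      --environment 'Variables={PYTHONPATH=/var/task/.venv/lib/python3.8/site-packages}'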

Using a watchman script to copy libraries

Another alternative is to implement a watchman script that prepares a dedicated folder for hot code swapping.

In our example, we use the build/hot folder as the mounting point for our Lambdas.

First, create a watchman wrapper by using one of our examples (the same wrapper as in the JVM example above).

After that, you can use the following Makefile snippet, or implement another shell script to prepare the codebase for hot swapping:

  BUILD_FOLDER ?= build
  PROJECT_MODULE_NAME = my_project_module

  build-hot:
  	rm -rf $(BUILD_FOLDER)/hot && mkdir -p $(BUILD_FOLDER)/hot
  	cp -r $(VENV_DIR)/lib/python$(shell python --version | grep -oE '[0-9]+\.[0-9]+')/site-packages/* $(BUILD_FOLDER)/hot/
  	cp -r $(PROJECT_MODULE_NAME) $(BUILD_FOLDER)/hot/$(PROJECT_MODULE_NAME)
  	cp *.toml $(BUILD_FOLDER)/hot

  watch:
  	bin/watchman.sh $(PROJECT_MODULE_NAME) "make build-hot"

  .PHONY: build-hot watch

To run the example above, run make watch. The script copies the project module PROJECT_MODULE_NAME along with all dependencies into the build/hot folder, which is then mounted into LocalStack’s Lambda container.

Deployment Configuration Examples

Serverless Framework Configuration

Enable local code mounting:

  custom:
    localstack:
      ...
      lambda:
        mountCode: true

  # or if you need to enable code mounting only for specific stages
  custom:
    stages:
      local:
        mountCode: true
      testing:
        mountCode: false
    localstack:
      stages:
        - local
        - testing
      lambda:
        mountCode: ${self:custom.stages.${opt:stage}.mountCode}

Pass the LAMBDA_MOUNT_CWD environment variable with the path to the built code directory (in our case, the folder with the unzipped FatJar):

  $ LAMBDA_MOUNT_CWD=$(pwd)/build/hot serverless deploy --stage local

AWS Cloud Development Kit (CDK) Configuration

  package org.localstack.cdkstack

  import java.util.UUID
  import software.amazon.awscdk.core.Construct
  import software.amazon.awscdk.core.Duration
  import software.amazon.awscdk.core.Stack
  import software.amazon.awscdk.services.lambda.*
  import software.amazon.awscdk.services.s3.Bucket

  private val STAGE = System.getenv("STAGE") ?: "local"
  private val LAMBDA_MOUNT_CWD = System.getenv("LAMBDA_MOUNT_CWD") ?: ""
  private const val JAR_PATH = "build/libs/localstack-sampleproject-all.jar"

  class ApplicationStack(parent: Construct, name: String) : Stack(parent, name) {

      init {
          val lambdaCodeSource = this.buildCodeSource()

          SingletonFunction.Builder.create(this, "ExampleFunctionOne")
              .code(lambdaCodeSource)
              .handler("org.localstack.sampleproject.api.LambdaApi")
              .environment(mapOf("FUNCTION_NAME" to "functionOne"))
              .timeout(Duration.seconds(30))
              .runtime(Runtime.JAVA_11)
              .uuid(UUID.randomUUID().toString())
              .build()
      }

      /**
       * Mount code for hot-reloading when STAGE=local
       */
      private fun buildCodeSource(): Code {
          if (STAGE == "local") {
              val bucket = Bucket.fromBucketName(this, "HotReloadingBucket", "__local__")
              return Code.fromBucket(bucket, LAMBDA_MOUNT_CWD)
          }
          return Code.fromAsset(JAR_PATH)
      }
  }

Then, to bootstrap and deploy the stack, run the following shell commands:

  $ export STAGE=local LAMBDA_MOUNT_CWD=$(pwd)/build/hot && \
    cdklocal bootstrap aws://000000000000/$AWS_REGION && \
    cdklocal deploy

Terraform Configuration

  variable "STAGE" {
    type    = string
    default = "local"
  }

  variable "AWS_REGION" {
    type    = string
    default = "us-east-1"
  }

  variable "JAR_PATH" {
    type    = string
    default = "build/libs/localstack-sampleproject-all.jar"
  }

  variable "LAMBDA_MOUNT_CWD" {
    type = string
  }

  provider "aws" {
    access_key                  = "test_access_key"
    secret_key                  = "test_secret_key"
    region                      = var.AWS_REGION
    s3_force_path_style         = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_requesting_account_id  = true

    endpoints {
      apigateway       = var.STAGE == "local" ? "http://localhost:4566" : null
      cloudformation   = var.STAGE == "local" ? "http://localhost:4566" : null
      cloudwatch       = var.STAGE == "local" ? "http://localhost:4566" : null
      cloudwatchevents = var.STAGE == "local" ? "http://localhost:4566" : null
      iam              = var.STAGE == "local" ? "http://localhost:4566" : null
      lambda           = var.STAGE == "local" ? "http://localhost:4566" : null
      s3               = var.STAGE == "local" ? "http://localhost:4566" : null
    }
  }

  resource "aws_iam_role" "lambda-execution-role" {
    name = "lambda-execution-role"

    assume_role_policy = <<EOF
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRole",
        "Principal": {
          "Service": "lambda.amazonaws.com"
        },
        "Effect": "Allow",
        "Sid": ""
      }
    ]
  }
  EOF
  }

  resource "aws_lambda_function" "exampleFunctionOne" {
    s3_bucket        = var.STAGE == "local" ? "__local__" : null
    s3_key           = var.STAGE == "local" ? var.LAMBDA_MOUNT_CWD : null
    filename         = var.STAGE == "local" ? null : var.JAR_PATH
    function_name    = "ExampleFunctionOne"
    role             = aws_iam_role.lambda-execution-role.arn
    handler          = "org.localstack.sampleproject.api.LambdaApi"
    runtime          = "java11"
    timeout          = 30
    source_code_hash = filebase64sha256(var.JAR_PATH)

    environment {
      variables = {
        FUNCTION_NAME = "functionOne"
      }
    }
  }
Then initialize and apply the configuration, passing the stage and mount directory:

  $ terraform init && \
    terraform apply -var "STAGE=local" -var "LAMBDA_MOUNT_CWD=$(pwd)/build/hot"
