Terraform

Use the Terraform Infrastructure as Code framework with LocalStack

Overview

Terraform allows you to automate the management of AWS resources such as containers, Lambda functions, and S3 buckets by declaring them in the HashiCorp Configuration Language (HCL). On this page we discuss how Terraform and LocalStack can be used together. If you are adapting an existing configuration, you may be able to skip certain steps at your own discretion.

Example

If you have not done so yet, install Terraform.

Using Terraform with LocalStack requires little extra configuration. Apart from some boilerplate Terraform expects, there are essentially only two things to take care of in the configuration: mock credentials for the AWS provider and the local service endpoints.

Before we start changing the configuration, create and change into a new directory for this sample:

```bash
$ mkdir terraform_quickstart && cd terraform_quickstart
```

Inside this directory, create a file called main.tf; the following changes go into this file.

Start by adding a minimal S3 bucket configuration to main.tf:

```hcl
resource "aws_s3_bucket" "test-bucket" {
  bucket = "my-bucket"
}
```
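Optionally, you can surface the bucket name after deployment with an output block. This is not part of the original sample; the output name bucket_name is our own choice:

```hcl
# Optional: print the bucket name once `apply` completes.
output "bucket_name" {
  value = aws_s3_bucket.test-bucket.bucket
}
```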

Using the tflocal script

We provide tflocal, a thin wrapper script around the terraform command line client. tflocal takes care of automatically configuring the local service endpoints, which allows you to easily deploy your unmodified Terraform scripts against LocalStack.

You can install the tflocal command via pip (requires a local Python installation):

```bash
$ pip install terraform-local
```

Once installed, the tflocal command should be available, with the same interface as the terraform command line:

```bash
$ tflocal --help
Usage: terraform [global options] <subcommand> [args]
...
```

Note: Alternatively, you can manually configure the local endpoints in the provider section of your Terraform script - see Manual Configuration below.

Deployment

After starting LocalStack, you can deploy the S3 bucket via tflocal and interact with the (still empty) bucket via awslocal!

All you need to do is to initialize Terraform:

```bash
$ tflocal init
```

… and then provision the S3 bucket specified in the configuration:

```bash
$ tflocal apply
```
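Once the apply has gone through, you can check that the bucket actually exists in LocalStack, for example by listing all buckets with the awslocal CLI (this requires a running LocalStack instance):

```bash
$ awslocal s3 ls
```

If the deployment succeeded, my-bucket should appear in the listing.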

Manual Configuration

As an alternative to using the tflocal script, you may also manually configure the local service endpoints and credentials. We’ll walk through the detailed steps in the following sections.

General Configuration

First, we have to specify mock credentials for the AWS provider:

```hcl
provider "aws" {
  access_key = "test"
  secret_key = "test"
  region     = "us-east-1"
}
```

Request Management

Second, we need to avoid issues with request routing and authentication (which we do not need locally). Therefore we supply some general parameters:

```hcl
provider "aws" {
  access_key = "test"
  secret_key = "test"
  region     = "us-east-1"

  # only required for non virtual hosted-style endpoint use case.
  # https://registry.terraform.io/providers/hashicorp/aws/latest/docs#s3_force_path_style
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
}
```

Services

Additionally, we have to point the individual services to LocalStack. In the case of S3, this looks like the following snippet; here we opted to use the virtual hosted-style endpoint.

```hcl
  endpoints {
    s3 = "http://s3.localhost.localstack.cloud:4566"
  }
```

Note: If you run into issues resolving this DNS record, you can fall back to http://localhost:4566 in combination with the provider setting s3_force_path_style = true. The S3 service endpoint differs slightly from the other service endpoints, because AWS is deprecating path-style based access for hosting buckets.
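With that fallback, the provider configuration would look like the following sketch (same mock credentials as above, but path-style access against localhost):

```hcl
provider "aws" {
  access_key = "test"
  secret_key = "test"
  region     = "us-east-1"

  # Fall back to path-style requests when the virtual hosted-style
  # DNS name cannot be resolved.
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    s3 = "http://localhost:4566"
  }
}
```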

Final Configuration

The final (minimal) configuration to deploy an S3 bucket thus looks like this:

```hcl
provider "aws" {
  access_key                  = "test"
  secret_key                  = "test"
  region                      = "us-east-1"
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    s3 = "http://s3.localhost.localstack.cloud:4566"
  }
}

resource "aws_s3_bucket" "test-bucket" {
  bucket = "my-bucket"
}
```

Endpoint Configuration

Below is a configuration example with additional service endpoints. Again, these provider configurations should no longer be required if you use the tflocal script (see above).

```hcl
provider "aws" {
  access_key                  = "test"
  secret_key                  = "test"
  region                      = "us-east-1"
  s3_force_path_style         = false
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    apigateway     = "http://localhost:4566"
    apigatewayv2   = "http://localhost:4566"
    cloudformation = "http://localhost:4566"
    cloudwatch     = "http://localhost:4566"
    dynamodb       = "http://localhost:4566"
    ec2            = "http://localhost:4566"
    es             = "http://localhost:4566"
    elasticache    = "http://localhost:4566"
    firehose       = "http://localhost:4566"
    iam            = "http://localhost:4566"
    kinesis        = "http://localhost:4566"
    lambda         = "http://localhost:4566"
    rds            = "http://localhost:4566"
    redshift       = "http://localhost:4566"
    route53        = "http://localhost:4566"
    s3             = "http://s3.localhost.localstack.cloud:4566"
    secretsmanager = "http://localhost:4566"
    ses            = "http://localhost:4566"
    sns            = "http://localhost:4566"
    sqs            = "http://localhost:4566"
    ssm            = "http://localhost:4566"
    stepfunctions  = "http://localhost:4566"
    sts            = "http://localhost:4566"
  }
}
```
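Note that the endpoint URL is repeated for every service. If you prefer, you can factor it out into a local value; the sketch below shows the idea for a few services (the local name ls_endpoint is our own, not part of the original example):

```hcl
locals {
  # Single place to change the LocalStack edge endpoint.
  ls_endpoint = "http://localhost:4566"
}

provider "aws" {
  access_key                  = "test"
  secret_key                  = "test"
  region                      = "us-east-1"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    sqs = local.ls_endpoint
    sns = local.ls_endpoint
    # S3 keeps its virtual hosted-style endpoint.
    s3  = "http://s3.localhost.localstack.cloud:4566"
  }
}
```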

Further Reading

For more examples, you can take a look at our Terraform sample or the Terraform LocalStack section.

Last modified May 17, 2022: fix capitalization of LocalStack in affected files (#157) (6206611c)