Production Installation

Installing Boundary in a production setting requires some infrastructure prerequisites. At the most basic level, Boundary operators should run a minimum of 3 controllers and 3 workers. Running 3 of each server type gives a fundamental level of high availability for the control plane (controllers), as well as bandwidth for the number of sessions on the data plane (workers). Both server types should be run in a fault-tolerant setting, that is, in a self-healing environment such as an auto-scaling group. The documentation here does not cover self-healing infrastructure and assumes the operator has their preferred scheduling methods for these environments.

Network Requirements

  • Clients must have access to controllers on port :9200
  • Port :9201 must be open between workers and controllers
  • Workers must have a route and port access to the targets which they service
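On AWS, the port requirements above map naturally onto security group rules. The following is a sketch only; the resource names (`aws_security_group.controller`, `aws_security_group.worker`) and CIDR ranges are illustrative assumptions, not part of the reference architecture:

```hcl
# Illustrative Terraform sketch of the port rules above.
# Security group names and CIDR blocks are assumptions for this example.

# Clients -> controller API on :9200
resource "aws_security_group_rule" "client_to_controller_api" {
  type              = "ingress"
  from_port         = 9200
  to_port           = 9200
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/8"] # restrict to your client networks
  security_group_id = aws_security_group.controller.id
}

# Workers -> controller cluster port on :9201
resource "aws_security_group_rule" "worker_to_controller_cluster" {
  type                     = "ingress"
  from_port                = 9201
  to_port                  = 9201
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.worker.id
  security_group_id        = aws_security_group.controller.id
}
```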

Architecture

The general architecture for the server infrastructure requires 3 controllers and 3 workers. The documentation here uses virtual machines running on Amazon EC2 as the example environment, but this architecture can be extrapolated to almost any cloud platform to suit operator needs:

[Figure 1: Production reference architecture]

As shown above, Boundary is broken up into its controller and worker server components across 3 EC2 instances, in 3 separate subnets, in 3 separate availability zones, with the controller API and UI being publicly exposed by an application load balancer (ALB). The worker and controller VMs are in independent auto-scaling groups, allowing them to maintain their exact capacity.

Boundary requires an external Postgres database and a KMS. In the example above, we're using AWS managed services for these components: RDS for Postgres, and Amazon's Key Management Service for the KMS.
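As the `database` block in the controller configuration below notes, the connection URL can also be supplied indirectly via `file://` or `env://`, which keeps credentials out of the on-disk configuration file. A sketch, where `BOUNDARY_PG_URL` is an assumed variable name:

```hcl
database {
  # Reads the connection string from the BOUNDARY_PG_URL environment
  # variable instead of embedding credentials in the config file.
  url = "env://BOUNDARY_PG_URL"
}
```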

Architecture Breakdown

API and Console Load Balancer

Load balancing the controllers allows operators to secure ingress to the Boundary system. We recommend placing all Boundary servers in private networks and using load balancing techniques to expose services such as the API and administrative console to public networks. In the production architecture, we recommend load balancing with a layer 7 load balancer and further constraining ingress to that load balancer with layer 4 controls such as security groups or iptables.

For general configuration, we recommend the following:

  • HTTPS listener with valid TLS certificate for the domain it’s serving or TLS passthrough
  • Health check port should use :9200 with TCP protocol

Controller Configuration

When running the Boundary controller as a service, we recommend storing the configuration file at /etc/boundary-controller.hcl. A boundary user and group should exist to manage this configuration file and to further restrict who can read and modify it.

Example controller configuration:

```hcl
# Disable memory lock: https://www.man7.org/linux/man-pages/man2/mlock.2.html
disable_mlock = true

telemetry {
  # Prometheus is not currently implemented
  prometheus_retention_time = "24h"
  disable_hostname          = true
}

# Controller configuration block
controller {
  # This name attr must be unique!
  name = "demo-controller-${count.index}"
  # Description of this controller
  description = "A controller for a demo!"
}

# API listener configuration block
listener "tcp" {
  # Should be the address of the NIC that the controller server will be reached on
  address = "${self.private_ip}:9200"
  # The purpose of this listener block
  purpose = "api"
  # TLS should be enabled for production installs; disabled here for demo purposes
  tls_disable = true
  # Enable CORS for the Admin UI
  cors_enabled         = true
  cors_allowed_origins = ["*"]
}

# Data-plane listener configuration block (used for worker coordination)
listener "tcp" {
  # Should be the IP of the NIC that the worker will connect on
  address = "${self.private_ip}:9201"
  # The purpose of this listener
  purpose = "cluster"
  # TLS should be enabled for production installs; disabled here for demo purposes
  tls_disable = true
}

# Root KMS configuration block: this is the root key for Boundary
# Use a production KMS such as AWS KMS in production installs
kms "aead" {
  purpose   = "root"
  aead_type = "aes-gcm"
  key       = "sP1fnF5Xz85RrXyELHFeZg9Ad2qt4Z4bgNHVGtD6ung="
  key_id    = "global_root"
}

# Worker authorization KMS
# Use a production KMS such as AWS KMS for production installs
# This key is the same key used in the worker configuration
kms "aead" {
  purpose   = "worker-auth"
  aead_type = "aes-gcm"
  key       = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
  key_id    = "global_worker-auth"
}

# Recovery KMS block: configures the recovery key for Boundary
# Use a production KMS such as AWS KMS for production installs
kms "aead" {
  purpose   = "recovery"
  aead_type = "aes-gcm"
  key       = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
  key_id    = "global_recovery"
}

# Database URL for Postgres. This can be a direct "postgres://"
# URL, or it can be "file://" to read the contents of a file to
# supply the URL, or "env://" to name an environment variable
# that contains the URL.
database {
  url = "postgresql://boundary:boundarydemo@${aws_db_instance.boundary.endpoint}/boundary"
}
```
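The `key` values shown above are demo keys and must not be reused in production. If an `aead` KMS must be used rather than an external KMS, a fresh 32-byte base64-encoded key (as the `key` field expects for `aes-gcm`) can be generated like so, a sketch using `openssl`:

```shell
# Generate a random 32-byte (256-bit) key and base64-encode it,
# suitable for the "key" field of a kms "aead" block.
openssl rand -base64 32
```

Each KMS block (root, worker-auth, recovery) should be given its own distinct key.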

Worker Configuration

Example worker configuration:

```hcl
listener "tcp" {
  purpose = "proxy"
  # TLS should be enabled for production installs; disabled here for demo purposes
  tls_disable = true
}

worker {
  # Name attr must be unique
  name        = "demo-worker-${count.index}"
  description = "A default worker created for demonstration"
  controllers = [
    "${aws_instance.controller[0].private_ip}",
    "${aws_instance.controller[1].private_ip}",
    "${aws_instance.controller[2].private_ip}"
  ]
}

# Must be the same key as used in the controller config
kms "aead" {
  purpose   = "worker-auth"
  aead_type = "aes-gcm"
  key       = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
  key_id    = "global_worker-auth"
}
```

The name value must be unique across all workers!

Installation

TYPE below can be either worker or controller:

  1. /etc/boundary-${TYPE}.hcl: Configuration file for the Boundary service. See the example configurations above.

  2. /usr/local/bin/boundary: The Boundary binary. Can be built from https://github.com/hashicorp/boundary or downloaded from our release pages.

  3. /etc/systemd/system/boundary-${TYPE}.service: Systemd unit file for the Boundary service. Example:

```ini
[Unit]
Description=${NAME} ${TYPE}

[Service]
ExecStart=/usr/local/bin/${NAME} ${TYPE} -config /etc/${NAME}-${TYPE}.hcl
User=boundary
Group=boundary
LimitMEMLOCK=infinity
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK

[Install]
WantedBy=multi-user.target
```

Here's a simple install script that creates the boundary group and user, installs the systemd unit file, and enables it at startup:

```shell
#!/bin/bash
# Installs Boundary as a service for systemd on Linux
# Usage: ./install.sh <worker|controller>

TYPE=$1
NAME=boundary

# Use tee so the unit file is written with root privileges; a plain
# "sudo cat << EOF > file" would perform the redirection as the
# unprivileged shell and fail.
sudo tee /etc/systemd/system/${NAME}-${TYPE}.service > /dev/null << EOF
[Unit]
Description=${NAME} ${TYPE}

[Service]
ExecStart=/usr/local/bin/${NAME} ${TYPE} -config /etc/${NAME}-${TYPE}.hcl
User=boundary
Group=boundary
LimitMEMLOCK=infinity
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK

[Install]
WantedBy=multi-user.target
EOF

# Add the boundary system user and group to ensure we have a no-login
# user capable of owning and running Boundary
sudo adduser --system --group boundary || true
sudo chown boundary:boundary /etc/${NAME}-${TYPE}.hcl
sudo chown boundary:boundary /usr/local/bin/boundary

# Make sure to initialize the DB before starting the service. This will print
# a "database already initialized" warning if another controller has done this
# already, making it a lazy, best-effort initialization
if [ "${TYPE}" = "controller" ]; then
  sudo /usr/local/bin/boundary database init -config /etc/${NAME}-${TYPE}.hcl || true
fi

sudo chmod 664 /etc/systemd/system/${NAME}-${TYPE}.service
sudo systemctl daemon-reload
sudo systemctl enable ${NAME}-${TYPE}
sudo systemctl start ${NAME}-${TYPE}
```

Postgres Configuration

TBD

KMS Configuration

TBD