Consul Enterprise Admin Partitions

Enterprise

This feature requires version 1.11.0+ of HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. Refer to the enterprise feature matrix for additional information.

This topic provides an overview of admin partitions, which are entities that define one or more administrative boundaries for single Consul deployments.

Introduction

Admin partitions exist a level above namespaces in the identity hierarchy. They contain one or more namespaces and allow multiple independent tenants to share a Consul server cluster. As a result, admin partitions enable you to define administrative and communication boundaries between services managed by separate teams or belonging to separate stakeholders. They can also segment production and non-production services within the Consul deployment.

Preexisting resource nodes and namespaces: Admin partitions were introduced in Consul 1.11. Resource nodes were not namespaced prior to 1.11. After upgrading to Consul 1.11 or later, all resource nodes will be namespaced.

There are Learn tutorials available to help you get started with admin partitions.

Default Admin Partition

Each Consul cluster will have a default admin partition named default. The default partition must contain the Consul servers. The default admin partition is different from other partitions that may be created because the namespaces and resources in this partition are replicated between datacenters when they are federated.

Any resource created without specifying an admin partition will inherit the partition of the ACL token used to create the resource.

Preexisting resources and the default partition: Admin partitions were introduced in Consul 1.11. After upgrading to Consul 1.11 or later, the default partition will contain all resources created in previous versions.

Naming Admin Partitions

Only characters that are valid in DNS names can be used to name admin partitions. Names must also begin with a lowercase letter.

Namespaces

When an admin partition is created, it will include the default namespace. You can create additional namespaces within the partition. Resources created within a namespace are not shared across partitions.
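
For example, a namespace can be added to a specific partition from the CLI. The following is a minimal sketch; the namespace and partition names are illustrative, and it assumes the -partition flag available on partition-aware CLI commands:

    # Create an "analytics" namespace inside the "team1" partition (illustrative names)
    $ consul namespace create -name "analytics" -partition "team1"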

Cross-datacenter Replication

Only resources in the default admin partition will be replicated to secondary datacenters (also see Known Limitations).

DNS Queries

Client agents will be configured to operate within a specific admin partition. The DNS interface will only return results for the admin partition within the scope of the client.
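
As a rough sketch, an explicit partition-scoped lookup might take the following form, assuming the canonical .ap. DNS label and an illustrative service named web registered in the team1 partition and default namespace:

    # Illustrative query against a client agent scoped to the team1 partition
    $ dig @127.0.0.1 -p 8600 web.service.default.ns.team1.ap.dc1.dc.consul SRV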

Service Mesh Configurations

The partition in which proxy-defaults and mesh configurations are created defines the scope of the configurations. Services registered in a partition will use the proxy-defaults and mesh configurations that have been created in the partition.
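
For illustration, a minimal proxy-defaults entry written into a specific partition might look like the following sketch. The team1 partition name is illustrative, and the Partition field assumes the Enterprise metadata fields accepted by configuration entries:

    Kind      = "proxy-defaults"
    Name      = "global"
    Partition = "team1" # entry applies only to services registered in team1 (illustrative)
    Config {
      protocol = "http"
    }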

Cross-partition Networking

You can configure services to be discoverable by downstream services in any partition within the datacenter. Specify the upstream services that you want to be available for discovery by configuring the exported-services configuration entry in the partition where the services are registered. Refer to the exported-services documentation for details. Additionally, the upstreams configuration for proxies in the source partition must specify the name of the destination partition so that listeners can be created. Refer to the Upstream Configuration Reference for additional information.
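
As a hedged sketch, an exported-services entry in the partition where a service is registered could make it discoverable to downstreams in another partition. The service and partition names below are illustrative, and the exact schema should be confirmed against the exported-services documentation:

    Kind = "exported-services"
    Name = "team1" # partition that owns the exported service (illustrative)
    Services = [
      {
        Name      = "web"     # upstream service to expose (illustrative)
        Namespace = "default"
        Consumers = [
          {
            Partition = "team2" # downstream partition allowed to discover web (illustrative)
          }
        ]
      }
    ]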

Requirements

Your Consul configuration must meet the following requirements to use admin partitions.

Versions

  • Consul 1.11.1 and newer

General Networking Requirements

All Consul clients must be able to initiate Gossip, HTTPS, and RPC connections to the servers. All servers must also be able to initiate Gossip connections to the clients.

For Consul on Kubernetes, a dedicated partition Kubernetes LoadBalancer service is deployed to allow communication from clients to servers for admin partitions support (refer to Kubernetes Requirements for additional information).

For other runtimes, refer to the documentation for your infrastructure environment for instructions on how to allow communication on the following ports:

  • 8300 (RPC)
  • 8301 (Gossip)
  • 443 (HTTPS API requests)

Security Configurations

  • The agent token used by the client agent must allow node:write in the admin partition (see the policy sketch after this list).
  • The write permission for proxy-defaults requires mesh:write. See Admin Partition Rules for additional information.
  • The write permissions for ingress and terminating gateways require mesh:write privileges.
  • Wildcards (*) are not supported for the partition field when creating intentions for admin partitions. The partition name must be explicitly specified.
  • With the exception of the default admin partition, ACL rules configured for admin partitions are isolated, so policies defined in partitions outside of the default partition can only reference their local partition.
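
The following is a minimal sketch of a partition-scoped policy that could back a client agent token. It assumes the partition-prefixed rule syntax available when defining policies from the default partition; the partition name is illustrative:

    # Illustrative policy granting node registration rights within the team1 partition
    partition "team1" {
      node_prefix "" {
        policy = "write"
      }
    }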

Agent Configurations

  • The admin partition name should be specified in client agent configurations, for example (a fuller agent configuration sketch follows this list):

    partition = "<NAME>"
  • The anti-entropy sync will use the configured admin partition name when registering the node.
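
A minimal sketch of a client agent configuration that joins the servers and registers into a named partition. The datacenter, partition name, and retry_join address are illustrative placeholders:

    # client-agent.hcl (illustrative)
    datacenter = "dc1"
    partition  = "team1"       # admin partition this client belongs to
    retry_join = ["10.0.0.10"] # address of a Consul server (placeholder)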

Kubernetes Requirements

One of the primary use cases for admin partitions is for enabling a service mesh across multiple Kubernetes clusters. The following requirements must be met to create admin partitions on Kubernetes:

  • If you are deploying Consul servers on Kubernetes, then ensure that the Consul servers are deployed within the same Kubernetes cluster. Consul servers may be deployed external to Kubernetes and configured using the externalServers stanza.
  • Consul clients deployed on the same Kubernetes cluster as the Consul Servers must use the default partition. If the clients are required to run on a non-default partition, then the clients must be deployed in a separate Kubernetes cluster.
  • A Consul Enterprise license must be installed on each Kubernetes cluster.
  • The consul-k8s Helm chart v0.39.0 or greater.
  • Consul 1.11.1-ent or greater.
  • A designated Kubernetes LoadBalancer service must be exposed on the Consul server cluster. This enables the following communication channels to the Consul servers:
    • RPC on port 8300
    • Gossip on port 8301
    • HTTPS API requests on port 443
  • Mesh gateways must be deployed as a Kubernetes LoadBalancer service on port 443 across all Kubernetes clusters (see the Helm values sketch after this list).
  • Cross-partition networking must be implemented as described in Cross-Partition Networking.
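
A minimal sketch of Helm values that could satisfy the mesh gateway requirement. The service keys shown assume the consul-k8s chart's meshGateway options and should be checked against the Helm Chart Configuration reference:

    # Illustrative Helm values for exposing mesh gateways
    meshGateway:
      enabled: true
      service:
        type: LoadBalancer
        port: 443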

Usage

This section describes how to deploy Consul admin partitions to Kubernetes clusters. Refer to the admin partition CLI documentation for information about command line usage.
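
For a quick illustration of the CLI, a partition can be created directly against the servers. The partition name is illustrative, and an ACL token with permission to manage partitions is assumed:

    # Illustrative: requires a token permitted to manage partitions
    $ consul partition create -name "team1"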

Deploying Consul with Admin Partitions on Kubernetes

The expected use case is to create admin partitions on Kubernetes clusters. Many organizations prefer to use cloud-managed Kubernetes offerings to provision separate Kubernetes clusters for individual teams, business units, or environments, rather than deploying a single, large Kubernetes cluster. Organizations encounter problems, however, when they attempt to use a service mesh to enable multi-cluster use cases, such as administration tasks and communication between nodes.

The following procedure will result in an admin partition in each Kubernetes cluster. The Consul clients running in the cluster with servers will be in the default partition. Another partition called clients will also be created.

Prepare to install Consul across multiple Kubernetes clusters

Verify that your Consul deployment meets the Kubernetes Requirements before proceeding.

  1. Verify that your VPC is configured to enable connectivity between the pods running Consul clients and servers. Refer to your virtual cloud provider’s documentation for instructions on configuring network connectivity.

  2. Set environment variables to use with shell commands.

    $ export HELM_RELEASE_SERVER=server
    $ export HELM_RELEASE_CLIENT=client
    $ export SERVER_CONTEXT=<context for server, run `kubectl config current-context` for cluster provisioned for servers>
    $ export CLIENT_CONTEXT=<context for workload partition, run `kubectl config current-context` for cluster provisioned for workload partition>
  3. Create the license secret in the server cluster.

    $ kubectl create --context ${SERVER_CONTEXT} ns consul
    $ kubectl create secret --context ${SERVER_CONTEXT} --namespace consul generic license --from-file=key=./path/to/license.hclic
  4. Create the license secret in the workload client cluster. This step must be repeated for every additional workload client cluster.

    $ kubectl create --context ${CLIENT_CONTEXT} ns consul
    $ kubectl create secret --context ${CLIENT_CONTEXT} --namespace consul generic license --from-file=key=./path/to/license.hclic

Install the Consul server cluster

  1. Set your context to the server cluster.

    $ kubectl config use-context ${SERVER_CONTEXT}
  2. Create a server configuration values file to override the default Consul Helm chart settings:

    server.yaml

    global:
      enableConsulNamespaces: true
      tls:
        enabled: true
      image: hashicorp/consul-enterprise:1.12.0-ent
      adminPartitions:
        enabled: true
      acls:
        manageSystemACLs: true
      enterpriseLicense:
        secretName: license
        secretKey: key
    server:
      exposeGossipAndRPCPorts: true
    connectInject:
      enabled: true
      consulNamespaces:
        mirroringK8S: true
    controller:
      enabled: true
    meshGateway:
      enabled: true
      replicas: 1
    dns:
      enabled: true
      enableRedirection: true

    Refer to the Helm Chart Configuration reference for details about the parameters you can specify in the file.

  3. Install the Consul server(s) using the values file created in the previous step:

    $ helm install ${HELM_RELEASE_SERVER} hashicorp/consul --version "0.43.0" --create-namespace --namespace consul --values server.yaml
  4. After the server starts, get the external IP address for the partition service so that it can be added to the client configuration. The IP address is used to bootstrap connectivity between servers and clients.

    $ kubectl get services --selector="app=consul,component=server" --namespace consul --output jsonpath="{range .items[*]}{@.status.loadBalancer.ingress[*].ip}{end}"
    34.135.103.67
  5. Get the Kubernetes authentication method URL for the workload cluster:

    $ kubectl config view --output "jsonpath={.clusters[?(@.name=='${CLIENT_CONTEXT}')].cluster.server}"

    Use the IP address printed to the console to configure the k8sAuthMethodHost parameter in the workload configuration file for your client nodes.

  6. Copy the server certificate to the workload cluster.

    $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-cert --context ${SERVER_CONTEXT} --namespace consul --output yaml | kubectl apply --namespace consul --context ${CLIENT_CONTEXT} --filename -
  7. Copy the server key to the workload cluster.

    $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-key --context ${SERVER_CONTEXT} --namespace consul --output yaml | kubectl apply --namespace consul --context ${CLIENT_CONTEXT} --filename -
  8. If ACLs were enabled in the server configuration values file, copy the token to the workload cluster.

    $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-partitions-acl-token --context ${SERVER_CONTEXT} --namespace consul --output yaml | kubectl apply --namespace consul --context ${CLIENT_CONTEXT} --filename -

Install the workload client cluster

  1. Switch to the workload client clusters:

    $ kubectl config use-context ${CLIENT_CONTEXT}
  2. Create the workload configuration for client nodes in your cluster. Create a configuration for each admin partition. In the following example, the external IP address and the Kubernetes authentication method IP address from the previous steps have been applied. Also, ensure that a unique global.name is assigned.

    client.yaml

    global:
      name: client
      enabled: false
      enableConsulNamespaces: true
      image: hashicorp/consul-enterprise:1.12.0-ent
      adminPartitions:
        enabled: true
        name: clients
      tls:
        enabled: true
        caCert:
          secretName: server-consul-ca-cert # See step 6 from `Install Consul server cluster`
          secretKey: tls.crt
        caKey:
          secretName: server-consul-ca-key # See step 7 from `Install Consul server cluster`
          secretKey: tls.key
      acls:
        manageSystemACLs: true
        bootstrapToken:
          secretName: server-consul-partitions-acl-token # See step 8 from `Install Consul server cluster`
          secretKey: token
      enterpriseLicense:
        secretName: license
        secretKey: key
    externalServers:
      enabled: true
      hosts: [34.135.103.67] # See step 4 from `Install Consul server cluster`
      tlsServerName: server.dc1.consul
      k8sAuthMethodHost: https://104.154.156.146 # See step 5 from `Install Consul server cluster`
    client:
      enabled: true
      exposeGossipPorts: true
      join: [34.135.103.67] # See step 4 from `Install Consul server cluster`
    connectInject:
      enabled: true
      consulNamespaces:
        mirroringK8S: true
    controller:
      enabled: true
    meshGateway:
      enabled: true
      replicas: 1
    dns:
      enabled: true
      enableRedirection: true
  3. Install the workload client clusters:

    $ helm install ${HELM_RELEASE_CLIENT} hashicorp/consul --version "0.43.0" --create-namespace --namespace consul --values client.yaml

Verifying the Deployment

You can log into the Consul UI to verify that the partitions appear as expected.

  1. Set your context to the server cluster.

    $ kubectl config use-context ${SERVER_CONTEXT}
  2. If ACLs are enabled, you will need the partitions ACL token, which can be read from the Kubernetes secret. The token is a base64-encoded string that must be decoded, for example:

    $ kubectl get secret --namespace consul ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --template "{{ .data.token | base64decode }}"

    The example command gets the token using the secret name configured in the values file (bootstrap.secretName), decodes the secret, and prints the usable token to the console.

  3. Open the Consul UI in a browser using the external IP address and port number described in a previous step (see step 5).

  4. Click Log in and enter the decoded token when prompted.

You will see the default and clients partitions available in the Admin Partition drop-down menu.
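
You can also verify the partitions from the command line; a hedged sketch, assuming the server's external address from a previous step and the decoded bootstrap token as placeholders:

    # Illustrative: the address and token are placeholders
    $ consul partition list -http-addr=https://34.135.103.67:443 -token=<decoded token>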


Known Limitations

  • Only the default admin partition is supported when federating multiple Consul datacenters in a WAN.
  • Admin partitions have no theoretical limit. We intend to conduct a large-scale test to identify a recommended maximum in the future.