Establish cluster peering connections on Kubernetes

This page details the process for establishing a cluster peering connection between services in a Consul on Kubernetes deployment.

The overall process for establishing a cluster peering connection consists of the following steps:

  1. Create a peering token in one cluster.
  2. Use the peering token to establish peering with a second cluster.
  3. Export services between clusters.
  4. Create intentions to authorize services for peers.

Cluster peering between services cannot be established until all four steps are complete. If you want to establish cluster peering connections and create sameness groups at the same time, refer to the guidance in create sameness groups.

For general guidance on establishing cluster peering connections, refer to Establish cluster peering connections.

Prerequisites

You must meet the following requirements to use Consul’s cluster peering features with Kubernetes:

  • Consul v1.14.1 or higher
  • Consul on Kubernetes v1.0.0 or higher
  • At least two Kubernetes clusters

In Consul on Kubernetes, peers identify each other using the metadata.name values you establish when creating the PeeringAcceptor and PeeringDialer CRDs. For additional information about requirements for cluster peering on Kubernetes deployments, refer to Cluster peering on Kubernetes technical specifications.

Assign cluster IDs to environment variables

After you provision a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters, you can assign your clusters to environment variables for future use.

  1. Get the context names for your Kubernetes clusters using one of these methods:

    • Run the kubectl config current-context command to get the context for the cluster you are currently in.
    • Run the kubectl config get-contexts command to get all configured contexts in your kubeconfig file.
  2. Export the Kubernetes context names to environment variables. For more information on how to use kubeconfig and contexts, refer to the Kubernetes docs on configuring access to multiple clusters.

    $ export CLUSTER1_CONTEXT=<CONTEXT for first Kubernetes cluster>
    $ export CLUSTER2_CONTEXT=<CONTEXT for second Kubernetes cluster>
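
To confirm that each environment variable points at the intended cluster, you can run a read-only command against both contexts, for example:

    $ kubectl --context $CLUSTER1_CONTEXT get nodes
    $ kubectl --context $CLUSTER2_CONTEXT get nodes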

Install Consul using Helm and configure peering over mesh gateways

To use cluster peering with Consul on Kubernetes deployments, update the Helm chart with the required values. After updating the Helm chart, you can use the consul-k8s CLI or the helm CLI to apply values.yaml to each cluster; the following steps use helm.
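
Cluster peering requires peering, TLS, and mesh gateways to be enabled in the Helm values. The following values.yaml is a minimal sketch under those assumptions, not an authoritative list of required settings; refer to the Helm chart reference for your chart version.

    values.yaml

    global:
      name: consul
      peering:
        enabled: true
      tls:
        enabled: true
    connectInject:
      enabled: true
    meshGateway:
      enabled: true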

  1. In cluster-01, run the following commands:

    $ export HELM_RELEASE_NAME1=cluster-01

    $ helm install ${HELM_RELEASE_NAME1} hashicorp/consul --create-namespace --namespace consul --version "1.2.0" --values values.yaml --set global.datacenter=dc1 --kube-context $CLUSTER1_CONTEXT
  2. In cluster-02, run the following commands:

    $ export HELM_RELEASE_NAME2=cluster-02

    $ helm install ${HELM_RELEASE_NAME2} hashicorp/consul --create-namespace --namespace consul --version "1.2.0" --values values.yaml --set global.datacenter=dc2 --kube-context $CLUSTER2_CONTEXT
  3. In both clusters, apply the Mesh configuration entry values provided in Mesh Gateway Specifications to allow peering connections to be established over mesh gateways. A sketch of this configuration entry appears after this list.
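
The Mesh configuration entry that enables peering through mesh gateways looks roughly like the following sketch; treat it as an illustration and use the values in Mesh Gateway Specifications as the authoritative source. The file name mesh.yaml is illustrative. Apply the resource to both clusters with kubectl, using the appropriate --context.

    mesh.yaml

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: Mesh
    metadata:
      name: mesh
    spec:
      peering:
        peerThroughMeshGateways: true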

Configure the mesh gateway mode for traffic between services

In Kubernetes deployments, you can configure mesh gateways to use local mode so that a service dialing a service in a remote peer dials the local mesh gateway instead of the remote mesh gateway. To configure the mesh gateway mode so that this traffic always leaves through the local mesh gateway, use the ProxyDefaults CRD.

  1. In cluster-01, apply the following ProxyDefaults CRD to configure the mesh gateway mode.

    proxy-defaults.yaml

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ProxyDefaults
    metadata:
      name: global
    spec:
      meshGateway:
        mode: local

    $ kubectl --context $CLUSTER1_CONTEXT apply -f proxy-defaults.yaml
  2. In cluster-02, apply the following ProxyDefaults CRD to configure the mesh gateway mode.

    proxy-defaults.yaml

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ProxyDefaults
    metadata:
      name: global
    spec:
      meshGateway:
        mode: local

    $ kubectl --context $CLUSTER2_CONTEXT apply -f proxy-defaults.yaml
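
After both ProxyDefaults resources are applied, you can optionally confirm that they synced to Consul. This is a minimal check; the SYNCED column comes from the consul-k8s controller, and the exact output depends on your consul-k8s version.

    $ kubectl --context $CLUSTER1_CONTEXT get proxydefaults global
    $ kubectl --context $CLUSTER2_CONTEXT get proxydefaults global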

Create a peering token

To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection.

Every time you generate a peering token, a single-use secret for establishing the peering connection is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections.

  1. In cluster-01, create the PeeringAcceptor custom resource. To ensure cluster peering connections are secure, the metadata.name field cannot be duplicated. Refer to the peer by a specific name.

    acceptor.yaml

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: PeeringAcceptor
    metadata:
      name: cluster-02 ## The name of the peer you want to connect to
    spec:
      peer:
        secret:
          name: "peering-token"
          key: "data"
          backend: "kubernetes"
  2. Apply the PeeringAcceptor resource to the first cluster.

    $ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml
  3. Save your peering token so that you can export it to the other cluster.

    $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml
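
Optionally, you can confirm that the PeeringAcceptor reconciled and generated the token before exporting it. This is a minimal check; the columns shown depend on your consul-k8s version.

    $ kubectl --context $CLUSTER1_CONTEXT get peeringacceptors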

Establish a connection between clusters

Next, use the peering token to establish a secure connection between the clusters.

  1. Apply the peering token to the second cluster.

    $ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml
  2. In cluster-02, create the PeeringDialer custom resource. To ensure cluster peering connections are secure, the metadata.name field cannot be duplicated. Refer to the peer by a specific name.

    dialer.yaml

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: PeeringDialer
    metadata:
      name: cluster-01 ## The name of the peer you want to connect to
    spec:
      peer:
        secret:
          name: "peering-token"
          key: "data"
          backend: "kubernetes"
  3. Apply the PeeringDialer resource to the second cluster.

    $ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml
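
Optionally, you can check that the dialer reconciled and that the peering is active from cluster-02. The following sketch assumes the Consul server pod is named consul-server-0 in the consul namespace; the actual pod name depends on your Helm release name and values, and the command requires an ACL token if ACLs are enabled.

    $ kubectl --context $CLUSTER2_CONTEXT get peeringdialers
    $ kubectl --context $CLUSTER2_CONTEXT exec --namespace consul -it consul-server-0 -- consul peering list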

Export services between clusters

After you establish a connection between the clusters, you need to create an exported-services CRD that defines the services that are available to another admin partition.

While the CRD can target admin partitions either locally or remotely, cluster peering always exports services to remote admin partitions. Refer to exported service consumers for more information.

  1. For the service in cluster-02 that you want to export, add the "consul.hashicorp.com/connect-inject": "true" annotation to your service's pods prior to deploying. The annotation allows the workload to join the mesh, and it is shown in the following example:

    backend.yaml

    # Service to expose backend
    apiVersion: v1
    kind: Service
    metadata:
      name: backend
    spec:
      selector:
        app: backend
      ports:
        - name: http
          protocol: TCP
          port: 80
          targetPort: 9090
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: backend
    ---
    # Deployment for backend
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: backend
      labels:
        app: backend
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: backend
      template:
        metadata:
          labels:
            app: backend
          annotations:
            "consul.hashicorp.com/connect-inject": "true"
        spec:
          serviceAccountName: backend
          containers:
            - name: backend
              image: nicholasjackson/fake-service:v0.22.4
              ports:
                - containerPort: 9090
              env:
                - name: "LISTEN_ADDR"
                  value: "0.0.0.0:9090"
                - name: "NAME"
                  value: "backend"
                - name: "MESSAGE"
                  value: "Response from backend"
  2. Deploy the backend service to the second cluster.

    $ kubectl --context $CLUSTER2_CONTEXT apply --filename backend.yaml
  3. In cluster-02, create an ExportedServices custom resource. The name of the peer that consumes the service should be identical to the name set in the PeeringDialer CRD.

    exported-service.yaml

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ExportedServices
    metadata:
      name: default ## The name of the partition containing the service
    spec:
      services:
        - name: backend ## The name of the service you want to export
          consumers:
            - peer: cluster-01 ## The name of the peer that receives the service
  4. Apply the ExportedServices resource to the second cluster.

    $ kubectl --context $CLUSTER2_CONTEXT apply --filename exported-service.yaml
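
Optionally, you can confirm that the ExportedServices resource synced to Consul. This is a minimal check under the assumption that the resource was created in the default Kubernetes namespace.

    $ kubectl --context $CLUSTER2_CONTEXT get exportedservices default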

Authorize services for peers

Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters.

  1. Create service intentions for the second cluster. The name of the peer should match the name set in the PeeringDialer CRD.

    intention.yaml

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ServiceIntentions
    metadata:
      name: backend-deny
    spec:
      destination:
        name: backend
      sources:
        - name: "*"
          action: deny
        - name: frontend
          action: allow
          peer: cluster-01 ## The peer of the source service
  2. Apply the intentions to the second cluster.

    $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yaml
  3. Add the "consul.hashicorp.com/connect-inject": "true" annotation to your service's pods before deploying the workload so that services in cluster-01 can dial backend in cluster-02. To dial the upstream service from an application, configure the application so that requests are sent to the correct DNS name, as specified in Service Virtual IP Lookups. In the following example, the annotation allows the workload to join the mesh, and the UPSTREAM_URIS environment variable configures the workload to dial the upstream service using the correct DNS name. Service Virtual IP Lookups for Consul Enterprise details how to format a similar DNS name that includes partitions and namespaces.

    frontend.yaml

    # Service to expose frontend
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
    spec:
      selector:
        app: frontend
      ports:
        - name: http
          protocol: TCP
          port: 9090
          targetPort: 9090
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: frontend
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend
      labels:
        app: frontend
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
          annotations:
            "consul.hashicorp.com/connect-inject": "true"
        spec:
          serviceAccountName: frontend
          containers:
            - name: frontend
              image: nicholasjackson/fake-service:v0.22.4
              securityContext:
                capabilities:
                  add: ["NET_ADMIN"]
              ports:
                - containerPort: 9090
              env:
                - name: "LISTEN_ADDR"
                  value: "0.0.0.0:9090"
                - name: "UPSTREAM_URIS"
                  value: "http://backend.virtual.cluster-02.consul"
                - name: "NAME"
                  value: "frontend"
                - name: "MESSAGE"
                  value: "Hello World"
                - name: "HTTP_CLIENT_KEEP_ALIVES"
                  value: "false"
  4. Apply the service file to the first cluster.

    $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml
  5. Run the following command against the frontend pod and check the output to confirm that you peered your clusters successfully.

    $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090

    {
      "name": "frontend",
      "uri": "/",
      "type": "HTTP",
      "ip_addresses": [
        "10.16.2.11"
      ],
      "start_time": "2022-08-26T23:40:01.167199",
      "end_time": "2022-08-26T23:40:01.226951",
      "duration": "59.752279ms",
      "body": "Hello World",
      "upstream_calls": {
        "http://backend.virtual.cluster-02.consul": {
          "name": "backend",
          "uri": "http://backend.virtual.cluster-02.consul",
          "type": "HTTP",
          "ip_addresses": [
            "10.32.2.10"
          ],
          "start_time": "2022-08-26T23:40:01.223503",
          "end_time": "2022-08-26T23:40:01.224653",
          "duration": "1.149666ms",
          "headers": {
            "Content-Length": "266",
            "Content-Type": "text/plain; charset=utf-8",
            "Date": "Fri, 26 Aug 2022 23:40:01 GMT"
          },
          "body": "Response from backend",
          "code": 200
        }
      },
      "code": 200
    }

Authorize service reads with ACLs

If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access.

Read access to all imported services is granted using either of the following rules associated with an ACL token:

  • service:write permissions for any service in the sidecar’s partition.
  • service:read and node:read for all services and nodes, respectively, in the sidecar's namespace and partition.
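
For example, under the assumption that you manage ACL objects with the Consul CLI, a policy like the following sketch grants the service and node read permissions described above. The policy and file names are illustrative, and you still need to attach the policy to the token used by the sidecar proxy.

    rules.hcl

    service_prefix "" {
      policy = "read"
    }
    node_prefix "" {
      policy = "read"
    }

    $ consul acl policy create -name "imported-services-read" -rules @rules.hcl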

For Consul Enterprise, the permissions apply to all imported services in the service’s partition. These permissions are satisfied when using a service identity.

Refer to Reading services in the exported-services configuration entry documentation for example rules.

For additional information about how to configure and use ACLs, refer to ACLs system overview.