Deploy Single Consul Datacenter Across Multiple Kubernetes Clusters

Note: When running Consul across multiple Kubernetes clusters, we recommend using admin partitions for production environments. This Consul Enterprise feature allows you to accommodate multiple tenants without resource collisions when administering a cluster at scale. Admin partitions also enable you to run Consul on Kubernetes clusters across a non-flat network.

This page describes deploying a single Consul datacenter in multiple Kubernetes clusters, with servers running in one cluster and only Consul on Kubernetes components in the rest of the clusters. This example uses two Kubernetes clusters, but the approach can be extended to more than two.

Requirements

  • consul-k8s v1.0.x or higher, and Consul 1.14.x or higher
  • Kubernetes clusters must be able to communicate over LAN on a flat network.
  • Either the Helm release name for each Kubernetes cluster must be unique, or global.name for each Kubernetes cluster must be unique to prevent collisions of ACL resources with the same prefix.

Prepare Helm release name ahead of installs

The Helm release name must be unique for each Kubernetes cluster. The Helm chart uses the Helm release name as a prefix for the ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases are identical, or if global.name for each cluster is identical, subsequent Consul on Kubernetes clusters will overwrite existing ACL resources and cause the clusters to fail.

Before proceeding with installation, prepare the Helm release names as environment variables for the server install and for each additional cluster install.

  $ export HELM_RELEASE_SERVER=server
  $ export HELM_RELEASE_CONSUL=consul
  ...
  $ export HELM_RELEASE_CONSUL2=consul2
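
If you want to double-check that the names you chose do not collide with releases that already exist in either cluster, you can list the current Helm releases first (an optional sanity check, not required by the install):

  $ helm list --all-namespaces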

Deploying Consul servers in the first cluster

First, deploy the first cluster with Consul servers according to the following example Helm configuration.


cluster1-values.yaml

  global:
    datacenter: dc1
    tls:
      enabled: true
      enableAutoEncrypt: true
    acls:
      manageSystemACLs: true
    gossipEncryption:
      secretName: consul-gossip-encryption-key
      secretKey: key
  server:
    exposeService:
      enabled: true
      type: NodePort
      nodePort:
        ## all are random nodePorts and you can set your own
        http: 30010
        https: 30011
        serf: 30012
        rpc: 30013
        grpc: 30014
  ui:
    service:
      type: NodePort

Note that this deploys a secure configuration with gossip encryption, TLS for all components, and ACLs. It also enables Consul service mesh and the CRD controller, which are used later to verify the connectivity of services across clusters.

The UI’s service type is set to NodePort. This is needed to connect to the servers from another cluster without using the pod IPs of the servers, which are likely to change.

The other server ports are exposed as a NodePort service and configured with arbitrary port numbers. In this example, the grpc port is set to 30014, which allows services in other clusters to discover the Consul servers over gRPC.
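
After the helm install step below completes, you can verify which node ports the servers are exposed on. The Helm chart normally names this service <release>-consul-expose-servers, so with the release name server from the earlier export the check would look like the following (confirm the exact service name in your cluster):

  $ kubectl get service server-consul-expose-servers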

To deploy, first generate the Gossip encryption key and save it as a Kubernetes secret.

  $ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(consul keygen)

Now install the Consul cluster with Helm:

  $ helm install ${HELM_RELEASE_SERVER} --values cluster1-values.yaml hashicorp/consul
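
To check that all components from the release are running and ready, you can watch the pods; the app=consul label shown here is the one the Helm chart applies to the pods it creates:

  $ kubectl get pods --selector app=consul --watch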

Once the installation finishes and all components are running and ready, extract the following information with the command below and apply it to the second Kubernetes cluster.

  • The CA certificate generated during installation
  • The ACL bootstrap token generated during installation

  $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml

Deploying Consul on Kubernetes in the second cluster

Note: If multiple Kubernetes clusters will be joined to the Consul Datacenter, then the following instructions will need to be repeated for each additional Kubernetes cluster.

Switch to the second Kubernetes cluster, where the Consul on Kubernetes components that join the first cluster’s servers will be deployed.

  $ kubectl config use-context <K8S_CONTEXT_NAME>

First, apply the credentials extracted from the first cluster to the second cluster:

  $ kubectl apply --filename cluster1-credentials.yaml
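
To confirm that the credentials landed in the second cluster, list the secrets and check that the CA certificate and bootstrap ACL token entries from cluster1-credentials.yaml are present:

  $ kubectl get secrets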

To deploy in the second cluster, use the following example Helm configuration:


cluster2-values.yaml

  global:
    enabled: false
    datacenter: dc1
    acls:
      manageSystemACLs: true
      bootstrapToken:
        secretName: cluster1-consul-bootstrap-acl-token
        secretKey: token
    tls:
      enabled: true
      caCert:
        secretName: cluster1-consul-ca-cert
        secretKey: tls.crt
  externalServers:
    enabled: true
    # This should be any node IP of the first k8s cluster or the load balancer IP if using LoadBalancer service type for the UI.
    hosts: ["10.0.0.4"]
    # The node port of the UI's NodePort service or the load balancer port.
    httpsPort: 31557
    # Matches the gRPC port of the Consul servers in the first cluster.
    grpcPort: 30014
    tlsServerName: server.dc1.consul
    # The address of the kube API server of this Kubernetes cluster
    k8sAuthMethodHost: https://kubernetes.example.com:443
  connectInject:
    enabled: true

Note that the ACL and TLS configuration reference the secrets extracted from the first cluster and applied in the previous step. The secret names are prefixed with the Helm release name used for the first cluster (cluster1 in this example), so adjust them to match your own release name.

The externalServers.hosts and externalServers.httpsPort refer to the IP and port of the UI’s NodePort service deployed in the first cluster. Set the externalServers.hosts to any Node IP of the first cluster, which can be seen by running kubectl get nodes --output wide. Set externalServers.httpsPort to the nodePort of the cluster1-consul-ui service. In our example, the port is 31557.

  $ kubectl get service cluster1-consul-ui --context cluster1
  NAME                 TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
  cluster1-consul-ui   NodePort   10.0.240.80   <none>        443:31557/TCP   40h
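
To find a node IP for externalServers.hosts, you can also query it directly. The following is a sketch that prints the internal IP of the first node in the first cluster; the cluster1 context name is an assumption, so use whatever your kubeconfig calls that cluster:

  $ kubectl get nodes --context cluster1 \
      --output jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'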

The grpcPort: 30014 configuration refers to the gRPC port number specified in the NodePort configuration in the first cluster.

Set externalServers.tlsServerName to server.dc1.consul. This is the DNS SAN (Subject Alternative Name) that is present in the Consul server’s certificate. It is required because the connection to the Consul servers uses the node IP, and that IP is not present in the server’s certificate. To make sure that hostname verification succeeds during the TLS handshake, set the TLS server name to a DNS name that is present in the certificate.
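
If you want to confirm which DNS SANs the server certificate actually contains, one way is to inspect the certificate presented on the exposed HTTPS endpoint. This is a sketch using the example node IP and UI node port from above; substitute your own values:

  $ echo | openssl s_client -connect 10.0.0.4:31557 2>/dev/null | \
      openssl x509 -noout -text | grep -A1 'Subject Alternative Name'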

Next, set externalServers.k8sAuthMethodHost to the address of the second cluster's Kubernetes API server. This must be an address that is reachable from the first cluster, so it cannot be the internal DNS name that is only resolvable within each Kubernetes cluster. Consul needs it so that consul login with the Kubernetes auth method works from the second cluster: whenever consul login is called, the Consul servers verify the caller's Kubernetes service account, and to verify service accounts from the second cluster they must be able to reach that cluster's Kubernetes API. The easiest way to find the address is to run kubectl config view and grab the value of cluster.server for the second cluster.
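
For example, with your kubectl context pointed at the second cluster, the following prints that cluster's API server address (--minify restricts the output to the current context):

  $ kubectl config view --minify --output jsonpath='{.clusters[0].cluster.server}'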

Now, proceed with the installation of the second cluster.

  $ helm install ${HELM_RELEASE_CONSUL} --values cluster2-values.yaml hashicorp/consul

Verifying the Consul Service Mesh works

When transparent proxy is enabled, services in one Kubernetes cluster that need to communicate with services in another Kubernetes cluster must have an explicit upstream configured through the consul.hashicorp.com/connect-service-upstreams annotation.
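
The annotation value takes the form <upstream-service>:<local-port>, where the local port is the port the sidecar proxy listens on for that upstream. The static-client manifest later on this page uses exactly this pair of annotations:

  annotations:
    "consul.hashicorp.com/connect-inject": "true"
    "consul.hashicorp.com/connect-service-upstreams": "static-server:1234"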

Now that the Consul datacenter spanning multiple Kubernetes clusters is up and running, deploy two services in separate Kubernetes clusters and verify that they can connect to each other.

First, deploy static-server service in the first cluster:


static-server.yaml

  ---
  apiVersion: consul.hashicorp.com/v1alpha1
  kind: ServiceIntentions
  metadata:
    name: static-server
  spec:
    destination:
      name: static-server
    sources:
      - name: static-client
        action: allow
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: static-server
  spec:
    type: ClusterIP
    selector:
      app: static-server
    ports:
      - protocol: TCP
        port: 80
        targetPort: 8080
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: static-server
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: static-server
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: static-server
    template:
      metadata:
        name: static-server
        labels:
          app: static-server
        annotations:
          "consul.hashicorp.com/connect-inject": "true"
      spec:
        containers:
          - name: static-server
            image: hashicorp/http-echo:latest
            args:
              - -text="hello world"
              - -listen=:8080
            ports:
              - containerPort: 8080
                name: http
        serviceAccountName: static-server

Note that defining a ServiceIntentions resource is required so that our services are allowed to talk to each other; because ACLs are enabled, intentions default to deny.
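
After applying the manifest, you can check that the intention was accepted, since ServiceIntentions is a CRD managed by Consul on Kubernetes; the resource should report as synced:

  $ kubectl get serviceintentions static-server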

Next, deploy static-client in the second cluster with the following configuration:


static-client.yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: static-client
  spec:
    selector:
      app: static-client
    ports:
      - port: 80
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: static-client
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: static-client
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: static-client
    template:
      metadata:
        name: static-client
        labels:
          app: static-client
        annotations:
          "consul.hashicorp.com/connect-inject": "true"
          "consul.hashicorp.com/connect-service-upstreams": "static-server:1234"
      spec:
        containers:
          - name: static-client
            image: curlimages/curl:latest
            command: [ "/bin/sh", "-c", "--" ]
            args: [ "while true; do sleep 30; done;" ]
        serviceAccountName: static-client

Once both services are up and running, try connecting to the static-server from static-client:

  $ kubectl exec deploy/static-client -- curl --silent localhost:1234
  "hello world"

A successful installation returns "hello world" as the output of the curl command above.
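
If the request fails, the sidecar proxy logs on the client pod are a good first place to look. In consul-k8s 1.0.x the injected sidecar container is typically named consul-dataplane, but confirm the container name in your pod spec:

  $ kubectl logs deploy/static-client --container consul-dataplane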