Cluster peering on Kubernetes technical specifications

This reference topic describes the technical specifications associated with using cluster peering in your Kubernetes deployments. These specifications include required Helm values and required custom resource definitions (CRDs), as well as required Consul components and their configurations. To learn more about Consul’s cluster peering feature, refer to cluster peering overview.

For cluster peering requirements in non-Kubernetes deployments, refer to cluster peering technical specifications.

General requirements

Make sure your Consul environment meets the following prerequisites:

  • Consul v1.14 or higher
  • Consul on Kubernetes v1.0.0 or higher
  • At least two Kubernetes clusters

You must also configure the service mesh components described in the following sections in order to establish cluster peering connections.

Helm specifications

Consul’s default configuration supports cluster peering connections directly between clusters. In production environments, we recommend using mesh gateways to securely route service mesh traffic between partitions with cluster peering connections. To enable mesh gateways, set the values shown in the following example Helm configuration:


values.yaml

```yaml
global:
  name: consul
  image: "hashicorp/consul:1.16.0"
  peering:
    enabled: true
  tls:
    enabled: true
meshGateway:
  enabled: true
```

After mesh gateways are enabled in the Helm chart, you can separately configure the Mesh CRD.

CRD specifications

You must create the following CRDs in order to establish a peering connection:

  • PeeringAcceptor: Generates a peering token and accepts an incoming peering connection.
  • PeeringDialer: Uses a peering token to make an outbound peering connection with the cluster that generated the token.

Refer to the following example CRDs:


acceptor.yaml

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
  name: cluster-02 ## The name of the peer you want to connect to
spec:
  peer:
    secret:
      name: "peering-token"
      key: "data"
      backend: "kubernetes"
```


dialer.yaml

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringDialer
metadata:
  name: cluster-01 ## The name of the peer you want to connect to
spec:
  peer:
    secret:
      name: "peering-token"
      key: "data"
      backend: "kubernetes"
```
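The PeeringAcceptor stores the generated peering token in the Kubernetes secret named in its `spec.peer.secret` block, and the PeeringDialer expects that same secret to exist in its own cluster. As a sketch, the secret can be copied between clusters with `kubectl` (assuming the `$CLUSTER1_CONTEXT` and `$CLUSTER2_CONTEXT` variables used later on this page):

```shell
$ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml \
    | kubectl --context $CLUSTER2_CONTEXT apply --filename -
```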

Mesh gateway specifications

To change Consul’s default configuration and enable cluster peering through mesh gateways, use a mesh configuration entry to update your network’s service mesh proxies globally:

  1. In cluster-01, create the Mesh custom resource with peerThroughMeshGateways set to true.


    mesh.yaml

    ```yaml
    apiVersion: consul.hashicorp.com/v1alpha1
    kind: Mesh
    metadata:
      name: mesh
    spec:
      peering:
        peerThroughMeshGateways: true
    ```
  2. Apply the mesh CRD to cluster-01.

    ```shell
    $ kubectl --context $CLUSTER1_CONTEXT apply -f mesh.yaml
    ```
  3. Apply the mesh CRD to cluster-02.

    ```shell
    $ kubectl --context $CLUSTER2_CONTEXT apply -f mesh.yaml
    ```


Note

For help setting up the cluster context variables used in this example, refer to assign cluster IDs to environmental variables.

When cluster peering through mesh gateways, consider the following deployment requirements:

  • A Consul cluster requires a registered mesh gateway in order to export services to peers in other regions or cloud providers.
  • The mesh gateway must also be registered in the same admin partition as the exported services and their exported-services configuration entry. An enterprise license is required to use multiple admin partitions with a single cluster of Consul servers.
  • To use the local mesh gateway mode, you must register a mesh gateway in the importing cluster.
  • Define the Proxy.Config settings using opaque parameters compatible with your proxy. For additional Envoy proxy configuration information, refer to Gateway options and Escape-hatch overrides.

Mesh gateway modes

By default, all cluster peering connections use mesh gateways in remote mode. Be aware of these additional requirements when changing a mesh gateway’s mode.

  • For mesh gateways that connect peered clusters, you can set the mode as either remote or local.
  • The none mode is invalid for mesh gateways with cluster peering connections.

To learn how to change the mesh gateway mode to local on your Kubernetes deployment, refer to configure the mesh gateway mode for traffic between services.
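On Kubernetes, the gateway mode is typically set globally through a ProxyDefaults custom resource. The following is a minimal sketch that switches peered traffic to local mode; refer to the linked guide for the full procedure:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global   ## ProxyDefaults must be named "global"
spec:
  meshGateway:
    mode: local  ## valid modes for peered clusters: "local" or "remote"
```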

Exported service specifications

The exported-services CRD is required in order for services to communicate across partitions with cluster peering connections. Basic guidance on using the exported-services configuration entry is included in Establish cluster peering connections.

Refer to exported-services configuration entry for more information.
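As an illustration, the following ExportedServices resource sketches how a cluster might export a service to a peer. The service name `backend` is hypothetical; the `peer` value must match the peer name used in your PeeringAcceptor or PeeringDialer:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ExportedServices
metadata:
  name: default  ## Must match the partition name; "default" without Enterprise partitions
spec:
  services:
    - name: backend          ## hypothetical service to export
      consumers:
        - peer: cluster-02   ## the peer allowed to consume the service
```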

ACL specifications

If ACLs are enabled, you must add tokens to grant the following permissions:

  • Grant service:write permissions to services that define mesh gateways in their service definition.
  • Grant service:read permissions for all services on the partition.
  • Grant mesh:write permissions to the mesh gateways that participate in cluster peering connections. This permission allows a leaf certificate to be issued for mesh gateways to terminate TLS sessions for HTTP requests.
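The permissions above can be combined into a single ACL policy attached to the mesh gateway's token. A sketch in Consul's HCL policy syntax, assuming the gateway is registered under the default service name `mesh-gateway`:

```hcl
## Allow the gateway to register itself
service "mesh-gateway" {
  policy = "write"
}

## Allow reading all services in the partition
service_prefix "" {
  policy = "read"
}

## Allow a leaf certificate to be issued for terminating TLS sessions
mesh = "write"
```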