Installing Multicluster

Multicluster support in Linkerd requires extra installation and configuration on top of the default control plane installation. This guide walks through this installation and configuration as well as common problems that you may encounter. For a detailed walkthrough and explanation of what’s going on, check out getting started.

If you’d like to use an existing Ambassador installation, check out the Leverage Ambassador section below. Alternatively, check out the Ambassador documentation for a more detailed explanation of the configuration and what’s going on.

Requirements

  • Two clusters.
  • A control plane installation in each cluster that shares a common trust anchor. If you have an existing installation, see the trust anchor bundle documentation to understand what is required.
  • Each of these clusters should be configured as kubectl contexts.
  • Elevated privileges on both clusters. We’ll be creating service accounts and granting extended privileges, so you’ll need to be able to do that on your test clusters.
  • Support for services of type LoadBalancer in the east cluster. Check out the documentation for your cluster provider or take a look at inlets. This is what the west cluster will use to communicate with east via the gateway.
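Before going further, it can be worth confirming that the two clusters really do share a trust anchor. The following is a sketch, assuming kubectl contexts named west and east (matching the examples in this guide) and the yq tool:

```shell
# Compare the trust anchors stored in each cluster's linkerd-config
# ConfigMap; no output from diff means they match.
diff \
  <(kubectl --context=west -n linkerd get cm linkerd-config \
      -o jsonpath="{.data.values}" | yq -r .global.identityTrustAnchorsPEM) \
  <(kubectl --context=east -n linkerd get cm linkerd-config \
      -o jsonpath="{.data.values}" | yq -r .global.identityTrustAnchorsPEM)
```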

Step 1: Install the multicluster control plane

On each cluster, run:

  linkerd multicluster install | \
    kubectl apply -f -

To verify that everything has started up successfully, run:

  linkerd check --multicluster

For a deep dive into what components are being added to your cluster and how all the pieces fit together, check out the getting started documentation.
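If you’d like a quick look yourself, the components are installed into the linkerd-multicluster namespace by default, and listing it is a harmless sanity check (a sketch, assuming the default namespace):

```shell
# The gateway deployment and its service should be present and ready:
kubectl get deploy,svc -n linkerd-multicluster
```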

Step 2: Link the clusters

Each cluster must be linked. This consists of installing several resources in the source cluster: a secret containing a kubeconfig that allows access to the target cluster’s Kubernetes API, a service mirror controller for mirroring services, and a Link custom resource for holding configuration. To link cluster west to cluster east, you would run:

  linkerd --context=east multicluster link --cluster-name east | \
    kubectl --context=west apply -f -

To verify that the credentials were created successfully and the clusters are able to reach each other, run:

  linkerd --context=west check --multicluster

You should also see the list of gateways show up by running:

  linkerd --context=west multicluster gateways

For a detailed explanation of what this step does, check out the linking the clusters section.
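One useful detail: the configuration produced by the link command is stored in a Link custom resource on the source cluster, so you can inspect (or edit) it directly. A sketch, using the west/east names from the example above:

```shell
# Show the link's configuration: gateway address, ports, probe spec,
# and the label selector used to pick services for mirroring.
kubectl --context=west -n linkerd-multicluster get link east -o yaml
```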

Step 3: Export services

Services are not automatically mirrored in linked clusters. By default, only services with the mirror.linkerd.io/exported label will be mirrored. For each service you would like mirrored to linked clusters, run:

  kubectl label svc foobar mirror.linkerd.io/exported=true
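Mirrored services show up in linked clusters under the name <service>-<cluster>. For instance, if foobar lives in the target cluster east, you can check for its mirror from the source cluster west (a sketch using the example names above):

```shell
# The mirror appears in the source cluster shortly after the label is added:
kubectl --context=west get svc foobar-east
```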

Note

You can configure a different label selector by using the --selector flag on the linkerd multicluster link command, or by editing the Link resource that command creates.
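As a sketch, using an arbitrary label name of your own choosing:

```shell
# Create the link with a custom selector...
linkerd --context=east multicluster link --cluster-name east \
  --selector example.com/mirror=true | \
  kubectl --context=west apply -f -

# ...then export services by applying that label instead of the default one:
kubectl label svc foobar example.com/mirror=true
```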

Leverage Ambassador

The bundled Linkerd gateway is not required. In fact, if you have an existing Ambassador installation, it is easy to use it instead! By using your existing Ambassador installation, you avoid needing to manage multiple ingress gateways and pay for extra cloud load balancers. This guide assumes that Ambassador has been installed into the ambassador namespace.

First, you’ll want to inject the ambassador deployment with Linkerd:

  kubectl -n ambassador get deploy ambassador -o yaml | \
    linkerd inject \
    --skip-inbound-ports 80,443 \
    --require-identity-on-inbound-ports 4183 - | \
    kubectl apply -f -

This will add the Linkerd proxy, skip the ports that Ambassador is handling for public traffic, and require identity on the gateway port. Check out the docs to understand why it is important to require identity on the gateway port.
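To confirm the injection took, you can check that the proxy annotations landed on the pod template (a sketch; the exact annotation set depends on your Linkerd version):

```shell
# The skip-inbound-ports and require-identity annotations should be present:
kubectl -n ambassador get deploy ambassador \
  -o jsonpath='{.spec.template.metadata.annotations}'
```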

Next, you’ll want to add some configuration so that Ambassador knows how to handle requests:

  cat <<EOF | kubectl --context=${ctx} apply -f -
  ---
  apiVersion: getambassador.io/v2
  kind: Module
  metadata:
    name: ambassador
    namespace: ambassador
  spec:
    config:
      add_linkerd_headers: true
  ---
  apiVersion: getambassador.io/v2
  kind: Host
  metadata:
    name: wildcard
    namespace: ambassador
  spec:
    hostname: "*"
    selector:
      matchLabels:
        nothing: nothing
    acmeProvider:
      authority: none
    requestPolicy:
      insecure:
        action: Route
  ---
  apiVersion: getambassador.io/v2
  kind: Mapping
  metadata:
    name: public-health-check
    namespace: ambassador
  spec:
    prefix: /-/ambassador/ready
    rewrite: /ambassador/v0/check_ready
    service: localhost:8877
    bypass_auth: true
  EOF

The Ambassador service and deployment definitions need to be patched slightly to add metadata required by the service mirror controller. To apply these patches, run:

  kubectl --context=${ctx} -n ambassador patch deploy ambassador -p='
  spec:
    template:
      metadata:
        annotations:
          config.linkerd.io/enable-gateway: "true"
  '
  kubectl --context=${ctx} -n ambassador patch svc ambassador --type='json' -p='[
    {"op":"add","path":"/spec/ports/-", "value":{"name": "mc-gateway", "port": 4143}},
    {"op":"replace","path":"/spec/ports/0", "value":{"name": "mc-probe", "port": 80, "targetPort": 8080}}
  ]'
  kubectl --context=${ctx} -n ambassador patch svc ambassador -p='
  metadata:
    annotations:
      mirror.linkerd.io/gateway-identity: ambassador.ambassador.serviceaccount.identity.linkerd.cluster.local
      mirror.linkerd.io/multicluster-gateway: "true"
      mirror.linkerd.io/probe-path: /-/ambassador/ready
      mirror.linkerd.io/probe-period: "3"
  '
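You can double-check that the patches landed before moving on (a sketch, using the same ${ctx} as above):

```shell
# The gateway annotations and the mc-gateway / mc-probe ports should appear:
kubectl --context=${ctx} -n ambassador get svc ambassador \
  -o jsonpath='{.metadata.annotations}{"\n"}{.spec.ports}'
```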

Now you can install the Linkerd multicluster components onto your target cluster. Since we’re using Ambassador as our gateway, we need to skip installing the Linkerd gateway by using the --gateway=false flag:

  linkerd --context=${ctx} multicluster install --gateway=false | kubectl --context=${ctx} apply -f -

With everything set up and configured, you’re ready to link a source cluster to this Ambassador gateway. Run the link command, specifying the name and namespace of your Ambassador service:

  linkerd --context=${ctx} multicluster link --cluster-name=${ctx} --gateway-name=ambassador --gateway-namespace=ambassador \
    | kubectl --context=${src_ctx} apply -f -

From the source cluster (the one not running Ambassador), you can validate that everything is working correctly by running:

  linkerd check --multicluster

Additionally, the Ambassador gateway will show up when listing the active gateways:

  linkerd multicluster gateways

Trust Anchor Bundle

To secure the connections between clusters, Linkerd requires that there is a shared trust anchor. This allows the control plane to encrypt the requests that go between clusters and verify the identity of those requests. This identity is used to control access to clusters, so it is critical that the trust anchor is shared.

The easiest way to do this is to have a single trust anchor certificate shared between multiple clusters. If you have an existing Linkerd installation and have thrown away the trust anchor key, it might not be possible to have a single certificate for the trust anchor. Luckily, the trust anchor can be a bundle of certificates as well!

To fetch your existing cluster’s trust anchor, run:

  kubectl -n linkerd get cm linkerd-config -ojsonpath="{.data.values}" | \
    yq -r .global.identityTrustAnchorsPEM > trustAnchor.crt

Note

This command requires yq. If you don’t have yq, feel free to extract the certificate from the global.identityTrustAnchorsPEM field with your tool of choice.
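For example, if Python with the PyYAML package happens to be available (an assumption, not a requirement), the same extraction looks like:

```shell
kubectl -n linkerd get cm linkerd-config -o jsonpath="{.data.values}" | \
  python3 -c 'import sys, yaml; print(yaml.safe_load(sys.stdin)["global"]["identityTrustAnchorsPEM"])' \
  > trustAnchor.crt
```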

Now, you’ll want to create a new trust anchor and issuer for the new cluster:

  step certificate create identity.linkerd.cluster.local root.crt root.key \
    --profile root-ca --no-password --insecure --san identity.linkerd.cluster.local
  step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
    --profile intermediate-ca --not-after 8760h --no-password --insecure \
    --ca root.crt --ca-key root.key --san identity.linkerd.cluster.local

Note

We use the step CLI to generate certificates. openssl works just as well!

With the old cluster’s trust anchor and the new cluster’s trust anchor, you can create a bundle by running:

  cat trustAnchor.crt root.crt > bundle.crt
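Since the bundle is just concatenated PEM blocks, it’s easy to sanity-check. The file names below are the ones created in the previous steps:

```shell
# Each trust anchor contributes one certificate block:
grep -c -- '-----BEGIN CERTIFICATE-----' bundle.crt

# The new issuer should chain to one of the anchors in the bundle:
openssl verify -CAfile bundle.crt issuer.crt
```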

You’ll want to upgrade your existing cluster with the new bundle. Make sure every pod you’d like to have talk to the new cluster is restarted so that it can use this bundle. To upgrade the existing cluster with this new trust anchor bundle, run:

  linkerd upgrade --identity-trust-anchors-file=./bundle.crt | \
    kubectl apply -f -
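A rolling restart is one way to make sure proxies reload the new bundle; the namespace name below is a placeholder for wherever your meshed workloads live:

```shell
# Restart the control plane and each meshed namespace so proxies pick up
# the new trust anchor bundle:
kubectl -n linkerd rollout restart deploy
kubectl -n my-meshed-namespace rollout restart deploy   # repeat per namespace
```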

Finally, you’ll be able to install Linkerd on the new cluster by using the trust anchor bundle that you just created along with the issuer certificate and key.

  linkerd install \
    --identity-trust-anchors-file bundle.crt \
    --identity-issuer-certificate-file issuer.crt \
    --identity-issuer-key-file issuer.key | \
    kubectl apply -f -

Make sure to verify that the clusters have started up successfully by running check on each one.

  linkerd check

Installing the multicluster control plane components through Helm

Linkerd’s multicluster components, i.e. the gateway and service mirror controller, can be installed via Helm rather than with the linkerd multicluster install command.

This not only allows advanced configuration, but also allows users to bundle the multicluster installation as part of their existing Helm based installation pipeline.

Adding Linkerd’s Helm repository

First, let’s add Linkerd’s Helm repository by running:

  # To add the repo for Linkerd2 stable releases:
  helm repo add linkerd https://helm.linkerd.io/stable

Helm multicluster install procedure

  helm install linkerd2-multicluster linkerd/linkerd2-multicluster

The chart values will be picked up from the chart’s values.yaml file.

You can override the values in that file by providing your own values.yaml file passed with a -f option, or overriding specific values using the family of --set flags.

The full set of configuration options can be found in the chart’s documentation.

The installation can be verified by running

  linkerd check --multicluster

Installation of the gateway can be disabled with the gateway setting. By default this value is true.
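For example, to install only the service-mirror side and skip the gateway (the Helm equivalent of the --gateway=false CLI flow described earlier):

```shell
helm install linkerd2-multicluster linkerd/linkerd2-multicluster --set gateway=false
```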

Installing additional access credentials

When the multicluster components are installed onto a target cluster with linkerd multicluster install, a service account is created which source clusters will use to mirror services. Using a distinct service account for each source cluster can be beneficial since it gives you the ability to revoke service mirroring access from specific source clusters. Generating additional service accounts and associated RBAC can be done using the linkerd multicluster allow command through the CLI.

The same functionality can also be achieved through Helm by setting the remoteMirrorServiceAccountName value to a list.

  helm install linkerd2-mc-source linkerd/linkerd2-multicluster \
    --set remoteMirrorServiceAccountName={source1\,source2\,source3} \
    --kube-context target

Now that the multicluster components are installed, operations like linking clusters can be performed using the linkerd CLI’s multicluster sub-command, as described in the multicluster task.