Shared control plane (single and multiple networks)

Follow this guide to set up a multicluster Istio service mesh across multiple clusters with a shared control plane.

In this configuration, multiple Kubernetes remote clusters connect to a shared Istio control plane running in a primary cluster. Remote clusters can be in the same network as the primary cluster or in different networks. After one or more remote clusters are connected, the control plane of the primary cluster will manage the service mesh across all service endpoints.

Istio mesh spanning multiple Kubernetes clusters with direct network access to remote pods over VPN

Prerequisites

  • Two or more clusters running a supported Kubernetes version (1.16, 1.17, 1.18).

  • All Kubernetes control plane API servers must be routable to each other.

  • Clusters on the same network must be connected over an RFC1918 network, VPN, or an alternative more advanced network technique that meets the following requirements:

    • Individual cluster Pod CIDR ranges and service CIDR ranges must be unique across the network and may not overlap.
    • All pod CIDRs in the same network must be routable to each other.
  • Clusters on different networks must have istio-ingressgateway services that are reachable from every other cluster, ideally using L4 network load balancers (NLB). Not all cloud providers support NLBs, and some require special annotations to enable them for Services of type LoadBalancer, so consult your cloud provider’s documentation (see the example overlay after this list). When deploying on platforms without NLB support, it may be necessary to modify the load balancer’s health checks so that the ingress gateway registers as healthy.
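
For illustration only, the following IstioOperator overlay sketches how an NLB might be requested on AWS through the in-tree load balancer annotation; the annotation name and accepted values differ per provider, so treat this as an assumption to verify against your provider’s documentation.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service_annotations:
          # AWS example: request a Network Load Balancer instead of the default Classic ELB.
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"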

Preparation

Certificate Authority

Generate intermediate CA certificates for each cluster’s CA from your organization’s root CA. The shared root CA enables mutual TLS communication across different clusters. For illustration purposes, the following instructions use the certificates from the Istio samples directory for both clusters.

Run the following commands on each cluster in the mesh to install the certificates. See Certificate Authority (CA) certificates for more details on configuring an external CA.


$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
    --from-file=samples/certs/ca-cert.pem \
    --from-file=samples/certs/ca-key.pem \
    --from-file=samples/certs/root-cert.pem \
    --from-file=samples/certs/cert-chain.pem

The root and intermediate certificates from the samples directory are widely distributed and known. Do not use these certificates in production, as your clusters would then be open to security vulnerabilities and compromise.
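
If you want to try the flow with your own certificates instead, a rough sketch using openssl (hypothetical subject names; a production setup should follow your organization’s PKI practices and the Istio CA certificates guide) looks like this:

# Self-signed root CA, shared by all clusters.
$ openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
    -keyout root-key.pem -out root-cert.pem \
    -subj "/O=Example Inc./CN=Example Root CA"

# Per-cluster intermediate CA, signed by the root with CA extensions.
$ openssl req -newkey rsa:4096 -sha256 -nodes \
    -keyout ca-key.pem -out ca-csr.pem \
    -subj "/O=Example Inc./CN=Example Intermediate CA"
$ openssl x509 -req -in ca-csr.pem -CA root-cert.pem -CAkey root-key.pem \
    -CAcreateserial -days 730 -sha256 \
    -extfile <(printf "basicConstraints=critical,CA:TRUE\nkeyUsage=critical,keyCertSign,cRLSign") \
    -out ca-cert.pem

# Chain presented by the cluster's CA: intermediate followed by root.
$ cat ca-cert.pem root-cert.pem > cert-chain.pem

The four resulting files map directly onto the --from-file arguments of the kubectl create secret command above.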

Cross-cluster control plane access

Decide how to expose the primary cluster’s Istiod discovery service to the remote clusters. Pick one of the two options:

  • Option (1) - Use the istio-ingressgateway gateway shared with data traffic.

  • Option (2) - Use a cloud provider’s internal load balancer on the Istiod service. For additional requirements and restrictions that may apply when using an internal load balancer between clusters, see Kubernetes internal load balancer documentation and your cloud provider’s documentation.

Cluster and network naming

Determine the name of the clusters and networks in the mesh. These names will be used in the mesh network configuration and when configuring the mesh’s service registries. Assign a unique name to each cluster. The name must be a DNS label name. In the example below the primary cluster is called main0 and the remote cluster is remote0.

$ export MAIN_CLUSTER_CTX=<...>
$ export REMOTE_CLUSTER_CTX=<...>

$ export MAIN_CLUSTER_NAME=main0
$ export REMOTE_CLUSTER_NAME=remote0

If the clusters are on different networks, assign a unique network name for each network.

$ export MAIN_CLUSTER_NETWORK=network1
$ export REMOTE_CLUSTER_NETWORK=network2

If clusters are on the same network, the same network name is used for those clusters.

$ export MAIN_CLUSTER_NETWORK=network1
$ export REMOTE_CLUSTER_NETWORK=network1

Deployment

Primary cluster

Create the primary cluster’s configuration. Pick one of the two options for cross-cluster control plane access.

Option (1) - expose Istiod through the existing istio-ingressgateway:

cat <<EOF> istio-main-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      multiCluster:
        clusterName: ${MAIN_CLUSTER_NAME}
      network: ${MAIN_CLUSTER_NETWORK}

      # Mesh network configuration. This is optional and may be omitted if
      # all clusters are on the same network.
      meshNetworks:
        ${MAIN_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${MAIN_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443
        ${REMOTE_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${REMOTE_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443

      # Use the existing istio-ingressgateway.
      meshExpansion:
        enabled: true
EOF
Option (2) - expose Istiod through a cloud provider’s internal load balancer:

cat <<EOF> istio-main-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      multiCluster:
        clusterName: ${MAIN_CLUSTER_NAME}
      network: ${MAIN_CLUSTER_NETWORK}

      # Mesh network configuration. This is optional and may be omitted if
      # all clusters are on the same network.
      meshNetworks:
        ${MAIN_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${MAIN_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443
        ${REMOTE_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${REMOTE_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443

  # Change the Istio service `type=LoadBalancer` and add the cloud provider specific annotations. See
  # https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer for more
  # information. The example below shows the configuration for GCP/GKE.
  components:
    pilot:
      k8s:
        service:
          type: LoadBalancer
        service_annotations:
          cloud.google.com/load-balancer-type: Internal
EOF

Apply the primary cluster’s configuration.

$ istioctl install -f istio-main-cluster.yaml --context=${MAIN_CLUSTER_CTX}

Wait for the control plane to be ready before proceeding.

$ kubectl get pod -n istio-system --context=${MAIN_CLUSTER_CTX}
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-7c8dd65766-lv9ck   1/1     Running   0          136m
istiod-f756bbfc4-thkmk                  1/1     Running   0          136m

Set the ISTIOD_REMOTE_EP environment variable based on which remote control plane configuration option was selected earlier.

Option (1) - istio-ingressgateway:

$ export ISTIOD_REMOTE_EP=$(kubectl get svc -n istio-system --context=${MAIN_CLUSTER_CTX} istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo "ISTIOD_REMOTE_EP is ${ISTIOD_REMOTE_EP}"

Option (2) - internal load balancer on the istiod service:

$ export ISTIOD_REMOTE_EP=$(kubectl get svc -n istio-system --context=${MAIN_CLUSTER_CTX} istiod -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo "ISTIOD_REMOTE_EP is ${ISTIOD_REMOTE_EP}"
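
Some cloud providers publish a DNS hostname for the load balancer instead of an IP, in which case the .ip field above is empty. A minimal workaround sketch (ISTIOD_REMOTE_EP_HOST is a helper variable introduced here; adjust the service name if you chose option two) reads the hostname and resolves it:

# Read the load balancer hostname, then pin one of its current IPs.
$ export ISTIOD_REMOTE_EP_HOST=$(kubectl get svc -n istio-system --context=${MAIN_CLUSTER_CTX} istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ export ISTIOD_REMOTE_EP=$(dig +short ${ISTIOD_REMOTE_EP_HOST} | head -n1)
$ echo "ISTIOD_REMOTE_EP is ${ISTIOD_REMOTE_EP}"

Note that load balancer IPs behind a hostname can change over time, so pinning a resolved IP is only suitable for experimentation.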

Remote cluster

Create the remote cluster’s configuration.

cat <<EOF> istio-remote0-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      # The remote cluster's name and network name must match the values specified in the
      # mesh network configuration of the primary cluster.
      multiCluster:
        clusterName: ${REMOTE_CLUSTER_NAME}
      network: ${REMOTE_CLUSTER_NETWORK}

      # Replace ISTIOD_REMOTE_EP with the value of ISTIOD_REMOTE_EP set earlier.
      remotePilotAddress: ${ISTIOD_REMOTE_EP}

  ## The istio-ingressgateway is not required in the remote cluster if both clusters are on
  ## the same network. To disable the istio-ingressgateway component, uncomment the lines below.
  #
  # components:
  #   ingressGateways:
  #   - name: istio-ingressgateway
  #     enabled: false
EOF

Apply the remote cluster configuration.

$ istioctl install -f istio-remote0-cluster.yaml --context=${REMOTE_CLUSTER_CTX}

Wait for the remote cluster to be ready.

$ kubectl get pod -n istio-system --context=${REMOTE_CLUSTER_CTX}
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-55f784779d-s5hwl   1/1     Running   0          91m
istiod-7b4bfd7b4f-fwmks                 1/1     Running   0          91m

The istiod deployment running in the remote cluster provides automatic sidecar injection and CA services to the remote cluster’s pods. These services were previously provided by the sidecar injector and Citadel deployments, which no longer exist with Istiod. The remote cluster’s pods get their configuration from the primary cluster’s Istiod for service discovery.
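
As a quick sanity check (resource names can vary slightly between Istio versions), you can confirm that the remote cluster has its own injection webhook and, once sidecars are running, that the primary control plane sees proxies from both clusters:

# The remote cluster should have its own sidecar injection webhook.
$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector --context=${REMOTE_CLUSTER_CTX}

# Once workloads with sidecars exist, they should appear as synced proxies on the primary control plane.
$ istioctl proxy-status --context=${MAIN_CLUSTER_CTX}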

Cross-cluster load balancing

Configure ingress gateways

If both clusters are on the same network, skip this step and move on to configuring the service registries.

Cross-network traffic is securely routed through each destination cluster’s ingress gateway. When clusters in a mesh are on different networks you need to configure port 443 on the ingress gateway to pass incoming traffic through to the target service specified in a request’s SNI header, for SNI values of the local top-level domain (i.e., the Kubernetes DNS domain). Mutual TLS connections will be used all the way from the source to the destination sidecar.

Apply the following configuration to each cluster.

cat <<EOF> cluster-aware-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
EOF

$ kubectl apply -f cluster-aware-gateway.yaml --context=${MAIN_CLUSTER_CTX}
$ kubectl apply -f cluster-aware-gateway.yaml --context=${REMOTE_CLUSTER_CTX}

Configure cross-cluster service registries

To enable cross-cluster load balancing, the Istio control plane requires access to all clusters in the mesh to discover services, endpoints, and pod attributes. To configure this access, create a secret for each remote cluster with credentials to access the remote cluster’s kube-apiserver and install it in the primary cluster. The secret uses the credentials of the istio-reader-service-account in the remote cluster. --name specifies the remote cluster’s name and must match the cluster name in the primary cluster’s IstioOperator configuration.

$ istioctl x create-remote-secret --name ${REMOTE_CLUSTER_NAME} --context=${REMOTE_CLUSTER_CTX} | \
    kubectl apply -f - --context=${MAIN_CLUSTER_CTX}

Do not create a remote secret for the local cluster running the Istio control plane. Istio is always aware of the local cluster’s Kubernetes credentials.
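
To confirm the remote secret landed in the primary cluster, one option is to list secrets carrying the multicluster label that istioctl x create-remote-secret applies (the exact label may differ between Istio versions):

$ kubectl get secret -n istio-system -l istio/multiCluster=true --context=${MAIN_CLUSTER_CTX}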

Deploy an example service

Deploy two instances of the helloworld service, one in each cluster. The difference between the two instances is the version of their helloworld image.

Deploy helloworld v2 in the remote cluster

  1. Create a sample namespace with a sidecar auto-injection label:

     $ kubectl create namespace sample --context=${REMOTE_CLUSTER_CTX}
     $ kubectl label namespace sample istio-injection=enabled --context=${REMOTE_CLUSTER_CTX}

  2. Deploy helloworld v2:

     $ kubectl create -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample --context=${REMOTE_CLUSTER_CTX}
     $ kubectl create -f samples/helloworld/helloworld.yaml -l version=v2 -n sample --context=${REMOTE_CLUSTER_CTX}

  3. Confirm helloworld v2 is running:

     $ kubectl get pod -n sample --context=${REMOTE_CLUSTER_CTX}
     NAME                             READY   STATUS    RESTARTS   AGE
     helloworld-v2-7dd57c44c4-f56gq   2/2     Running   0          35s

Deploy helloworld v1 in the primary cluster

  1. Create a sample namespace with a sidecar auto-injection label:

     $ kubectl create namespace sample --context=${MAIN_CLUSTER_CTX}
     $ kubectl label namespace sample istio-injection=enabled --context=${MAIN_CLUSTER_CTX}

  2. Deploy helloworld v1:

     $ kubectl create -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample --context=${MAIN_CLUSTER_CTX}
     $ kubectl create -f samples/helloworld/helloworld.yaml -l version=v1 -n sample --context=${MAIN_CLUSTER_CTX}

  3. Confirm helloworld v1 is running:

     $ kubectl get pod -n sample --context=${MAIN_CLUSTER_CTX}
     NAME                             READY   STATUS    RESTARTS   AGE
     helloworld-v1-d4557d97b-pv2hr    2/2     Running   0          40s

Cross-cluster routing in action

To demonstrate how traffic to the helloworld service is distributed across the two clusters, call the helloworld service from another in-mesh sleep service.

  1. Deploy the sleep service in both clusters:

     $ kubectl apply -f samples/sleep/sleep.yaml -n sample --context=${MAIN_CLUSTER_CTX}
     $ kubectl apply -f samples/sleep/sleep.yaml -n sample --context=${REMOTE_CLUSTER_CTX}

  2. Wait for the sleep service to start in each cluster:

     $ kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX}
     sleep-754684654f-n6bzf   2/2   Running   0   5s

     $ kubectl get pod -n sample -l app=sleep --context=${REMOTE_CLUSTER_CTX}
     sleep-754684654f-dzl9j   2/2   Running   0   5s

  3. Call the helloworld.sample service several times from the primary cluster:

     $ kubectl exec -it -n sample -c sleep --context=${MAIN_CLUSTER_CTX} $(kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello

  4. Call the helloworld.sample service several times from the remote cluster:

     $ kubectl exec -it -n sample -c sleep --context=${REMOTE_CLUSTER_CTX} $(kubectl get pod -n sample -l app=sleep --context=${REMOTE_CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello

If set up correctly, the traffic to the helloworld.sample service will be distributed between instances on the main and remote clusters resulting in responses with either v1 or v2 in the body:

Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
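
To make the distribution easier to observe, you can wrap the call from step 3 in a simple loop, for example:

$ for i in $(seq 1 10); do \
    kubectl exec -n sample -c sleep --context=${MAIN_CLUSTER_CTX} \
      $(kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}') \
      -- curl -s helloworld.sample:5000/hello; \
  done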

You can also verify the IP addresses used to access the endpoints with istioctl proxy-config.

$ kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX} -o name | cut -f2 -d'/' | \
    xargs -I{} istioctl -n sample --context=${MAIN_CLUSTER_CTX} proxy-config endpoints {} --cluster "outbound|5000||helloworld.sample.svc.cluster.local"
ENDPOINT              STATUS    OUTLIER CHECK   CLUSTER
10.10.0.90:5000       HEALTHY   OK              outbound|5000||helloworld.sample.svc.cluster.local
192.23.120.32:443     HEALTHY   OK              outbound|5000||helloworld.sample.svc.cluster.local

In the primary cluster, the endpoints are the gateway IP of the remote cluster (192.23.120.32:443) and the helloworld pod IP in the primary cluster (10.10.0.90:5000).

$ kubectl get pod -n sample -l app=sleep --context=${REMOTE_CLUSTER_CTX} -o name | cut -f2 -d'/' | \
    xargs -I{} istioctl -n sample --context=${REMOTE_CLUSTER_CTX} proxy-config endpoints {} --cluster "outbound|5000||helloworld.sample.svc.cluster.local"
ENDPOINT              STATUS    OUTLIER CHECK   CLUSTER
10.32.0.9:5000        HEALTHY   OK              outbound|5000||helloworld.sample.svc.cluster.local
192.168.1.246:443     HEALTHY   OK              outbound|5000||helloworld.sample.svc.cluster.local

In the remote cluster, the endpoints are the gateway IP of the primary cluster (192.168.1.246:443) and the pod IP in the remote cluster (10.32.0.9:5000).

Congratulations!

You have configured a multicluster Istio mesh, installed the samples, and verified cross-cluster traffic routing.

Additional considerations

Automatic injection

The Istiod service in each cluster provides automatic sidecar injection for proxies in its own cluster. Namespaces must be labeled in each cluster by following the automatic sidecar injection guide.

Access services from different clusters

Kubernetes resolves DNS on a per-cluster basis. Because DNS resolution is tied to the cluster, you must define the service object in every cluster where a client runs, regardless of where the service’s endpoints are located; duplicate the service object to every cluster using kubectl so that the service name resolves everywhere. Since service objects are namespaced, you must also create the namespace in each cluster if it doesn’t already exist and reference it in the service definitions, as shown in the sketch below.
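
As a minimal sketch, assume a hypothetical service mysvc in namespace myns whose pods run only in the primary cluster. For clients in the remote cluster to resolve the name, the namespace and a selector-less Service object would still be created there:

$ kubectl create namespace myns --context=${REMOTE_CLUSTER_CTX}
$ cat <<EOF | kubectl apply --context=${REMOTE_CLUSTER_CTX} -f -
apiVersion: v1
kind: Service
metadata:
  name: mysvc
  namespace: myns
spec:
  ports:
  - name: http
    port: 8080
EOF

The helloworld walkthrough above followed the same pattern by applying the service definition (-l app=helloworld) in both clusters.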

Security

The Istiod service in each cluster provides CA functionality to proxies in its own cluster. The CA setup earlier ensures proxies across clusters in the mesh have the same root of trust.
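
One way to spot-check that both clusters share the same root of trust is to compare digests of the root certificate stored in each cluster’s cacerts secret; the two outputs should be identical:

$ kubectl get secret cacerts -n istio-system --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.data.root-cert\.pem}' | sha256sum
$ kubectl get secret cacerts -n istio-system --context=${REMOTE_CLUSTER_CTX} -o jsonpath='{.data.root-cert\.pem}' | sha256sum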

Uninstalling the remote cluster

To uninstall the remote cluster, run the following commands:

$ istioctl x create-remote-secret --name ${REMOTE_CLUSTER_NAME} --context=${REMOTE_CLUSTER_CTX} | \
    kubectl delete -f - --context=${MAIN_CLUSTER_CTX}
$ istioctl manifest generate -f istio-remote0-cluster.yaml --context=${REMOTE_CLUSTER_CTX} | \
    kubectl delete -f - --context=${REMOTE_CLUSTER_CTX}
$ kubectl delete namespace sample --context=${REMOTE_CLUSTER_CTX}
$ unset REMOTE_CLUSTER_CTX REMOTE_CLUSTER_NAME REMOTE_CLUSTER_NETWORK
$ rm istio-remote0-cluster.yaml

To uninstall the primary cluster, run the following commands:

$ istioctl manifest generate -f istio-main-cluster.yaml --context=${MAIN_CLUSTER_CTX} | \
    kubectl delete -f - --context=${MAIN_CLUSTER_CTX}
$ kubectl delete namespace sample --context=${MAIN_CLUSTER_CTX}
$ unset MAIN_CLUSTER_CTX MAIN_CLUSTER_NAME MAIN_CLUSTER_NETWORK ISTIOD_REMOTE_EP
$ rm istio-main-cluster.yaml cluster-aware-gateway.yaml

See also

Replicated control planes

Install an Istio mesh across multiple Kubernetes clusters with replicated control plane instances.

Multicluster Istio configuration and service discovery using Admiral

Automating Istio configuration for Istio deployments (clusters) that work as a single mesh.

Multi-Mesh Deployments for Isolation and Boundary Protection

Deploy environments that require isolation into separate meshes and enable inter-mesh communication by mesh federation.

DNS Certificate Management

Provision and manage DNS certificates in Istio.

Secure Webhook Management

A more secure way to manage Istio webhooks.

Secure Control of Egress Traffic in Istio, part 3

Comparison of alternative solutions to control egress traffic including performance considerations.