Install Primary-Remote

Follow this guide to install the Istio control plane on cluster1 (the primary cluster) and configure cluster2 (the remote cluster) to use the control plane in cluster1. Both clusters reside on the network1 network, meaning there is direct connectivity between the pods in both clusters.

Before proceeding, be sure to complete the steps under Before you begin.

If you are testing a multicluster setup on kind, you can use MetalLB to provide an EXTERNAL-IP for LoadBalancer services.
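
For reference, a minimal MetalLB layer-2 configuration might look like the sketch below. It assumes MetalLB 0.13 or later is already installed in the metallb-system namespace of each kind cluster; the pool name and the address range are hypothetical and must be chosen from the Docker network that kind uses, with non-overlapping ranges per cluster.

  $ cat <<EOF | kubectl apply --context="${CTX_CLUSTER1}" -f -
  # Hypothetical address pool; pick an unused range inside the kind Docker network.
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: kind-pool
    namespace: metallb-system
  spec:
    addresses:
    - 172.18.255.200-172.18.255.250
  ---
  # Announce the pool with layer-2 (ARP) advertisements.
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: kind-l2
    namespace: metallb-system
  spec:
    ipAddressPools:
    - kind-pool
  EOF

Repeat the same configuration against "${CTX_CLUSTER2}" with a different address range.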

In this configuration, the control plane in cluster1 will observe the API Servers in both clusters for endpoints. In this way, the control plane will be able to provide service discovery for workloads in both clusters.

Service workloads communicate directly (pod-to-pod) across cluster boundaries.

Services in cluster2 will reach the control plane in cluster1 via a dedicated gateway for east-west traffic.

Primary and remote clusters on the same network

Currently, the remote profile installs an istiod server in the remote cluster, which is used for CA and webhook injection for the workloads in that cluster. Service discovery, however, is directed to the control plane in the primary cluster.

Future releases will remove the need for an istiod in the remote cluster altogether. Stay tuned!

Configure cluster1 as a primary

Create the Istio configuration for cluster1:

  $ cat <<EOF > cluster1.yaml
  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  spec:
    values:
      global:
        meshID: mesh1
        multiCluster:
          clusterName: cluster1
        network: network1
  EOF

Apply the configuration to cluster1:

  $ istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml
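
As a quick sanity check (not part of the official steps), you can confirm that istiod came up in cluster1; the exact pod names will vary:

  $ kubectl --context="${CTX_CLUSTER1}" get pods -n istio-system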

Install the east-west gateway in cluster1

Install a gateway in cluster1 that is dedicated to east-west traffic. By default, this gateway will be public on the Internet. Production systems may require additional access restrictions (e.g. via firewall rules) to prevent external attacks. Check with your cloud vendor to see what options are available.

  $ samples/multicluster/gen-eastwest-gateway.sh \
      --mesh mesh1 --cluster cluster1 --network network1 | \
      istioctl --context="${CTX_CLUSTER1}" install -y -f -

If the control plane was installed with a revision, add the --revision rev flag to the gen-eastwest-gateway.sh command.
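
For example, for a hypothetical revision named 1-17, the command above would become:

  $ samples/multicluster/gen-eastwest-gateway.sh \
      --mesh mesh1 --cluster cluster1 --network network1 --revision 1-17 | \
      istioctl --context="${CTX_CLUSTER1}" install -y -f -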

Wait for the east-west gateway to be assigned an external IP address:

  $ kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system
  NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
  istio-eastwestgateway   LoadBalancer   10.80.6.124   34.75.71.237   ...       51s
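
If you prefer to wait non-interactively, a simple polling loop such as the following sketch will block until an IP appears (on providers that assign a hostname rather than an IP, check the hostname field instead):

  $ until kubectl --context="${CTX_CLUSTER1}" -n istio-system get svc istio-eastwestgateway \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null | grep -q .; do
      echo "Waiting for the east-west gateway external IP..."; sleep 5
    done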

Expose the control plane in cluster1

Before we can install on cluster2, we first need to expose the control plane in cluster1 so that services in cluster2 will be able to access service discovery:

  $ kubectl apply --context="${CTX_CLUSTER1}" -f \
      samples/multicluster/expose-istiod.yaml
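
To confirm that the exposure objects were created, you can list the Istio Gateway and VirtualService resources in istio-system; this is a quick check, assuming the sample file creates its objects in that namespace:

  $ kubectl --context="${CTX_CLUSTER1}" get gateways.networking.istio.io,virtualservices.networking.istio.io -n istio-system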

Enable API Server Access to cluster2

Before we can configure the remote cluster, we first have to give the control plane in cluster1 access to the API Server in cluster2. This will do the following:

  • Enables the control plane to authenticate connection requests from workloads running in cluster2. Without API Server access, the control plane will reject the requests.

  • Enables discovery of service endpoints running in cluster2.

To provide API Server access to cluster2, we generate a remote secret and apply it to cluster1:

  $ istioctl x create-remote-secret \
      --context="${CTX_CLUSTER2}" \
      --name=cluster2 | \
      kubectl apply -f - --context="${CTX_CLUSTER1}"
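
You can verify that the secret landed in cluster1; the name below assumes the istio-remote-secret-<name> naming convention used by istioctl:

  $ kubectl --context="${CTX_CLUSTER1}" get secret istio-remote-secret-cluster2 -n istio-system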

Configure cluster2 as a remote

Save the address of cluster1’s east-west gateway.

  $ export DISCOVERY_ADDRESS=$(kubectl \
      --context="${CTX_CLUSTER1}" \
      -n istio-system get svc istio-eastwestgateway \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
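
Confirm that the variable is non-empty before continuing. On providers whose load balancers publish a hostname rather than an IP address, a variant of the same command using the hostname field can be used instead; this is an assumption about your environment, not an additional official step:

  $ echo "${DISCOVERY_ADDRESS}"
  $ # Hostname-based load balancers:
  $ export DISCOVERY_ADDRESS=$(kubectl \
      --context="${CTX_CLUSTER1}" \
      -n istio-system get svc istio-eastwestgateway \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')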

Now create a remote configuration for cluster2.

  $ cat <<EOF > cluster2.yaml
  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  spec:
    profile: remote
    values:
      global:
        meshID: mesh1
        multiCluster:
          clusterName: cluster2
        network: network1
        remotePilotAddress: ${DISCOVERY_ADDRESS}
  EOF

Apply the configuration to cluster2:

  $ istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml
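
As with cluster1, a quick check (not part of the official steps) that the remote installation finished and its sidecar injection webhook is in place:

  $ kubectl --context="${CTX_CLUSTER2}" get pods -n istio-system
  $ kubectl --context="${CTX_CLUSTER2}" get mutatingwebhookconfigurations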

Congratulations! You successfully installed an Istio mesh across primary and remote clusters!

Next Steps

You can now verify the installation.
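
As a preview of the verification steps, once workloads with sidecars are running in both clusters, istioctl on the primary should report proxies from both of them; this is a quick sketch rather than the full verification procedure:

  $ istioctl --context="${CTX_CLUSTER1}" proxy-status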

See also

Before you begin

Initial steps before configuring locality load balancing.

Before you begin

Initial steps before installing Istio on multiple clusters.

Install Multi-Primary

Install an Istio mesh across multiple primary clusters.

Install Multi-Primary on different networks

Install an Istio mesh across multiple primary clusters on different networks.

Install Primary-Remote on different networks

Install an Istio mesh across primary and remote clusters on different networks.

Locality failover

This task demonstrates how to configure your mesh for locality failover.