Install Istio with an External Control Plane

This guide walks you through the process of installing an external control plane and then connecting one or more remote clusters to it. The external control plane deployment model allows a mesh operator to install and manage a control plane on an external cluster, separate from the data plane cluster (or multiple clusters) comprising the mesh. This deployment model allows a clear separation between mesh operators and mesh administrators. Mesh operators install and manage Istio control planes while mesh admins only need to configure the mesh.

External control plane cluster and remote cluster

Envoy proxies (sidecars and gateways) running in the remote cluster access the external istiod via an ingress gateway which exposes the endpoints needed for discovery, CA, injection, and validation.

While configuration and management of the external control plane is done by the mesh operator in the external cluster, the first remote cluster connected to an external control plane serves as the config cluster for the mesh itself. The mesh administrator will use the config cluster to configure the mesh resources (gateways, virtual services, etc.) in addition to the mesh services themselves. The external control plane will remotely access this configuration from the Kubernetes API server, as shown in the above diagram.

Before you begin

Clusters

This guide requires that you have two Kubernetes clusters running any of the supported Kubernetes versions: 1.18, 1.19, 1.20, or 1.21.

The first cluster will host the external control plane installed in the external-istiod namespace. An ingress gateway is also installed in the istio-system namespace to provide cross-cluster access to the external control plane.

The second cluster is a remote cluster that will run the mesh application workloads. Its Kubernetes API server also provides the mesh configuration used by the external control plane (istiod) to configure the workload proxies.

API server access

The Kubernetes API server in the remote cluster must be accessible to the external control plane cluster. Many cloud providers make API servers publicly accessible via network load balancers (NLBs). If the API server is not directly accessible, you will need to modify the installation procedure to enable access. For example, the east-west gateway used in a multicluster configuration could also be used to enable access to the API server.
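
As a quick reachability sketch (not part of the official procedure), you can read the remote cluster's API server address from your kubeconfig and request its version. If this fails from the network where the external cluster runs, istiod will not be able to reach the API server either:

```shell
# Print the remote cluster's API server address, then request its version.
# --minify restricts the kubeconfig view to the selected context.
API_SERVER=$(kubectl config view --minify --context="${CTX_REMOTE_CLUSTER}" \
  -o jsonpath='{.clusters[0].cluster.server}')
echo "API server: ${API_SERVER}"
kubectl --context="${CTX_REMOTE_CLUSTER}" get --raw /version
```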

Environment Variables

The following environment variables will be used throughout to simplify the instructions:

Variable               Description
CTX_EXTERNAL_CLUSTER   The context name in the default Kubernetes configuration file used for accessing the external control plane cluster.
CTX_REMOTE_CLUSTER     The context name in the default Kubernetes configuration file used for accessing the remote cluster.
REMOTE_CLUSTER_NAME    The name of the remote cluster.
EXTERNAL_ISTIOD_ADDR   The hostname for the ingress gateway on the external control plane cluster. This is used by the remote cluster to access the external control plane.
SSL_SECRET_NAME        The name of the secret that holds the TLS certs for the ingress gateway on the external control plane cluster.

Set the CTX_EXTERNAL_CLUSTER, CTX_REMOTE_CLUSTER, and REMOTE_CLUSTER_NAME now. You will set the others later.

  $ export CTX_EXTERNAL_CLUSTER=<your external cluster context>
  $ export CTX_REMOTE_CLUSTER=<your remote cluster context>
  $ export REMOTE_CLUSTER_NAME=<your remote cluster name>
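
Since every later step depends on these variables, it may help to fail fast if any are missing. A small guard like the following (a convenience sketch, not part of the official procedure) can be sourced into your session:

```shell
# Convenience guard: report any required environment variable that is unset
# or empty, returning non-zero if so.
require_vars() {
  status=0
  for v in "$@"; do
    eval "val=\${$v:-}"              # portable indirect expansion
    if [ -z "$val" ]; then
      echo "error: $v is not set" >&2
      status=1
    fi
  done
  return $status
}
```

For example, `require_vars CTX_EXTERNAL_CLUSTER CTX_REMOTE_CLUSTER REMOTE_CLUSTER_NAME || return` before running the installation commands.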

Cluster configuration

Mesh operator steps

A mesh operator is responsible for installing and managing the external Istio control plane on the external cluster. This includes configuring an ingress gateway on the external cluster, which allows the remote cluster to access the control plane, and installing the sidecar injector webhook configuration on the remote cluster so that it will use the external control plane.

Set up a gateway in the external cluster

  1. Create the Istio install configuration for the ingress gateway that will expose the external control plane ports to other clusters:

    $ cat <<EOF > controlplane-gateway.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: istio-system
    spec:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: true
          k8s:
            service:
              ports:
              - port: 15021
                targetPort: 15021
                name: status-port
              - port: 15012
                targetPort: 15012
                name: tls-xds
              - port: 15017
                targetPort: 15017
                name: tls-webhook
    EOF

    Then, install the gateway in the istio-system namespace of the external cluster:

    $ istioctl install -f controlplane-gateway.yaml --context="${CTX_EXTERNAL_CLUSTER}"
  2. Run the following command to confirm that the ingress gateway is up and running:

    $ kubectl get po -n istio-system --context="${CTX_EXTERNAL_CLUSTER}"
    NAME                                   READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-9d4c7f5c7-7qpzz   1/1     Running   0          29s
    istiod-68488cd797-mq8dn                1/1     Running   0          38s

    You will notice an istiod deployment is also created in the istio-system namespace. This is used to configure the ingress gateway and is NOT the control plane used by remote clusters.

    This ingress gateway could be configured to host multiple external control planes, in different namespaces on the external cluster, although in this example you will only deploy a single external istiod in the external-istiod namespace.

  3. Configure your environment to expose the Istio ingress gateway service using a public hostname with TLS. Set the EXTERNAL_ISTIOD_ADDR environment variable to the hostname and SSL_SECRET_NAME environment variable to the secret that holds the TLS certs:

    $ export EXTERNAL_ISTIOD_ADDR=<your external istiod host>
    $ export SSL_SECRET_NAME=<your external istiod secret>
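
If you do not already have certificates for this hostname, one way to create the gateway secret, assuming a self-signed certificate is acceptable for a test environment (in production, use certificates issued by a trusted CA), is:

```shell
# Sketch only: generate a self-signed cert for the external istiod hostname
# and store it as the TLS secret the ingress gateway will serve.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=${EXTERNAL_ISTIOD_ADDR}" \
  -keyout external-istiod.key -out external-istiod.crt
kubectl create secret tls "${SSL_SECRET_NAME}" \
  --cert=external-istiod.crt --key=external-istiod.key \
  -n istio-system --context="${CTX_EXTERNAL_CLUSTER}"
```

Note that the control plane configuration later in this guide points proxies at /etc/ssl/certs/ca-certificates.crt as their root CA bundle, so a self-signed certificate must be trusted by that bundle (or those paths adjusted accordingly).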

Set up the remote config cluster

  1. Create the remote cluster’s Istio install configuration, which installs the injection webhook that uses the external control plane’s injector, instead of a locally deployed one. Because this cluster also serves as the config cluster, the Istio CRDs and istio configmap (i.e., global mesh config) are also installed by setting base.enabled and pilot.configMap to true:

    $ cat <<EOF > remote-config-cluster.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: external-istiod
    spec:
      profile: external
      components:
        base:
          enabled: true
      values:
        global:
          istioNamespace: external-istiod
        pilot:
          configMap: true
        istiodRemote:
          injectionURL: https://${EXTERNAL_ISTIOD_ADDR}:15017/inject/:ENV:cluster=${REMOTE_CLUSTER_NAME}:ENV:net=network1
        base:
          validationURL: https://${EXTERNAL_ISTIOD_ADDR}:15017/validate
    EOF

    Then, install the configuration on the remote cluster:

    $ kubectl create namespace external-istiod --context="${CTX_REMOTE_CLUSTER}"
    $ istioctl manifest generate -f remote-config-cluster.yaml | kubectl apply --context="${CTX_REMOTE_CLUSTER}" -f -
  2. Confirm that the remote cluster’s webhook configuration has been installed:

    $ kubectl get mutatingwebhookconfiguration -n external-istiod --context="${CTX_REMOTE_CLUSTER}"
    NAME                                     WEBHOOKS   AGE
    istio-sidecar-injector-external-istiod   4          6m24s
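
As an additional sanity check (not a required step), you can confirm that the webhook actually targets the external control plane rather than an in-cluster injector; the printed URL should reference the EXTERNAL_ISTIOD_ADDR hostname on port 15017:

```shell
# Print the injection webhook's target URL; it should point at
# https://<EXTERNAL_ISTIOD_ADDR>:15017/... rather than an in-cluster service.
kubectl get mutatingwebhookconfiguration istio-sidecar-injector-external-istiod \
  --context="${CTX_REMOTE_CLUSTER}" \
  -o jsonpath='{.webhooks[0].clientConfig.url}'
```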

Set up the control plane in the external cluster

  1. Create the external-istiod namespace, which will be used to host the external control plane:

    $ kubectl create namespace external-istiod --context="${CTX_EXTERNAL_CLUSTER}"
  2. The control plane in the external cluster needs access to the remote cluster to discover services, endpoints, and pod attributes. Create a secret with credentials to access the remote cluster’s kube-apiserver and install it in the external cluster:

    $ kubectl create sa istiod-service-account -n external-istiod --context="${CTX_EXTERNAL_CLUSTER}"
    $ istioctl x create-remote-secret \
      --context="${CTX_REMOTE_CLUSTER}" \
      --type=config \
      --namespace=external-istiod | \
      kubectl apply -f - --context="${CTX_EXTERNAL_CLUSTER}"
  3. Create the Istio configuration to install the control plane in the external-istiod namespace of the external cluster. Notice that istiod is configured to use the locally mounted istio configmap and that the SHARED_MESH_CONFIG environment variable is set to istio. This instructs istiod to merge the values set by the mesh admin in the config cluster's configmap with the values in the local configmap set here by the mesh operator; the operator's values take precedence if there are any conflicts:

    $ cat <<EOF > external-istiod.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: external-istiod
    spec:
      profile: empty
      meshConfig:
        rootNamespace: external-istiod
        defaultConfig:
          discoveryAddress: $EXTERNAL_ISTIOD_ADDR:15012
          proxyMetadata:
            XDS_ROOT_CA: /etc/ssl/certs/ca-certificates.crt
            CA_ROOT_CA: /etc/ssl/certs/ca-certificates.crt
      components:
        pilot:
          enabled: true
          k8s:
            overlays:
            - kind: Deployment
              name: istiod
              patches:
              - path: spec.template.spec.volumes[100]
                value: |-
                  name: config-volume
                  configMap:
                    name: istio
              - path: spec.template.spec.volumes[100]
                value: |-
                  name: inject-volume
                  configMap:
                    name: istio-sidecar-injector
              - path: spec.template.spec.containers[0].volumeMounts[100]
                value: |-
                  name: config-volume
                  mountPath: /etc/istio/config
              - path: spec.template.spec.containers[0].volumeMounts[100]
                value: |-
                  name: inject-volume
                  mountPath: /var/lib/istio/inject
            env:
            - name: INJECTION_WEBHOOK_CONFIG_NAME
              value: ""
            - name: VALIDATION_WEBHOOK_CONFIG_NAME
              value: ""
            - name: EXTERNAL_ISTIOD
              value: "true"
            - name: CLUSTER_ID
              value: ${REMOTE_CLUSTER_NAME}
            - name: SHARED_MESH_CONFIG
              value: istio
      values:
        global:
          caAddress: $EXTERNAL_ISTIOD_ADDR:15012
          istioNamespace: external-istiod
          operatorManageWebhooks: true
          meshID: mesh1
    EOF

    Then, apply the Istio configuration on the external cluster:

    $ istioctl install -f external-istiod.yaml --context="${CTX_EXTERNAL_CLUSTER}"
  4. Confirm that the external istiod has been successfully deployed:

    $ kubectl get po -n external-istiod --context="${CTX_EXTERNAL_CLUSTER}"
    NAME                      READY   STATUS    RESTARTS   AGE
    istiod-779bd6fdcf-bd6rg   1/1     Running   0          70s
  5. Create the Istio Gateway, VirtualService, and DestinationRule configuration to route traffic from the ingress gateway to the external control plane:

    $ cat <<EOF > external-istiod-gw.yaml
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: external-istiod-gw
      namespace: external-istiod
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 15012
          protocol: https
          name: https-XDS
        tls:
          mode: SIMPLE
          credentialName: $SSL_SECRET_NAME
        hosts:
        - $EXTERNAL_ISTIOD_ADDR
      - port:
          number: 15017
          protocol: https
          name: https-WEBHOOK
        tls:
          mode: SIMPLE
          credentialName: $SSL_SECRET_NAME
        hosts:
        - $EXTERNAL_ISTIOD_ADDR
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: external-istiod-vs
      namespace: external-istiod
    spec:
      hosts:
      - $EXTERNAL_ISTIOD_ADDR
      gateways:
      - external-istiod-gw
      http:
      - match:
        - port: 15012
        route:
        - destination:
            host: istiod.external-istiod.svc.cluster.local
            port:
              number: 15012
      - match:
        - port: 15017
        route:
        - destination:
            host: istiod.external-istiod.svc.cluster.local
            port:
              number: 443
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: external-istiod-dr
      namespace: external-istiod
    spec:
      host: istiod.external-istiod.svc.cluster.local
      trafficPolicy:
        portLevelSettings:
        - port:
            number: 15012
          tls:
            mode: SIMPLE
          connectionPool:
            http:
              h2UpgradePolicy: UPGRADE
        - port:
            number: 443
          tls:
            mode: SIMPLE
    EOF

    Then, apply the configuration on the external cluster:

    $ kubectl apply -f external-istiod-gw.yaml --context="${CTX_EXTERNAL_CLUSTER}"
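
Once the gateway configuration is applied, you can optionally confirm from any machine with network access to the hostname that the ingress gateway is serving the expected certificate on the xDS port (a sketch; requires openssl):

```shell
# Connect to the exposed xDS port and print the served certificate's subject.
# The subject should match the certificate stored in ${SSL_SECRET_NAME}.
openssl s_client -connect "${EXTERNAL_ISTIOD_ADDR}:15012" \
  -servername "${EXTERNAL_ISTIOD_ADDR}" </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```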

Mesh admin steps

Now that Istio is up and running, a mesh administrator only needs to deploy and configure services in the mesh, including gateways, if needed.

Deploy a sample application

  1. Create, and label for injection, the sample namespace on the remote cluster:

    $ kubectl create --context="${CTX_REMOTE_CLUSTER}" namespace sample
    $ kubectl label --context="${CTX_REMOTE_CLUSTER}" namespace sample istio-injection=enabled
  2. Deploy the helloworld (v1) and sleep samples:

    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l service=helloworld -n sample --context="${CTX_REMOTE_CLUSTER}"
    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l version=v1 -n sample --context="${CTX_REMOTE_CLUSTER}"
    $ kubectl apply -f @samples/sleep/sleep.yaml@ -n sample --context="${CTX_REMOTE_CLUSTER}"
  3. Wait a few seconds for the helloworld and sleep pods to be running with sidecars injected:

    $ kubectl get pod -n sample --context="${CTX_REMOTE_CLUSTER}"
    NAME                             READY   STATUS    RESTARTS   AGE
    helloworld-v1-776f57d5f6-s7zfc   2/2     Running   0          10s
    sleep-64d7d56698-wqjnm           2/2     Running   0          9s
  4. Send a request from the sleep pod to the helloworld service:

    $ kubectl exec --context="${CTX_REMOTE_CLUSTER}" -n sample -c sleep \
      "$(kubectl get pod --context="${CTX_REMOTE_CLUSTER}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
      -- curl -sS helloworld.sample:5000/hello
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc

Enable gateways

  1. Enable an ingress gateway on the remote cluster:

    $ cat <<EOF > istio-ingressgateway.yaml
    apiVersion: operator.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      profile: empty
      components:
        ingressGateways:
        - namespace: external-istiod
          name: istio-ingressgateway
          enabled: true
      values:
        gateways:
          istio-ingressgateway:
            injectionTemplate: gateway
    EOF
    $ istioctl install -f istio-ingressgateway.yaml --context="${CTX_REMOTE_CLUSTER}"
  2. Enable an egress gateway, or other gateways, on the remote cluster (optional):

    $ cat <<EOF > istio-egressgateway.yaml
    apiVersion: operator.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      profile: empty
      components:
        egressGateways:
        - namespace: external-istiod
          name: istio-egressgateway
          enabled: true
      values:
        gateways:
          istio-egressgateway:
            injectionTemplate: gateway
    EOF
    $ istioctl install -f istio-egressgateway.yaml --context="${CTX_REMOTE_CLUSTER}"
  3. Confirm that the Istio ingress gateway is running:

    $ kubectl get pod -l app=istio-ingressgateway -n external-istiod --context="${CTX_REMOTE_CLUSTER}"
    NAME                                    READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-7bcd5c6bbd-kmtl4   1/1     Running   0          8m4s
  4. Expose the helloworld application on the ingress gateway:

    $ kubectl apply -f @samples/helloworld/helloworld-gateway.yaml@ -n sample --context="${CTX_REMOTE_CLUSTER}"
  5. Set the GATEWAY_URL environment variable (see determining the ingress IP and ports for details):

    $ export INGRESS_HOST=$(kubectl -n external-istiod --context="${CTX_REMOTE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ export INGRESS_PORT=$(kubectl -n external-istiod --context="${CTX_REMOTE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
    $ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
  6. Confirm you can access the helloworld application through the ingress gateway:

    $ curl -s "http://${GATEWAY_URL}/hello"
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
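
The GATEWAY_URL commands in step 5 assume the ingress service is assigned an external IP by a LoadBalancer. In environments without load balancer support, a node-port fallback like the following (a sketch, assuming node addresses are reachable from your client) can be used instead:

```shell
# Fallback for clusters without LoadBalancer support: use a node address
# plus the ingress service's http2 nodePort.
export INGRESS_PORT=$(kubectl -n external-istiod --context="${CTX_REMOTE_CLUSTER}" \
  get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export INGRESS_HOST=$(kubectl --context="${CTX_REMOTE_CLUSTER}" get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
```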

Adding clusters to the mesh (optional)

This feature is actively in development and is considered experimental.

This section shows you how to expand an existing external control plane mesh to multiple clusters by adding another remote cluster. This allows you to easily distribute services across clusters and use location-aware routing and failover to support high availability of your application.

External control plane with multiple remote clusters

Unlike the first remote cluster, the second and subsequent clusters added to the same external control plane do not provide mesh config, but instead are only sources of endpoint configuration, just like remote clusters in a primary-remote Istio multicluster configuration.

To proceed, you’ll need another Kubernetes cluster for the second remote cluster of the mesh. Set the following environment variables to the context name and cluster name of the cluster:

  $ export CTX_SECOND_CLUSTER=<your second remote cluster context>
  $ export SECOND_CLUSTER_NAME=<your second remote cluster name>

Register the new cluster

  1. Create a secret with credentials to allow the control plane to access the endpoints on the second remote cluster and install it:

    $ istioctl x create-remote-secret \
      --context="${CTX_SECOND_CLUSTER}" \
      --name="${SECOND_CLUSTER_NAME}" \
      --type=remote \
      --namespace=external-istiod | \
      kubectl apply -f - --context="${CTX_REMOTE_CLUSTER}" #TODO use --context="{CTX_EXTERNAL_CLUSTER}" when #31946 is fixed.

    Note that unlike the first remote cluster of the mesh, which also serves as the config cluster, the --type argument is set to remote this time, instead of config.

    Note that the new secret can be applied in either the remote (config) cluster or in the external cluster, because the external istiod is watching for additions in both clusters.

  2. Create the remote Istio install configuration, which installs the injection webhook that uses the external control plane’s injector, instead of a locally deployed one:

    $ cat <<EOF > second-config-cluster.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: external-istiod
    spec:
      profile: external
      values:
        global:
          istioNamespace: external-istiod
        istiodRemote:
          injectionURL: https://${EXTERNAL_ISTIOD_ADDR}:15017/inject/:ENV:cluster=${SECOND_CLUSTER_NAME}:ENV:net=network2
    EOF

    Then, install the configuration on the second remote cluster:

    $ kubectl create namespace external-istiod --context="${CTX_SECOND_CLUSTER}"
    $ istioctl manifest generate -f second-config-cluster.yaml | kubectl apply --context="${CTX_SECOND_CLUSTER}" -f -
  3. Confirm that the remote cluster’s webhook configuration has been installed:

    $ kubectl get mutatingwebhookconfiguration -n external-istiod --context="${CTX_SECOND_CLUSTER}"
    NAME                                     WEBHOOKS   AGE
    istio-sidecar-injector-external-istiod   4          4m13s

Set up east-west gateways

  1. Deploy east-west gateways on both remote clusters:

    $ @samples/multicluster/gen-eastwest-gateway.sh@ \
      --mesh mesh1 --cluster "${REMOTE_CLUSTER_NAME}" --network network1 > eastwest-gateway-1.yaml
    $ istioctl manifest generate -f eastwest-gateway-1.yaml \
      --set values.gateways.istio-ingressgateway.injectionTemplate=gateway \
      --set values.global.istioNamespace=external-istiod | \
      kubectl apply --context="${CTX_REMOTE_CLUSTER}" -f -

    $ @samples/multicluster/gen-eastwest-gateway.sh@ \
      --mesh mesh1 --cluster "${SECOND_CLUSTER_NAME}" --network network2 > eastwest-gateway-2.yaml
    $ istioctl manifest generate -f eastwest-gateway-2.yaml \
      --set values.gateways.istio-ingressgateway.injectionTemplate=gateway \
      --set values.global.istioNamespace=external-istiod | \
      kubectl apply --context="${CTX_SECOND_CLUSTER}" -f -
  2. Wait for the east-west gateways to be assigned external IP addresses:

    $ kubectl --context="${CTX_REMOTE_CLUSTER}" get svc istio-eastwestgateway -n external-istiod
    NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
    istio-eastwestgateway   LoadBalancer   10.0.12.121   34.122.91.98   ...       51s

    $ kubectl --context="${CTX_SECOND_CLUSTER}" get svc istio-eastwestgateway -n external-istiod
    NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
    istio-eastwestgateway   LoadBalancer   10.0.12.121   34.122.91.99   ...       51s
  3. Expose services via the east-west gateways:

    $ kubectl --context="${CTX_REMOTE_CLUSTER}" apply -n external-istiod -f \
      @samples/multicluster/expose-services.yaml@

    $ kubectl --context="${CTX_SECOND_CLUSTER}" apply -n external-istiod -f \
      @samples/multicluster/expose-services.yaml@

Validate the installation

  1. Create, and label for injection, the sample namespace on the second remote cluster:

    $ kubectl create --context="${CTX_SECOND_CLUSTER}" namespace sample
    $ kubectl label --context="${CTX_SECOND_CLUSTER}" namespace sample istio-injection=enabled
  2. Deploy the helloworld (v2) and sleep samples:

    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l service=helloworld -n sample --context="${CTX_SECOND_CLUSTER}"
    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l version=v2 -n sample --context="${CTX_SECOND_CLUSTER}"
    $ kubectl apply -f @samples/sleep/sleep.yaml@ -n sample --context="${CTX_SECOND_CLUSTER}"
  3. Wait a few seconds for the helloworld and sleep pods to be running with sidecars injected:

    $ kubectl get pod -n sample --context="${CTX_SECOND_CLUSTER}"
    NAME                            READY   STATUS    RESTARTS   AGE
    helloworld-v2-54df5f84b-9hxgw   2/2     Running   0          10s
    sleep-557747455f-wtdbr          2/2     Running   0          9s
  4. Send a request from the sleep pod to the helloworld service:

    $ kubectl exec --context="${CTX_SECOND_CLUSTER}" -n sample -c sleep \
      "$(kubectl get pod --context="${CTX_SECOND_CLUSTER}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
      -- curl -sS helloworld.sample:5000/hello
    Hello version: v2, instance: helloworld-v2-54df5f84b-9hxgw
  5. Confirm that when accessing the helloworld application several times through the ingress gateway, both version v1 and v2 are now being called:

    $ for i in {1..10}; do curl -s "http://${GATEWAY_URL}/hello"; done
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
    Hello version: v2, instance: helloworld-v2-54df5f84b-9hxgw
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
    Hello version: v2, instance: helloworld-v2-54df5f84b-9hxgw
    ...
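
As a final optional check, you can inspect the sleep pod's Envoy configuration from the first cluster. It should list two helloworld endpoints: the local v1 pod's IP, and an address reached via the second cluster's east-west gateway (a sketch using istioctl's proxy-config inspection):

```shell
# List the endpoints Envoy knows for the helloworld service; entries for both
# clusters indicate cross-cluster load balancing is in effect.
istioctl --context="${CTX_REMOTE_CLUSTER}" proxy-config endpoints \
  "$(kubectl get pod --context="${CTX_REMOTE_CLUSTER}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
  -n sample --cluster "outbound|5000||helloworld.sample.svc.cluster.local"
```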