Deploy a multi-zone global control plane

Prerequisites

To set up a multi-zone deployment we will need to:

  • Set up the global control plane
  • Set up the zone control planes
  • Verify control plane connectivity
  • Ensure mTLS is enabled on the multi-zone meshes

Usage

Set up the global control plane

The global control plane must run on a dedicated cluster (unless using “Universal on Kubernetes” mode), and cannot be assigned to a zone.

The global control plane on Kubernetes must reside on its own Kubernetes cluster, to keep its resources separate from the resources the zone control planes create during synchronization.

Kubernetes (kumactl):

  1. Run:

     kumactl install control-plane --mode=global | kubectl apply -f -

  2. Find the external IP and port of the global-remote-sync service in the kuma-system namespace:

     kubectl get services -n kuma-system

     NAMESPACE     NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                   AGE
     kuma-system   global-remote-sync   LoadBalancer   10.105.9.10     35.226.196.103   5685:30685/TCP                                                            89s
     kuma-system   kuma-control-plane   ClusterIP      10.105.12.133   <none>           5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP   90s

     In this example the value is 35.226.196.103:5685. You pass this as the value of <global-kds-address> when you set up the zone control planes.
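     If you prefer to script this step, the external address can usually be read straight from the Service status. A minimal sketch, assuming the load balancer publishes an IP rather than a hostname:

     kubectl get service global-remote-sync -n kuma-system \
       -o jsonpath='{.status.loadBalancer.ingress[0].ip}{":"}{.spec.ports[0].port}'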

Helm:

  1. Set the controlPlane.mode value to global in the chart (values.yaml), then install. On the command line, run:

     helm install kuma --create-namespace --namespace kuma-system --set controlPlane.mode=global kuma/kuma

     Or you can edit the chart values and pass the file to the helm install kuma command. To get the default values, run:

     helm show values kuma/kuma

  2. Find the external IP and port of the global-remote-sync service in the kuma-system namespace:

     kubectl get services -n kuma-system

     NAMESPACE     NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                   AGE
     kuma-system   global-remote-sync   LoadBalancer   10.105.9.10     35.226.196.103   5685:30685/TCP                                                            89s
     kuma-system   kuma-control-plane   ClusterIP      10.105.12.133   <none>           5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP   90s

     By default, it’s exposed on port 5685. In this example the value is 35.226.196.103:5685. You pass this as the value of <global-kds-address> when you set up the zone control planes.
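     If you prefer a values file over the --set flag, the same setting maps to the chart value shown below; a minimal sketch:

     # values.yaml (sketch)
     controlPlane:
       mode: global

     helm install kuma --create-namespace --namespace kuma-system -f values.yaml kuma/kuma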

Universal on Kubernetes:

Running the global control plane in “Universal on Kubernetes” mode means using PostgreSQL as the storage backend instead of Kubernetes itself. This changes the failover, high-availability, and reliability characteristics of the deployment; see the Kubernetes and PostgreSQL documentation for details.

  1. Set controlPlane.environment=universal and controlPlane.mode=global in the chart (values.yaml).

  2. Define a Kubernetes Secret that holds the sensitive database settings (a kubectl example of creating it is shown after these steps):

     apiVersion: v1
     kind: Secret
     metadata:
       name: your-secret-name
     type: Opaque
     data:
       POSTGRES_DB: ...
       POSTGRES_HOST_RW: ...
       POSTGRES_USER: ...
       POSTGRES_PASSWORD: ...
  3. Set controlPlane.secrets to reference the sensitive database settings:

     # ...
     secrets:
       postgresDb:
         Secret: your-secret-name
         Key: POSTGRES_DB
         Env: KUMA_STORE_POSTGRES_DB_NAME
       postgresHost:
         Secret: your-secret-name
         Key: POSTGRES_HOST_RW
         Env: KUMA_STORE_POSTGRES_HOST
       postgresUser:
         Secret: your-secret-name
         Key: POSTGRES_USER
         Env: KUMA_STORE_POSTGRES_USER
       postgresPassword:
         Secret: your-secret-name
         Key: POSTGRES_PASSWORD
         Env: KUMA_STORE_POSTGRES_PASSWORD
  4. Optionally, configure the Postgres connection (including TLS) under postgres:

     # Postgres' settings for universal control plane on k8s
     postgres:
       # -- Postgres port, password should be provided as a secret reference in "controlPlane.secrets"
       # with the Env value "KUMA_STORE_POSTGRES_PASSWORD".
       # Example:
       # controlPlane:
       #   secrets:
       #     - Secret: postgres-postgresql
       #       Key: postgresql-password
       #       Env: KUMA_STORE_POSTGRES_PASSWORD
       port: "5432"
       # TLS settings
       tls:
         # -- Mode of TLS connection. Available values are: "disable", "verifyNone", "verifyCa", "verifyFull"
         mode: disable # ENV: KUMA_STORE_POSTGRES_TLS_MODE
         # -- Whether to disable SNI (the postgres `sslsni` option).
         disableSSLSNI: false # ENV: KUMA_STORE_POSTGRES_TLS_DISABLE_SSLSNI
         # -- Secret name that contains the ca.crt
         caSecretName:
         # -- Secret name that contains the client tls.crt, tls.key
         secretName:
  5. Run helm install:

     helm install kuma -f values.yaml --skip-crds --create-namespace --namespace kuma-system kuma/kuma
  6. Find the external IP and port of the global-remote-sync service in the kuma-system namespace:

     kubectl get services -n kuma-system

     NAMESPACE     NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                   AGE
     kuma-system   global-remote-sync   LoadBalancer   10.105.9.10     35.226.196.103   5685:30685/TCP                                                            89s
     kuma-system   kuma-control-plane   ClusterIP      10.105.12.133   <none>           5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP   90s

     In this example the value is 35.226.196.103:5685. You pass this as the value of <global-kds-address> when you set up the zone control planes.
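     As referenced in step 2, one way to create the Secret is directly with kubectl instead of a YAML manifest. A minimal sketch; the connection values are placeholders you must replace:

     kubectl create secret generic your-secret-name \
       --namespace kuma-system \
       --from-literal=POSTGRES_DB=kuma \
       --from-literal=POSTGRES_HOST_RW=postgres.example.internal \
       --from-literal=POSTGRES_USER=kuma \
       --from-literal=POSTGRES_PASSWORD='change-me'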

Universal:

  1. Set up the global control plane by running kuma-cp with the KUMA_MODE environment variable set to global:

     KUMA_MODE=global kuma-cp run
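     A Universal global control plane also needs a store behind it. If you back it with PostgreSQL, the store is typically selected through environment variables; a minimal sketch, where the host, credentials, and database name are assumptions to replace with your own:

     KUMA_MODE=global \
     KUMA_STORE_TYPE=postgres \
     KUMA_STORE_POSTGRES_HOST=postgres.example.internal \
     KUMA_STORE_POSTGRES_PORT=5432 \
     KUMA_STORE_POSTGRES_USER=kuma \
     KUMA_STORE_POSTGRES_PASSWORD=change-me \
     KUMA_STORE_POSTGRES_DB_NAME=kuma \
     kuma-cp run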

Set up the zone control planes

You need the following values to pass to each zone control plane setup:

  • zone – the zone name. An arbitrary string. This value registers the zone control plane with the global control plane.
  • kds-global-address – the external IP and port of the global control plane.

Choose the instructions that match how you run the zone control planes: Kubernetes (kumactl), Helm, or Universal.

Kubernetes (kumactl):

Without zone egress:

  1. On each zone control plane, run:

     kumactl install control-plane \
       --mode=zone \
       --zone=<zone-name> \
       --ingress-enabled \
       --kds-global-address grpcs://<global-kds-address>:5685 | kubectl apply -f -

     where <zone-name> is the same value for all zone control planes in the same zone.
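     For example, using the sample address from the global control plane output above and a hypothetical zone name:

     kumactl install control-plane \
       --mode=zone \
       --zone=zone-1 \
       --ingress-enabled \
       --kds-global-address grpcs://35.226.196.103:5685 | kubectl apply -f -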

With zone egress:

  1. On each zone control plane, run:

     kumactl install control-plane \
       --mode=zone \
       --zone=<zone-name> \
       --ingress-enabled \
       --egress-enabled \
       --kds-global-address grpcs://<global-kds-address>:5685 | kubectl apply -f -

     where <zone-name> is the same value for all zone control planes in the same zone.

Helm:

Without zone egress:

  1. On each zone control plane, run:

     helm install kuma \
       --create-namespace \
       --namespace kuma-system \
       --set controlPlane.mode=zone \
       --set controlPlane.zone=<zone-name> \
       --set ingress.enabled=true \
       --set controlPlane.kdsGlobalAddress=grpcs://<global-kds-address>:5685 kuma/kuma

     where controlPlane.zone is the same value for all zone control planes in the same zone.

With zone egress:

  1. On each zone control plane, run:

     helm install kuma \
       --create-namespace \
       --namespace kuma-system \
       --set controlPlane.mode=zone \
       --set controlPlane.zone=<zone-name> \
       --set ingress.enabled=true \
       --set egress.enabled=true \
       --set controlPlane.kdsGlobalAddress=grpcs://<global-kds-address>:5685 kuma/kuma

     where controlPlane.zone is the same value for all zone control planes in the same zone.
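     If you prefer a values file over --set flags, the same settings map to the chart values below; a minimal sketch with placeholder values:

     # values.yaml (sketch)
     controlPlane:
       mode: zone
       zone: <zone-name>
       kdsGlobalAddress: grpcs://<global-kds-address>:5685
     ingress:
       enabled: true
     egress:
       enabled: true

     helm install kuma --create-namespace --namespace kuma-system -f values.yaml kuma/kuma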

Universal:

  1. On each zone control plane, run:

     KUMA_MODE=zone \
     KUMA_MULTIZONE_ZONE_NAME=<zone-name> \
     KUMA_MULTIZONE_ZONE_GLOBAL_ADDRESS=grpcs://<global-kds-address>:5685 \
     ./kuma-cp run

     where KUMA_MULTIZONE_ZONE_NAME is the same value for all zone control planes in the same zone.

  2. Generate the zone proxy token:

     To register the zone ingress and zone egress with the zone control plane, you need to generate a token first:

     kumactl generate zone-token --zone=<zone-name> --scope egress --scope ingress > /tmp/zone-token

     You can also generate the token with the REST API. Alternatively, you could generate separate tokens for ingress and egress.

  3. Create an ingress data plane proxy configuration to allow kuma-cp services to be exposed for cross-zone communication:

     echo "type: ZoneIngress
     name: ingress-01
     networking:
       address: 127.0.0.1 # address that is routable within the zone
       port: 10000
       advertisedAddress: 10.0.0.1 # an address which other zones can use to consume this zone-ingress
       advertisedPort: 10000 # a port which other zones can use to consume this zone-ingress" > ingress-dp.yaml
  4. Apply the ingress config, passing the IP address of the zone control plane to cp-address:

     kuma-dp run \
       --proxy-type=ingress \
       --cp-address=https://<kuma-cp-address>:5678 \
       --dataplane-token-file=/tmp/zone-token \
       --dataplane-file=ingress-dp.yaml

     If the zone ingress runs on a different machine than the zone control plane, copy the CA certificate file from the zone control plane (located at ~/.kuma/kuma-cp.crt) to a location the zone ingress can read (e.g. /tmp/kuma-cp.crt), and provide the certificate path with the --ca-cert-file argument:

     kuma-dp run \
       --proxy-type=ingress \
       --cp-address=https://<kuma-cp-address>:5678 \
       --dataplane-token-file=/tmp/zone-token \
       --ca-cert-file=/tmp/kuma-cp.crt \
       --dataplane-file=ingress-dp.yaml
  5. Optional: deploy zone egress.

     Create a ZoneEgress data plane proxy configuration so that traffic to other zones or external services can be proxied through the zone egress:

     echo "type: ZoneEgress
     name: zoneegress-01
     networking:
       address: 127.0.0.1 # address that is routable within the zone
       port: 10002" > zoneegress-dataplane.yaml
  6. Apply the egress config, passing the IP address of the zone control plane to cp-address:

     kuma-dp run \
       --proxy-type=egress \
       --cp-address=https://<kuma-cp-address>:5678 \
       --dataplane-token-file=/tmp/zone-token \
       --dataplane-file=zoneegress-dataplane.yaml

Verify control plane connectivity

You can run kumactl get zones, or check the list of zones in the web UI for the global control plane, to verify zone control plane connections.

When a zone control plane connects to the global control plane, the Zone resource is created automatically in the global control plane.
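For example, to list the Zone resources registered with the global control plane:

  # On a Kubernetes global control plane cluster
  kubectl get zones

  # Or with kumactl configured against the global control plane API
  kumactl get zones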

The Zone Ingress tab of the web UI also lists zone control planes that you deployed with zone ingress.

Ensure mTLS is enabled on the multi-zone meshes

mTLS is mandatory to enable cross-zone service communication. You can configure mTLS in your mesh configuration as described in the mTLS section. This is required because Kuma uses the Server Name Indication (SNI) field of the TLS protocol to pass routing information across zones.
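For example, a minimal mesh with a builtin mTLS backend on a Kubernetes global control plane could look like the sketch below; the backend name ca-1 matches the ServiceInsight output shown later, adjust it to your setup:

  apiVersion: kuma.io/v1alpha1
  kind: Mesh
  metadata:
    name: default
  spec:
    mtls:
      enabledBackend: ca-1
      backends:
        - name: ca-1
          type: builtin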

Cross-zone communication details

For this example, assume a service running in a Kubernetes zone that exposes a kuma.io/service with the value echo-server_echo-example_svc_1010. The following examples are run from another zone that wants to reach that service.

To view the list of service names available, run:

  kubectl get serviceinsight all-services-default -oyaml

  apiVersion: kuma.io/v1alpha1
  kind: ServiceInsight
  mesh: default
  metadata:
    name: all-services-default
  spec:
    services:
      echo-server_echo-example_svc_1010:
        dataplanes:
          online: 1
          total: 1
        issuedBackends:
          ca-1: 1
        status: online

The following are some examples of different ways to address echo-server in the echo-example Namespace in a multi-zone mesh.

To send a request in the same zone, you can rely on Kubernetes DNS and use the usual Kubernetes hostnames and ports:

  curl http://echo-server:1010

Requests are distributed round robin between zones. You can use locality-aware load balancing to keep requests in the same zone.
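As a reference, locality-aware load balancing has historically been a Mesh-level flag; a minimal sketch assuming a Kuma version where the Mesh resource exposes routing.localityAwareLoadBalancing (newer versions may offer a dedicated policy instead):

  apiVersion: kuma.io/v1alpha1
  kind: Mesh
  metadata:
    name: default
  spec:
    routing:
      localityAwareLoadBalancing: true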

To send a request to any zone, you can use the generated kuma.io/service and Kuma DNS:

  curl http://echo-server_echo-example_svc_1010.mesh:80

Kuma DNS also supports RFC 1123 compatible names, where underscores are replaced with dots:

  curl http://echo-server.echo-example.svc.1010.mesh:80

On Universal, you can list the available services with kumactl:

  kumactl inspect services

  SERVICE                              STATUS   DATAPLANES
  echo-server_echo-example_svc_1010    Online   1/1

To consume the service in a Universal deployment without transparent proxy, add the following outbound to your dataplane configuration:

  outbound:
    - port: 20012
      tags:
        kuma.io/service: echo-server_echo-example_svc_1010

From the machine where that data plane proxy runs, you can now reach the service at localhost:20012.

Alternatively, if you configure transparent proxy you can just call echo-server_echo-example_svc_1010.mesh without defining an outbound section.
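Putting the two options side by side:

  # Without transparent proxy, via the outbound defined above
  curl http://localhost:20012

  # With transparent proxy, via Kuma DNS
  curl http://echo-server_echo-example_svc_1010.mesh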

For security reasons it’s not possible to customize the kuma.io/service in Kubernetes.

If you want to run the same service on both Universal and Kubernetes, make sure the Universal data plane proxy's inbound uses the same kuma.io/service tag as the one generated in Kubernetes (see the sketch below), or leverage TrafficRoute.
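A minimal sketch of such a Universal Dataplane resource; the name, address, and port are placeholders for your own service:

  type: Dataplane
  mesh: default
  name: echo-server-universal-01
  networking:
    address: 192.168.0.2
    inbound:
      - port: 1010
        tags:
          kuma.io/service: echo-server_echo-example_svc_1010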

Delete a zone

To delete a Zone we must first shut down the corresponding Kuma zone control plane instances. As long as the Zone CP is running this will not be possible, and Kuma returns a validation error like:

  zone: unable to delete Zone, Zone CP is still connected, please shut it down first

When the Zone CP is fully disconnected and shut down, then the Zone can be deleted. All corresponding resources (like Dataplane and DataplaneInsight) will be deleted automatically as well.

On Kubernetes (against the global control plane cluster):

  kubectl delete zone zone-1

On Universal (with kumactl pointed at the global control plane):

  kumactl delete zone zone-1

Disable a zone

Change the enabled property value to false in the global control plane:

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: Zone
  metadata:
    name: zone-1
  spec:
    enabled: false

On Universal:

  type: Zone
  name: zone-1
  spec:
    enabled: false

With this setting, the global control plane will stop exchanging configuration with this zone. As a result, the zone ingress for zone-1 will be deleted from the other zones and traffic won't be routed to it anymore. The zone will show as Offline in the GUI and CLI.