Multi-zone deployment

About

Kuma supports running your service mesh in multiple zones. It is even possible to run with a mix of Kubernetes and Universal zones. Your mesh environment can include multiple isolated service meshes (multi-tenancy), and workloads running in different regions, on different clouds, or in different datacenters. A zone can be a Kubernetes cluster, a VPC, or any other deployment you need to include in the same distributed mesh environment.


How it works

Kuma manages service connectivity – establishing and maintaining connections across zones in the mesh – with the zone ingress and with a DNS resolver.

The DNS resolver is embedded in each data plane proxy and configured through XDS. It resolves each service address to a virtual IP address for all service-to-service communication.
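For example, from a workload that already has a Kuma sidecar, you could check how a service name resolves to its virtual IP. This is only an illustration, using the echo-server service that appears in the examples later on this page:

  <kuma-enabled-pod>$ nslookup echo-server_echo-example_svc_1010.mesh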

The global control plane and the zone control planes communicate to synchronize resources such as Kuma policy configurations over Kuma Discovery Service (KDS), which is a protocol based on xDS.

A zone ingress is not an API gateway. Instead, it is specific to internal cross-zone communication within the mesh. API gateways are supported in Kuma gateway mode which can be deployed in addition to zone ingresses.

Components of a multi-zone deployment

A multi-zone deployment includes:

  • The global control plane:
    • Accept connections only from zone control planes.
    • Accept creation and changes to policies that will be applied to the data plane proxies.
    • Send policies down to zone control-planes.
    • Send zone ingresses down to the zone control planes.
    • Keep an inventory of all dataplanes running in all zones (this is only done for observability but is not required for operations).
    • Reject connections from data plane proxies.
  • The zone control planes:
    • Accept connections from data plane proxies started within this zone.
    • Receive policy updates from the global control plane.
    • Send data plane proxy and zone ingress changes to the global control plane.
    • Compute and send configurations using XDS to the local data plane proxies.
    • Update the zone ingress with the list of services that exist in the zone.
    • Reject policy changes that do not come from global.
  • The data plane proxies:
    • Connect to the local zone control plane.
    • Receive configurations using XDS from the local zone control plane.
    • Connect to other local data plane proxies.
    • Connect to zone ingresses to send cross-zone traffic.
    • Receive traffic from local data plane proxies and local zone ingresses.
  • The zone ingress:
    • Receive XDS configuration from the local zone control plane.
    • Proxy traffic from other zone data plane proxies to local data plane proxies.

Limitations

It is not possible to route cross-zone traffic to only a subset of the data plane proxies that share the same kuma.io/service tag.

This means that complex VirtualOutbound definitions which select such a subset will not route any traffic across zones.

Usage

To set up a multi-zone deployment we will need to:

  • Set up the global control plane
  • Set up the zone control planes
  • Verify control plane connectivity
  • Set up cross-zone communication between data plane proxies

Set up the global control plane

The global control plane must run on a dedicated cluster, and cannot be assigned to a zone.

The global control plane on Kubernetes must reside on its own Kubernetes cluster, to keep its resources separate from the resources the zone control planes create during synchronization.

  1. On Kubernetes, run:

    kumactl install control-plane --mode=global | kubectl apply -f -
  2. Find the external IP and port of the global-remote-sync service in the kuma-system namespace:

    kubectl get services -n kuma-system
    NAMESPACE     NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                  AGE
    kuma-system   global-remote-sync   LoadBalancer   10.105.9.10     35.226.196.103   5685:30685/TCP                                                           89s
    kuma-system   kuma-control-plane   ClusterIP      10.105.12.133   <none>           5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP   90s

    In this example the value is 35.226.196.103:5685. You pass this as the value of <global-kds-address> when you set up the zone control planes.

  3. Set the controlPlane.mode value to global in the chart (values.yaml), then install. On the command line, run:

    helm install kuma --namespace kuma-system --set controlPlane.mode=global kuma/kuma

    Or you can edit the chart and pass the file to the helm install kuma command. To get the default values, run:

    helm show values kuma/kuma
  4. Find the external IP and port of the global-remote-sync service in the kuma-system namespace:

    kubectl get services -n kuma-system
    NAMESPACE     NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                  AGE
    kuma-system   global-remote-sync   LoadBalancer   10.105.9.10     35.226.196.103   5685:30685/TCP                                                           89s
    kuma-system   kuma-control-plane   ClusterIP      10.105.12.133   <none>           5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP   90s

    By default, it’s exposed on port 5685. In this example the value is 35.226.196.103:5685. You pass this as the value of <global-kds-address> when you set up the zone control planes.

  5. On Universal, set up the global control plane by running kuma-cp with the KUMA_MODE environment variable set to global:

    KUMA_MODE=global kuma-cp run

Set up the zone control planes

You need the following values to pass to each zone control plane setup:

  • zone – the zone name. An arbitrary string. This value registers the zone control plane with the global control plane.
  • kds-global-address – the external IP and port of the global control plane.

Follow the steps below for your environment: Kubernetes (kumactl), Helm, or Universal.

  1. On Kubernetes, run on each zone control plane:

    kumactl install control-plane \
      --mode=zone \
      --zone=<zone-name> \
      --ingress-enabled \
      --kds-global-address grpcs://<global-kds-address> | kubectl apply -f -

    where zone is the same value for all zone control planes in the same zone.

  2. With Helm, run on each zone control plane:

    helm install kuma \
      --namespace kuma-system \
      --set controlPlane.mode=zone \
      --set controlPlane.zone=<zone-name> \
      --set ingress.enabled=true \
      --set controlPlane.kdsGlobalAddress=grpcs://<global-kds-address> kuma/kuma

    where controlPlane.zone is the same value for all zone control planes in the same zone.

  3. On Universal, run on each zone control plane:

    KUMA_MODE=zone \
    KUMA_MULTIZONE_REMOTE_ZONE=<zone-name> \
    KUMA_MULTIZONE_REMOTE_GLOBAL_ADDRESS=grpcs://<global-kds-address> \
    ./kuma-cp run

    where KUMA_MULTIZONE_REMOTE_ZONE is the same value for all zone control planes in the same zone.

  4. On Universal, generate the zone ingress token:

    To register the zone ingress with the zone control plane, we first need to generate a zone ingress token:

    kumactl generate zone-ingress-token --zone=<zone-name> > /tmp/ingress-token

    You can also generate the token with the REST API.

  5. Create a zone ingress data plane proxy configuration so that the zone's services can be exposed for cross-zone communication:

    echo "type: ZoneIngress
    name: ingress-01
    networking:
      address: 127.0.0.1 # address that is routable within the zone
      port: 10000
      advertisedAddress: 10.0.0.1 # an address which other zones can use to consume this zone-ingress
      advertisedPort: 10000 # a port which other zones can use to consume this zone-ingress" > ingress-dp.yaml
  6. Apply the ingress config, passing the IP address of the zone control plane to cp-address:

    kuma-dp run \
      --proxy-type=ingress \
      --cp-address=https://<kuma-cp-address>:5678 \
      --dataplane-token-file=/tmp/ingress-token \
      --dataplane-file=ingress-dp.yaml

Verify control plane connectivity

You can run kumactl get zones, or check the list of zones in the web UI for the global control plane, to verify zone control plane connections.

When a zone control plane connects to the global control plane, the Zone resource is created automatically in the global control plane.

The Ingress tab of the web UI also lists zone control planes that you deployed with Ingress.
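For example, assuming kumactl is configured to talk to the global control plane (and noting that the exact resource names available can vary by Kuma version), a quick CLI check might be:

  kumactl get zones           # zones registered with the global control plane
  kumactl get zone-ingresses  # zone ingresses reported by each zone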

Set up cross-zone communication

Enable mTLS

You must enable mTLS for cross-zone communication.

Kuma uses the Server Name Indication (SNI) field, part of the TLS protocol, to pass routing information across zones. Thus, mTLS is mandatory to enable cross-zone service communication.
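For example, a minimal sketch that enables mTLS with a builtin CA on the default mesh, applied to the global control plane (Kubernetes format shown; the backend name ca-1 is just an example):

  apiVersion: kuma.io/v1alpha1
  kind: Mesh
  metadata:
    name: default
  spec:
    mtls:
      enabledBackend: ca-1
      backends:
        - name: ca-1
          type: builtin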

Ensure Zone Ingress has an external advertised address and port

Cross-zone communication between services is available only if Zone Ingress has an external advertised address and port.

On Kubernetes, if a Service of type NodePort or LoadBalancer is attached to the zone ingress data plane proxy, Kuma will automatically retrieve the external address and port.

A Service of type LoadBalancer is automatically created when installing Kuma with kumactl install control-plane or Helm.

Depending on your load balancer implementation, you might need to wait a few minutes for Kuma to get the address.

You can also set this address and port manually by using the kuma.io/ingress-public-address and kuma.io/ingress-public-port annotations.
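As a sketch, assuming these annotations are set on the Kubernetes Service that exposes the zone ingress (the service name and values below are placeholders; check where your Kuma version expects the annotations):

  kubectl annotate service <zone-ingress-service> -n kuma-system \
    kuma.io/ingress-public-address=<public-address> \
    kuma.io/ingress-public-port=<public-port>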

On Universal, set the advertisedAddress and advertisedPort fields in the ZoneIngress definition:

  type: ZoneIngress
  name: ingress-01
  networking:
    address: 127.0.0.1 # address that is routable within the zone
    port: 10000
    advertisedAddress: 10.0.0.1 # an address which other zones can use to consume this zone-ingress
    advertisedPort: 10000 # a port which other zones can use to consume this zone-ingress

This address doesn’t need to be public to the internet. It only needs to be reachable from all dataplane proxies in other zones.

Cross-zone communication details

To view the list of service names available for cross-zone communication, run:

  kubectl get dataplanes -n echo-example -o yaml | grep kuma.io/service
    kuma.io/service: echo-server_echo-example_svc_1010

To consume the example service only within the same Kuma zone, you can run:

  <kuma-enabled-pod>$ curl http://echo-server:1010

To consume the example service across all zones in your Kuma deployment (that is, from endpoints ultimately connecting to the same global control plane), you can run either of:

  <kuma-enabled-pod>$ curl http://echo-server_echo-example_svc_1010.mesh:80
  <kuma-enabled-pod>$ curl http://echo-server.echo-example.svc.1010.mesh:80

And since HTTP clients assume the standard default port 80, you can omit the port value and run either of:

  <kuma-enabled-pod>$ curl http://echo-server_echo-example_svc_1010.mesh
  <kuma-enabled-pod>$ curl http://echo-server.echo-example.svc.1010.mesh

Because Kuma on Kubernetes relies on a transparent proxy, kuma-dp listens on port 80 for all virtual IPs that are assigned to services in the .mesh DNS zone. The DNS names are rendered RFC-compliant by replacing underscores with dots. A more flexible setup of hostnames and ports can be configured using Virtual Outbound.

With a hybrid deployment, running in both Kubernetes and Universal mode, the service tag should be the same in both environments (e.g. echo-server_echo-example_svc_1010):

  type: Dataplane
  mesh: default
  name: backend-02
  networking:
    address: 127.0.0.1
    inbound:
      - port: 2010
        servicePort: 1010
        tags:
          kuma.io/service: echo-server_echo-example_svc_1010

If the service is only meant to run in Universal mode, kuma.io/service does not have to follow the {name}_{namespace}_svc_{port} convention.

To consume a distributed service in a Universal deployment, where the application address is http://localhost:20012:

  type: Dataplane
  mesh: default
  name: web-02
  networking:
    address: 127.0.0.1
    inbound:
      - port: 10000
        servicePort: 10001
        tags:
          kuma.io/service: web
    outbound:
      - port: 20012
        tags:
          kuma.io/service: echo-server_echo-example_svc_1010

Alternatively, you can just call echo-server_echo-example_svc_1010.mesh without defining the outbound section if you configure a transparent proxy.

The Kuma DNS service format (e.g. echo-server_kuma-test_svc_1010.mesh) is a composition of the Kubernetes service name (echo-server), the namespace (kuma-test), a fixed string (svc), and the service port (1010). The service is resolvable in the DNS zone .mesh where the Kuma DNS service is hooked.

Delete a zone

To delete a Zone, we must first shut down the corresponding Kuma zone control plane instances. As long as the Remote CP is running, this will not be possible, and Kuma returns a validation error like:

  zone: unable to delete Zone, Remote CP is still connected, please shut it down first

Once the Remote CP is fully disconnected and shut down, the Zone can be deleted. All corresponding resources (like Dataplane and DataplaneInsight) will be deleted automatically as well.

  kubectl delete zone zone-1    # Kubernetes
  kumactl delete zone zone-1    # Universal

Disable a zone

Change the enabled property value to false in the global control-plane:

  # Kubernetes
  apiVersion: kuma.io/v1alpha1
  kind: Zone
  metadata:
    name: zone-1
  spec:
    enabled: false

  # Universal
  type: Zone
  name: zone-1
  spec:
    enabled: false
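For example, assuming the resource above is saved as zone-1-disabled.yaml (an illustrative file name) and kubectl/kumactl point at the global control plane:

  kubectl apply -f zone-1-disabled.yaml    # Kubernetes
  kumactl apply -f zone-1-disabled.yaml    # Universal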

With this setting, the global control plane will stop exchanging configuration with this zone. As a result, the zone ingress from zone-1 will be deleted from the other zones and traffic won’t be routed to it anymore. The zone will show as Offline in the GUI and CLI.

Failure modes

Global control-plane offline

  • Policy updates will be impossible
  • Changes to the service list of each zone will not propagate:
    • New services will not be discoverable in other zones
    • Services removed from a zone will still appear available in other zones
  • You won’t be able to disable or delete a zone

Note that neither local nor cross-zone application traffic is impacted by this failure case. Data plane proxy changes will still be propagated within their zones.

Zone control-plane offline

  • New data plane proxies won’t be able to join the mesh.
  • Data-plane proxy configuration will not be updated.
  • Communication between data plane proxies will still work.
  • Cross zone communication will still work.
  • Other zones are unaffected.

You can think of this failure case as “Freezing” the zone mesh configuration. Communication will still work but changes will not be reflected on existing data plane proxies.

Communication between Global and Zone control-plane failing

This can happen because of misconfiguration or network connectivity issues between the control planes.

  • Operations inside the zone will happen correctly (dataplane proxies can join and leave and all configuration will be updated and sent correctly).
  • Policy changes will not be propagated to the zone control-plane.
  • Zone ingress and dataplane changes will not be propagated to the global control-plane.
    • The global inventory view of the data plane proxies will be outdated (this only impacts observability).
    • Remote zones will not see new services registered inside this zone
    • Remote zones will not see services no longer running inside this zone
    • Remote zones will not see changes in number of instances of each service running in the local zone.
  • The global control plane will not send zone ingress changes from other zones down to this zone:
    • Local data plane proxies will not see new services registered in other zones
    • Local data plane proxies will not see services no longer running in other zones
    • Local data plane proxies will not see changes in number of instances of each service running in other zones.

Note that neither local nor cross-zone application traffic is impacted by this failure case.

Communication between 2 zones failing

This can happen when there are network connectivity issues between the data plane proxies of one zone and the zone ingresses of another, or when all zone ingresses of a zone are down.

  • Communication and operation within each zone is unaffected.
  • Communication between the two zones will fail.

With the right resiliency setup (Retries, Probes, Locality Aware Load Balancing, Circuit Breakers), the failing zone can be quickly severed and traffic re-routed to another zone.
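As a sketch of one such resiliency policy, a Retry applied between all services could look like this in Universal format (the name and values are illustrative, not recommendations):

  type: Retry
  mesh: default
  name: retry-all
  sources:
    - match:
        kuma.io/service: '*'
  destinations:
    - match:
        kuma.io/service: '*'
  conf:
    http:
      numRetries: 3
      perTryTimeout: 200ms
      backOff:
        baseInterval: 20ms
        maxInterval: 1s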