Locality failover

Follow this guide to configure your mesh for locality failover.

Before proceeding, be sure to complete the steps under Before you begin.

In this task, you will use the Sleep pod in region1.zone1 as the source of requests to the HelloWorld service. You will then trigger failures that will cause failover between localities in the following sequence:

Figure: Locality failover sequence (region1.zone1 → region1.zone2 → region2.zone3 → region3.zone4)

Internally, Envoy priorities are used to control failover. For traffic originating from the Sleep pod (in region1.zone1), these priorities are assigned as follows:

Priority  Locality       Details
0         region1.zone1  Region, zone, and sub-zone all match.
1         None           Since this task doesn't use sub-zones, there are no matches for a different sub-zone.
2         region1.zone2  Different zone within the same region.
3         region2.zone3  No match; however, failover is defined for region1->region2.
4         region3.zone4  No match and no failover defined for region1->region3.
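
If you want to see these priorities as they are pushed to the Sleep pod's sidecar, one option (an extra check, not part of this task) is to dump the proxy's endpoint configuration with istioctl after applying the DestinationRule in the next section. This is only a sketch; it assumes istioctl is installed and that the JSON endpoint dump includes the per-locality priority field:

  $ # Illustrative only: requires istioctl; run after applying the DestinationRule below
  $ istioctl --context="${CTX_R1_Z1}" proxy-config endpoints \
      "$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
      app=sleep -o jsonpath='{.items[0].metadata.name}')" -n sample \
      --cluster "outbound|5000||helloworld.sample.svc.cluster.local" -o json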

Configure locality failover

Apply a DestinationRule that configures the following:

  • Outlier detection for the HelloWorld service. This is required in order for failover to function properly. In particular, it configures the sidecar proxies to know when endpoints for a service are unhealthy, eventually triggering a failover to the next locality.

  • Failover policy between regions. This ensures that failover beyond a region boundary will behave predictably.

  • Connection Pool policy that forces each HTTP request to use a new connection. This task utilizes Envoy’s drain function to force a failover to the next locality. Once drained, Envoy will reject new connection requests. Since each request uses a new connection, this results in failover immediately following a drain. This configuration is used for demonstration purposes only.

  $ kubectl --context="${CTX_PRIMARY}" apply -n sample -f - <<EOF
  apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    name: helloworld
  spec:
    host: helloworld.sample.svc.cluster.local
    trafficPolicy:
      connectionPool:
        http:
          maxRequestsPerConnection: 1
      loadBalancer:
        simple: ROUND_ROBIN
        localityLbSetting:
          enabled: true
          failover:
            - from: region1
              to: region2
      outlierDetection:
        consecutive5xxErrors: 1
        interval: 1s
        baseEjectionTime: 1m
  EOF
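
To confirm the rule was accepted, you can read it back from the cluster (a routine check, not part of the original steps):

  $ kubectl --context="${CTX_PRIMARY}" get destinationrule helloworld -n sample -o yaml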

Verify traffic stays in region1.zone1

Call the HelloWorld service from the Sleep pod:

  $ kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
      "$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
      app=sleep -o jsonpath='{.items[0].metadata.name}')" \
      -- curl -sSL helloworld.sample:5000/hello
  Hello version: region1.zone1, instance: helloworld-region1.zone1-86f77cd7b-cpxhv

Verify that the version in the response is region1.zone1.

Repeat this several times and verify that the response is always the same.
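
Rather than re-running the command by hand, a small shell loop does the same thing; this is just a convenience sketch wrapping the command above:

  $ # Convenience sketch: repeats the same request five times
  $ for i in $(seq 1 5); do
      kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
        "$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
        app=sleep -o jsonpath='{.items[0].metadata.name}')" \
        -- curl -sSL helloworld.sample:5000/hello
    done

Every response should report region1.zone1.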

Failover to region1.zone2

Next, trigger a failover to region1.zone2. To do this, drain the Envoy sidecar proxy for HelloWorld in region1.zone1:

  $ kubectl --context="${CTX_R1_Z1}" exec \
      "$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l app=helloworld \
      -l version=region1.zone1 -o jsonpath='{.items[0].metadata.name}')" \
      -n sample -c istio-proxy -- curl -sSL -X POST 127.0.0.1:15000/drain_listeners
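
Before calling the service again, you can optionally confirm the proxy is draining by querying Envoy's admin stats from the same container. This is an illustrative check, not part of the original task, and it assumes the standard Envoy gauge name listener_manager.total_listeners_draining:

  $ # Illustrative only: assumes Envoy's listener_manager.total_listeners_draining gauge
  $ kubectl --context="${CTX_R1_Z1}" exec \
      "$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l app=helloworld \
      -l version=region1.zone1 -o jsonpath='{.items[0].metadata.name}')" \
      -n sample -c istio-proxy -- curl -sS 127.0.0.1:15000/stats | grep total_listeners_draining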

Call the HelloWorld service from the Sleep pod:

  $ kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
      "$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
      app=sleep -o jsonpath='{.items[0].metadata.name}')" \
      -- curl -sSL helloworld.sample:5000/hello
  Hello version: region1.zone2, instance: helloworld-region1.zone2-86f77cd7b-cpxhv

The first call will fail, which triggers the failover. Repeat the command several more times and verify that the version in the response is always region1.zone2.
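
You can also watch this from the caller's side: after the failed call, the Sleep proxy's endpoint table should show the region1.zone1 endpoint failing its outlier check. This is a sketch, assuming istioctl is installed and that the OUTLIER CHECK column in its output reflects ejection status:

  $ # Illustrative only: list HelloWorld endpoints as seen by the Sleep pod's sidecar
  $ istioctl --context="${CTX_R1_Z1}" proxy-config endpoints \
      "$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
      app=sleep -o jsonpath='{.items[0].metadata.name}')" -n sample \
      --cluster "outbound|5000||helloworld.sample.svc.cluster.local"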

Failover to region2.zone3

Now trigger a failover to region2.zone3. As you did previously, drain the Envoy sidecar proxy for HelloWorld in region1.zone2:

  $ kubectl --context="${CTX_R1_Z2}" exec \
      "$(kubectl get pod --context="${CTX_R1_Z2}" -n sample -l app=helloworld \
      -l version=region1.zone2 -o jsonpath='{.items[0].metadata.name}')" \
      -n sample -c istio-proxy -- curl -sSL -X POST 127.0.0.1:15000/drain_listeners

Call the HelloWorld service from the Sleep pod:

  $ kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
      "$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
      app=sleep -o jsonpath='{.items[0].metadata.name}')" \
      -- curl -sSL helloworld.sample:5000/hello
  Hello version: region2.zone3, instance: helloworld-region2.zone3-86f77cd7b-cpxhv

The first call will fail, which triggers the failover. Repeat the command several more times and verify that the version in the response is always region2.zone3.

Failover to region3.zone4

Now trigger a failover to region3.zone4. As you did previously, drain the Envoy sidecar proxy for HelloWorld in region2.zone3:

  $ kubectl --context="${CTX_R2_Z3}" exec \
      "$(kubectl get pod --context="${CTX_R2_Z3}" -n sample -l app=helloworld \
      -l version=region2.zone3 -o jsonpath='{.items[0].metadata.name}')" \
      -n sample -c istio-proxy -- curl -sSL -X POST 127.0.0.1:15000/drain_listeners

Call the HelloWorld service from the Sleep pod:

  $ kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
      "$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
      app=sleep -o jsonpath='{.items[0].metadata.name}')" \
      -- curl -sSL helloworld.sample:5000/hello
  Hello version: region3.zone4, instance: helloworld-region3.zone4-86f77cd7b-cpxhv

The first call will fail, which triggers the failover. Repeat the command several more times and verify that the version in the response is always region3.zone4.

Congratulations! You successfully configured locality failover!

Next steps

Clean up resources and files from this task.