Circuit Breaking

This task shows you how to configure circuit breaking for connections, requests, and outlier detection.

Circuit breaking is an important pattern for creating resilient microservice applications. Circuit breaking allows you to write applications that limit the impact of failures, latency spikes, and other undesirable effects of network peculiarities.

In this task, you will configure circuit breaking rules and then test the configuration by intentionally “tripping” the circuit breaker.

Before you begin

  • Set up Istio by following the instructions in the Installation guide.

  • Start the httpbin sample.

    If you have enabled automatic sidecar injection, deploy the httpbin service:

    $ kubectl apply -f @samples/httpbin/httpbin.yaml@

    Otherwise, you have to manually inject the sidecar before deploying the httpbin application:

    $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@)

The httpbin application serves as the backend service for this task.
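
Before continuing, you can optionally verify that the httpbin pod is up and has its sidecar injected. The check below is a small sketch that assumes the sample's standard app=httpbin label and the default namespace; with the sidecar injected, the pod should report two ready containers (2/2):

    $ kubectl get pods -l app=httpbin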

Configuring the circuit breaker

  1. Create a destination rule to apply circuit breaking settings when calling the httpbin service:

    If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy with mode: ISTIO_MUTUAL to the DestinationRule before applying it; otherwise, requests will generate 503 errors. (A sketch of such a policy is shown after the verification step below.)

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: httpbin
    spec:
      host: httpbin
      trafficPolicy:
        connectionPool:
          tcp:
            maxConnections: 1
          http:
            http1MaxPendingRequests: 1
            maxRequestsPerConnection: 1
        outlierDetection:
          consecutiveErrors: 1
          interval: 1s
          baseEjectionTime: 3m
          maxEjectionPercent: 100
    EOF
  2. Verify the destination rule was created correctly:

    $ kubectl get destinationrule httpbin -o yaml
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    ...
    spec:
      host: httpbin
      trafficPolicy:
        connectionPool:
          http:
            http1MaxPendingRequests: 1
            maxRequestsPerConnection: 1
          tcp:
            maxConnections: 1
        outlierDetection:
          baseEjectionTime: 3m
          consecutiveErrors: 1
          interval: 1s
          maxEjectionPercent: 100
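
If your mesh runs with mutual TLS enabled (see the note in step 1), the same DestinationRule needs a tls setting under trafficPolicy. The following is a minimal sketch of that variant, assuming mesh-wide mutual TLS and the ISTIO_MUTUAL mode; adapt it to your installation before applying:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: httpbin
    spec:
      host: httpbin
      trafficPolicy:
        tls:
          mode: ISTIO_MUTUAL   # added for meshes with mutual TLS enabled
        connectionPool:
          tcp:
            maxConnections: 1
          http:
            http1MaxPendingRequests: 1
            maxRequestsPerConnection: 1
        outlierDetection:
          consecutiveErrors: 1
          interval: 1s
          baseEjectionTime: 3m
          maxEjectionPercent: 100
    EOF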

Adding a client

Create a client to send traffic to the httpbin service. The client is a simple load-testing tool called fortio. Fortio lets you control the number of connections, concurrency, and delays for outgoing HTTP calls. You will use this client to “trip” the circuit breaker policies you set in the DestinationRule.

  1. Inject the client with the Istio sidecar proxy so network interactions are governed by Istio.

    If you have enabled automatic sidecar injection, deploy the fortio service:

    $ kubectl apply -f @samples/httpbin/sample-client/fortio-deploy.yaml@

    Otherwise, you have to manually inject the sidecar before deploying the fortio application:

    $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/sample-client/fortio-deploy.yaml@)
  2. Log in to the client pod and use the fortio tool to call httpbin. Pass in curl to indicate that you just want to make one call:

    $ export FORTIO_POD=$(kubectl get pods -lapp=fortio -o 'jsonpath={.items[0].metadata.name}')
    $ kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio curl -quiet http://httpbin:8000/get
    HTTP/1.1 200 OK
    server: envoy
    date: Tue, 25 Feb 2020 20:25:52 GMT
    content-type: application/json
    content-length: 586
    access-control-allow-origin: *
    access-control-allow-credentials: true
    x-envoy-upstream-service-time: 36

    {
      "args": {},
      "headers": {
        "Content-Length": "0",
        "Host": "httpbin:8000",
        "User-Agent": "fortio.org/fortio-1.3.1",
        "X-B3-Parentspanid": "8fc453fb1dec2c22",
        "X-B3-Sampled": "1",
        "X-B3-Spanid": "071d7f06bc94943c",
        "X-B3-Traceid": "86a929a0e76cda378fc453fb1dec2c22",
        "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=68bbaedefe01ef4cb99e17358ff63e92d04a4ce831a35ab9a31d3c8e06adb038;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
      },
      "origin": "127.0.0.1",
      "url": "http://httpbin:8000/get"
    }

You can see the request succeeded! Now, it’s time to break something.

Tripping the circuit breaker

In the DestinationRule settings, you specified maxConnections: 1 and http1MaxPendingRequests: 1. These rules mean that if you open more than one connection or issue more than one request concurrently, you should see some failures as the istio-proxy opens the circuit and rejects further requests and connections.
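
Before generating load, you can optionally check that these thresholds were pushed to the client's Envoy sidecar. The command below is a sketch that assumes istioctl is installed and the default namespace is used; the exact output format varies between Istio versions, but you should see a circuitBreakers section containing the limits you configured:

    $ istioctl proxy-config cluster "$FORTIO_POD" --fqdn httpbin.default.svc.cluster.local -o json | grep -A 5 circuitBreakers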

  1. Call the service with two concurrent connections (-c 2) and send 20 requests (-n 20):

    $ kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
    20:33:46 I logger.go:97> Log level is now 3 Warning (was 2 Info)
    Fortio 1.3.1 running at 0 queries per second, 6->6 procs, for 20 calls: http://httpbin:8000/get
    Starting at max qps with 2 thread(s) [gomax 6] for exactly 20 calls (10 per thread + 0)
    20:33:46 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:33:47 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:33:47 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    Ended after 59.8524ms : 20 calls. qps=334.16
    Aggregated Function Time : count 20 avg 0.0056869 +/- 0.003869 min 0.000499 max 0.0144329 sum 0.113738
    # range, mid point, percentile, count
    >= 0.000499 <= 0.001 , 0.0007495 , 10.00, 2
    > 0.001 <= 0.002 , 0.0015 , 15.00, 1
    > 0.003 <= 0.004 , 0.0035 , 45.00, 6
    > 0.004 <= 0.005 , 0.0045 , 55.00, 2
    > 0.005 <= 0.006 , 0.0055 , 60.00, 1
    > 0.006 <= 0.007 , 0.0065 , 70.00, 2
    > 0.007 <= 0.008 , 0.0075 , 80.00, 2
    > 0.008 <= 0.009 , 0.0085 , 85.00, 1
    > 0.011 <= 0.012 , 0.0115 , 90.00, 1
    > 0.012 <= 0.014 , 0.013 , 95.00, 1
    > 0.014 <= 0.0144329 , 0.0142165 , 100.00, 1
    # target 50% 0.0045
    # target 75% 0.0075
    # target 90% 0.012
    # target 99% 0.0143463
    # target 99.9% 0.0144242
    Sockets used: 4 (for perfect keepalive, would be 2)
    Code 200 : 17 (85.0 %)
    Code 503 : 3 (15.0 %)
    Response Header Sizes : count 20 avg 195.65 +/- 82.19 min 0 max 231 sum 3913
    Response Body/Total Sizes : count 20 avg 729.9 +/- 205.4 min 241 max 817 sum 14598
    All done 20 calls (plus 0 warmup) 5.687 ms avg, 334.2 qps

    It’s interesting to see that almost all requests made it through! The istio-proxy does allow for some leeway.

    Code 200 : 17 (85.0 %)
    Code 503 : 3 (15.0 %)
  2. Bring the number of concurrent connections up to 3:

    $ kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
    20:32:30 I logger.go:97> Log level is now 3 Warning (was 2 Info)
    Fortio 1.3.1 running at 0 queries per second, 6->6 procs, for 30 calls: http://httpbin:8000/get
    Starting at max qps with 3 thread(s) [gomax 6] for exactly 30 calls (10 per thread + 0)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    Ended after 51.9946ms : 30 calls. qps=576.98
    Aggregated Function Time : count 30 avg 0.0040001633 +/- 0.003447 min 0.0004298 max 0.015943 sum 0.1200049
    # range, mid point, percentile, count
    >= 0.0004298 <= 0.001 , 0.0007149 , 16.67, 5
    > 0.001 <= 0.002 , 0.0015 , 36.67, 6
    > 0.002 <= 0.003 , 0.0025 , 50.00, 4
    > 0.003 <= 0.004 , 0.0035 , 60.00, 3
    > 0.004 <= 0.005 , 0.0045 , 66.67, 2
    > 0.005 <= 0.006 , 0.0055 , 76.67, 3
    > 0.006 <= 0.007 , 0.0065 , 83.33, 2
    > 0.007 <= 0.008 , 0.0075 , 86.67, 1
    > 0.008 <= 0.009 , 0.0085 , 90.00, 1
    > 0.009 <= 0.01 , 0.0095 , 96.67, 2
    > 0.014 <= 0.015943 , 0.0149715 , 100.00, 1
    # target 50% 0.003
    # target 75% 0.00583333
    # target 90% 0.009
    # target 99% 0.0153601
    # target 99.9% 0.0158847
    Sockets used: 20 (for perfect keepalive, would be 3)
    Code 200 : 11 (36.7 %)
    Code 503 : 19 (63.3 %)
    Response Header Sizes : count 30 avg 84.366667 +/- 110.9 min 0 max 231 sum 2531
    Response Body/Total Sizes : count 30 avg 451.86667 +/- 277.1 min 241 max 817 sum 13556
    All done 30 calls (plus 0 warmup) 4.000 ms avg, 577.0 qps

    Now you start to see the expected circuit breaking behavior. Only 36.7% of the requests succeeded and the rest were trapped by circuit breaking:

    Code 200 : 11 (36.7 %)
    Code 503 : 19 (63.3 %)
  3. Query the istio-proxy stats to see more:

    $ kubectl exec "$FORTIO_POD" -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending
    cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.rq_pending_open: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.high.rq_pending_open: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 21
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 29

    You can see 21 for the upstream_rq_pending_overflow value, which means 21 calls so far have been flagged for circuit breaking.
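
    The connection-level limit leaves a similar trail. As a follow-up sketch (assuming Envoy's standard cluster stat names, which can differ slightly between versions), you can grep for the overflow counters: upstream_cx_overflow counts connections rejected by the maxConnections limit, and upstream_rq_pending_overflow counts requests rejected from the pending queue:

    $ kubectl exec "$FORTIO_POD" -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep overflow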

Cleaning up

  1. Remove the rules:

    $ kubectl delete destinationrule httpbin
  2. Shut down the httpbin service and client:

    $ kubectl delete deploy httpbin fortio-deploy
    $ kubectl delete svc httpbin fortio
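
    Optionally, confirm that both workloads are gone. This check is a sketch that assumes the samples were deployed to the default namespace with their standard app labels; it should return no resources once the deletions complete:

    $ kubectl get pods -l 'app in (httpbin, fortio)'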
