TCP Traffic Shifting

This task shows you how to shift TCP traffic from one version of a microservice to another.

A common use case is to migrate TCP traffic gradually from an older version of a microservice to a new one. In Istio, you accomplish this goal by configuring a sequence of routing rules that redirect a percentage of TCP traffic from one destination to another.

In this task, you will send 100% of the TCP traffic to tcp-echo:v1. Then, you will route 20% of the TCP traffic to tcp-echo:v2 using Istio’s weighted routing feature.

Istio includes beta support for the Kubernetes Gateway API and intends to make it the default API for traffic management in the future. The following instructions allow you to use either the Gateway API or the Istio configuration API when configuring traffic management in the mesh. Where both are shown, follow the commands labeled Gateway API or Istio APIs, according to your preference.

Note that the Kubernetes Gateway API CRDs do not come installed by default on most Kubernetes clusters, so make sure they are installed before using the Gateway API:

  $ kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
    { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.8.0" | kubectl apply -f -; }

This document uses experimental features of the Kubernetes Gateway API which require the alpha version of the CRDs. Before proceeding with this task, make sure to:

  1. Install the alpha version of the Gateway API CRDs:

     $ kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v0.8.0" | kubectl apply -f -
  2. Configure Istio to read the alpha resources by setting the PILOT_ENABLE_ALPHA_GATEWAY_API environment variable to true when installing Istio:

     $ istioctl install --set values.pilot.env.PILOT_ENABLE_ALPHA_GATEWAY_API=true --set profile=minimal -y
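
If you want to verify that the flag took effect, one option is to read the environment variable back from the istiod deployment. This check is not part of the original task and assumes the default deployment name (istiod) and namespace (istio-system); it should print true:

  $ kubectl get deployment istiod -n istio-system \
      -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="PILOT_ENABLE_ALPHA_GATEWAY_API")].value}'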

Before you begin

Set up the test environment

  1. To get started, create a namespace for testing TCP traffic shifting.

     $ kubectl create namespace istio-io-tcp-traffic-shifting
  2. Deploy the sleep sample app to use as a test source for sending requests.

     $ kubectl apply -f @samples/sleep/sleep.yaml@ -n istio-io-tcp-traffic-shifting
  3. Deploy the v1 and v2 versions of the tcp-echo microservice (an optional readiness check follows this list).

     $ kubectl apply -f @samples/tcp-echo/tcp-echo-services.yaml@ -n istio-io-tcp-traffic-shifting
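
Before moving on, you can optionally confirm that the sample pods are ready. This check is not part of the original task; it assumes the samples label their pods app=sleep (a label also used later in this task) and app=tcp-echo:

  $ kubectl wait --for=condition=ready pod -l app=sleep -n istio-io-tcp-traffic-shifting --timeout=120s
  $ kubectl wait --for=condition=ready pod -l app=tcp-echo -n istio-io-tcp-traffic-shifting --timeout=120s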

Apply weight-based TCP routing

  1. Route all TCP traffic to the v1 version of the tcp-echo microservice.

     Istio APIs:

     $ kubectl apply -f @samples/tcp-echo/tcp-echo-all-v1.yaml@ -n istio-io-tcp-traffic-shifting

     Gateway API:

     $ kubectl apply -f @samples/tcp-echo/gateway-api/tcp-echo-all-v1.yaml@ -n istio-io-tcp-traffic-shifting
  2. Determine the ingress IP and port:

     Istio APIs:

     Follow the instructions in Determining the ingress IP and ports to set the TCP_INGRESS_PORT and INGRESS_HOST environment variables.

     Gateway API:

     Use the following commands to set the TCP_INGRESS_PORT and INGRESS_HOST environment variables:

     $ kubectl wait --for=condition=programmed gtw tcp-echo-gateway -n istio-io-tcp-traffic-shifting
     $ export INGRESS_HOST=$(kubectl get gtw tcp-echo-gateway -n istio-io-tcp-traffic-shifting -o jsonpath='{.status.addresses[0].value}')
     $ export TCP_INGRESS_PORT=$(kubectl get gtw tcp-echo-gateway -n istio-io-tcp-traffic-shifting -o jsonpath='{.spec.listeners[?(@.name=="tcp-31400")].port}')
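     # Optional check (not in the original task): confirm both variables resolved to non-empty values.
     $ echo "INGRESS_HOST=$INGRESS_HOST TCP_INGRESS_PORT=$TCP_INGRESS_PORT"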
  3. Confirm that the tcp-echo service is up and running by sending some TCP traffic.

     $ export SLEEP=$(kubectl get pod -l app=sleep -n istio-io-tcp-traffic-shifting -o jsonpath={.items..metadata.name})
     $ for i in {1..20}; do \
         kubectl exec "$SLEEP" -c sleep -n istio-io-tcp-traffic-shifting -- sh -c "(date; sleep 1) | nc $INGRESS_HOST $TCP_INGRESS_PORT"; \
       done
     one Mon Nov 12 23:24:57 UTC 2022
     one Mon Nov 12 23:25:00 UTC 2022
     one Mon Nov 12 23:25:02 UTC 2022
     one Mon Nov 12 23:25:05 UTC 2022
     one Mon Nov 12 23:25:07 UTC 2022
     one Mon Nov 12 23:25:10 UTC 2022
     one Mon Nov 12 23:25:12 UTC 2022
     one Mon Nov 12 23:25:15 UTC 2022
     one Mon Nov 12 23:25:17 UTC 2022
     one Mon Nov 12 23:25:19 UTC 2022
     ...

    You should notice that all the timestamps have a prefix of one, which means that all traffic was routed to the v1 version of the tcp-echo service.

  4. Transfer 20% of the traffic from tcp-echo:v1 to tcp-echo:v2 with the following command:

     Istio APIs:

     $ kubectl apply -f @samples/tcp-echo/tcp-echo-20-v2.yaml@ -n istio-io-tcp-traffic-shifting

     Gateway API:

     $ kubectl apply -f @samples/tcp-echo/gateway-api/tcp-echo-20-v2.yaml@ -n istio-io-tcp-traffic-shifting
  5. Wait a few seconds for the new rules to propagate and then confirm that the rule was replaced:

     Istio APIs:

     $ kubectl get virtualservice tcp-echo -o yaml -n istio-io-tcp-traffic-shifting
     apiVersion: networking.istio.io/v1beta1
     kind: VirtualService
     ...
     spec:
       ...
       tcp:
       - match:
         - port: 31400
         route:
         - destination:
             host: tcp-echo
             port:
               number: 9000
             subset: v1
           weight: 80
         - destination:
             host: tcp-echo
             port:
               number: 9000
             subset: v2
           weight: 20
     Gateway API:

     $ kubectl get tcproute tcp-echo -o yaml -n istio-io-tcp-traffic-shifting
     apiVersion: gateway.networking.k8s.io/v1alpha2
     kind: TCPRoute
     ...
     spec:
       parentRefs:
       - group: gateway.networking.k8s.io
         kind: Gateway
         name: tcp-echo-gateway
         sectionName: tcp-31400
       rules:
       - backendRefs:
         - group: ""
           kind: Service
           name: tcp-echo-v1
           port: 9000
           weight: 80
         - group: ""
           kind: Service
           name: tcp-echo-v2
           port: 9000
           weight: 20
     ...
  6. Send some more TCP traffic to the tcp-echo microservice.

     $ export SLEEP=$(kubectl get pod -l app=sleep -n istio-io-tcp-traffic-shifting -o jsonpath={.items..metadata.name})
     $ for i in {1..20}; do \
         kubectl exec "$SLEEP" -c sleep -n istio-io-tcp-traffic-shifting -- sh -c "(date; sleep 1) | nc $INGRESS_HOST $TCP_INGRESS_PORT"; \
       done
     one Mon Nov 12 23:38:45 UTC 2022
     two Mon Nov 12 23:38:47 UTC 2022
     one Mon Nov 12 23:38:50 UTC 2022
     one Mon Nov 12 23:38:52 UTC 2022
     one Mon Nov 12 23:38:55 UTC 2022
     two Mon Nov 12 23:38:57 UTC 2022
     one Mon Nov 12 23:39:00 UTC 2022
     one Mon Nov 12 23:39:02 UTC 2022
     one Mon Nov 12 23:39:05 UTC 2022
     one Mon Nov 12 23:39:07 UTC 2022
     ...

    You should now notice that about 20% of the timestamps have a prefix of two, which means that 80% of the TCP traffic was routed to the v1 version of the tcp-echo service, while 20% was routed to v2.
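
    If you prefer a quick tally over scanning the prefixes by eye, a small variation of the loop above counts how many responses came from each version. This is an optional addition, not part of the original task; with the 80/20 split you should see roughly 16 responses prefixed one and 4 prefixed two:

     $ for i in {1..20}; do \
         kubectl exec "$SLEEP" -c sleep -n istio-io-tcp-traffic-shifting -- sh -c "(date; sleep 1) | nc $INGRESS_HOST $TCP_INGRESS_PORT"; \
       done | awk '{print $1}' | sort | uniq -c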

Understanding what happened

In this task you partially migrated TCP traffic from an old to a new version of the tcp-echo service using Istio’s weighted routing feature. Note that this is very different from version migration using the deployment features of container orchestration platforms, which use instance scaling to manage the traffic.

With Istio, you can allow the two versions of the tcp-echo service to scale up and down independently, without affecting the traffic distribution between them.
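
For example, assuming the sample creates deployments named tcp-echo-v1 and tcp-echo-v2 (matching the service names shown in the route above), you could scale one version without changing the 80/20 split, because the split is enforced by the routing rule rather than by replica counts. This is an illustrative sketch, not a step in this task:

  $ kubectl scale deployment tcp-echo-v2 -n istio-io-tcp-traffic-shifting --replicas=2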

For more information about version routing with autoscaling, check out the blog article Canary Deployments using Istio.

Cleanup

  1. Remove the routing rules:

     Istio APIs:

     $ kubectl delete -f @samples/tcp-echo/tcp-echo-all-v1.yaml@ -n istio-io-tcp-traffic-shifting

     Gateway API:

     $ kubectl delete -f @samples/tcp-echo/gateway-api/tcp-echo-all-v1.yaml@ -n istio-io-tcp-traffic-shifting
  2. Remove the sleep sample, tcp-echo application, and test namespace:

     $ kubectl delete -f @samples/sleep/sleep.yaml@ -n istio-io-tcp-traffic-shifting
     $ kubectl delete -f @samples/tcp-echo/tcp-echo-services.yaml@ -n istio-io-tcp-traffic-shifting
     $ kubectl delete namespace istio-io-tcp-traffic-shifting
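
Optionally, you can confirm that cleanup completed; namespace deletion can take a little while, after which the following command should report that the namespace is not found (this check is an addition, not part of the original cleanup steps):

  $ kubectl get namespace istio-io-tcp-traffic-shifting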