This guide demonstrates how to perform Canary rollouts using the SMI Traffic Split configuration.

Prerequisites

  • A Kubernetes cluster running v1.22.9 or greater.
  • Have OSM installed.
  • Have kubectl available to interact with the API server.
  • Have osm CLI available for managing the service mesh.

Demo

In this demo, we will deploy an HTTP application and perform a canary rollout where a new version of the application is deployed to serve a percentage of traffic directed to the service.

To split traffic across multiple service backends, the SMI Traffic Split API will be used. More about the usage of this API can be found in the traffic split guide. For traffic to be split transparently, client applications must direct traffic to the FQDN of the root service referenced in the TrafficSplit resource. In this demo, the curl client will direct traffic to the httpbin root service, initially backed by version v1 of the service, and then a canary rollout will be performed to direct a percentage of traffic to version v2 of the service.
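As a minimal illustration of this requirement, a client inside the mesh addresses the root service, not a versioned backend (port 14001 matches the httpbin service used later in this demo):

    # Requests must target the root service FQDN for the TrafficSplit to apply
    curl http://httpbin.httpbin.svc.cluster.local:14001/json
    # Addressing a backend directly, e.g. httpbin-v1.httpbin.svc.cluster.local,
    # bypasses the split and always reaches that version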

The following steps demonstrate the canary rollout deployment strategy.

Note: Permissive traffic policy mode is enabled to avoid the need to create explicit access control policies.

  1. Enable permissive mode

    osm_namespace=osm-system # Replace osm-system with the namespace where OSM is installed
    kubectl patch meshconfig osm-mesh-config -n "$osm_namespace" -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
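    Optionally, verify the setting took effect by reading the field back; the command below should print true:

    kubectl get meshconfig osm-mesh-config -n "$osm_namespace" -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'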
  2. Deploy the curl client into the curl namespace after enrolling its namespace in the mesh.

    # Create the curl namespace
    kubectl create namespace curl
    # Add the namespace to the mesh
    osm namespace add curl
    # Deploy curl client in the curl namespace
    kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.2/manifests/samples/curl/curl.yaml -n curl

    Confirm the curl client pod is up and running.

    $ kubectl get pods -n curl
    NAME                    READY   STATUS    RESTARTS   AGE
    curl-54ccc6954c-9rlvp   2/2     Running   0          20s
  3. Create the root httpbin service that clients will direct traffic to. The service has the selector app: httpbin.

    # Create the httpbin namespace
    kubectl create namespace httpbin
    # Add the namespace to the mesh
    osm namespace add httpbin
    # Create the httpbin root service and service account
    kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.2/manifests/samples/canary/httpbin.yaml -n httpbin
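    For reference, a minimal sketch of what that manifest defines, based on the selector and port used elsewhere in this demo (consult the manifest at the URL above for the authoritative definition): a root Service selecting every httpbin pod, plus a ServiceAccount.

    apiVersion: v1
    kind: Service
    metadata:
      name: httpbin
      namespace: httpbin
    spec:
      ports:
      - port: 14001          # the port targeted by the curl commands below
      selector:
        app: httpbin         # matches both v1 and v2 pods
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: httpbin
      namespace: httpbin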
  4. Deploy version v1 of the httpbin service. The service httpbin-v1 has the selector app: httpbin, version: v1, and the deployment httpbin-v1 has the labels app: httpbin, version: v1 matching the selector of both the httpbin root service and httpbin-v1 service.

    kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.2/manifests/samples/canary/httpbin-v1.yaml -n httpbin
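    Before proceeding, it can help to confirm the v1 pods are running and carry the expected labels (a routine check, not part of the sample manifests):

    kubectl get pods -n httpbin -l app=httpbin,version=v1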
  5. Create an SMI TrafficSplit resource that directs all traffic to the httpbin-v1 service.

    kubectl apply -f - <<EOF
    apiVersion: split.smi-spec.io/v1alpha2
    kind: TrafficSplit
    metadata:
      name: http-split
      namespace: httpbin
    spec:
      service: httpbin.httpbin.svc.cluster.local
      backends:
      - service: httpbin-v1
        weight: 100
    EOF
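    kubectl can confirm the resource was created:

    kubectl get trafficsplit http-split -n httpbin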
  6. Confirm all traffic directed to the root service FQDN httpbin.httpbin.svc.cluster.local is routed to the httpbin-v1 pod. This can be verified by inspecting the HTTP response headers and confirming that the request succeeds and the pod displayed corresponds to httpbin-v1.

    $ for i in {1..10}; do kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -sI http://httpbin.httpbin:14001/json | egrep 'HTTP|pod'; done
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt

    The above output indicates that all 10 requests returned HTTP 200 OK and were served by the httpbin-v1 pod.
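    At higher request counts, tallying responses per pod is easier to read than scanning individual headers; the loop above can be adapted as follows (an optional variation on the command above):

    for i in {1..10}; do kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -sI http://httpbin.httpbin:14001/json; done | grep pod | sort | uniq -c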

  7. Prepare the canary rollout by deploying version v2 of the httpbin service. The service httpbin-v2 has the selector app: httpbin, version: v2, and the deployment httpbin-v2 has the labels app: httpbin, version: v2 matching the selector of both the httpbin root service and httpbin-v2 service.

    kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.2/manifests/samples/canary/httpbin-v2.yaml -n httpbin
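    As with v1, confirm the v2 deployment is ready before shifting any traffic to it:

    kubectl rollout status deployment httpbin-v2 -n httpbin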
  8. Perform the canary rollout by updating the SMI TrafficSplit resource to split traffic directed to the root service FQDN httpbin.httpbin.svc.cluster.local to both the httpbin-v1 and httpbin-v2 services, fronting the v1 and v2 versions of the httpbin service respectively. We will distribute the weight equally to demonstrate traffic splitting.

    kubectl apply -f - <<EOF
    apiVersion: split.smi-spec.io/v1alpha2
    kind: TrafficSplit
    metadata:
      name: http-split
      namespace: httpbin
    spec:
      service: httpbin.httpbin.svc.cluster.local
      backends:
      - service: httpbin-v1
        weight: 50
      - service: httpbin-v2
        weight: 50
    EOF
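    The 50/50 split keeps the demo easy to observe. A production canary would typically start the new version at a small weight and raise it in stages; for example, a conservative first step could look like the following (illustrative weights only; this demo continues with the equal split applied above):

    kubectl apply -f - <<EOF
    apiVersion: split.smi-spec.io/v1alpha2
    kind: TrafficSplit
    metadata:
      name: http-split
      namespace: httpbin
    spec:
      service: httpbin.httpbin.svc.cluster.local
      backends:
      - service: httpbin-v1
        weight: 90
      - service: httpbin-v2
        weight: 10
    EOF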
  9. Confirm traffic is split proportionally to the weights assigned to the backend services. Since both v1 and v2 are weighted at 50, requests should be load balanced across both versions, as seen below.

    $ for i in {1..10}; do kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -sI http://httpbin.httpbin:14001/json | egrep 'HTTP|pod'; done
    HTTP/1.1 200 OK
    pod: httpbin-v2-6b48697db-cdqld
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v2-6b48697db-cdqld
    HTTP/1.1 200 OK
    pod: httpbin-v2-6b48697db-cdqld
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt
    HTTP/1.1 200 OK
    pod: httpbin-v2-6b48697db-cdqld
    HTTP/1.1 200 OK
    pod: httpbin-v2-6b48697db-cdqld
    HTTP/1.1 200 OK
    pod: httpbin-v1-77c99dccc9-q2gvt

    The above output indicates that all 10 requests returned HTTP 200 OK, and that the httpbin-v1 and httpbin-v2 pods each responded to 5 of the 10 requests, in line with the equal weights assigned to them in the TrafficSplit configuration.
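    Once the new version is verified, a canary rollout typically concludes by shifting all traffic to v2 and retiring v1. A sketch of that final step, mirroring the resources above (the cleanup commands are illustrative):

    # Shift all traffic to v2
    kubectl apply -f - <<EOF
    apiVersion: split.smi-spec.io/v1alpha2
    kind: TrafficSplit
    metadata:
      name: http-split
      namespace: httpbin
    spec:
      service: httpbin.httpbin.svc.cluster.local
      backends:
      - service: httpbin-v2
        weight: 100
    EOF
    # Once v2 is stable, the v1 workload can be removed
    kubectl delete deployment httpbin-v1 -n httpbin
    kubectl delete service httpbin-v1 -n httpbin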