Automated Canary Releases

Linkerd's traffic split feature allows you to dynamically shift traffic between services. This can be used to implement lower-risk deployment strategies like blue-green deploys and canaries.

But simply shifting traffic from one version of a service to the next is just the beginning. We can combine traffic splitting with Linkerd's automatic golden metrics telemetry and drive traffic decisions based on the observed metrics. For example, we can gradually shift traffic from an old deployment to a new one while continually monitoring its success rate. If at any point the success rate drops, we can shift traffic back to the original deployment and back out of the release. Ideally, our users remain happy throughout, not noticing a thing!

In this tutorial, we'll walk you through how to combine Linkerd with Flagger, a progressive delivery tool that ties Linkerd's metrics and traffic splitting together in a control loop, allowing for fully-automated, metrics-aware canary deployments.

Prerequisites

  • To use this guide, you'll need to have Linkerd installed on your cluster. Follow the Installing Linkerd Guide if you haven't already done this.
  • The installation of Flagger depends on kubectl 1.14 or newer (the -k flag used below was added in that release); a quick way to verify both prerequisites is shown after this list.
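If you'd like to verify both prerequisites before continuing, the following commands (a quick sanity check rather than part of the guide itself) will show your kubectl version and confirm that Linkerd is healthy:

  kubectl version --short
  linkerd check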

Install Flagger

While Linkerd will be managing the actual traffic routing, Flagger automates the process of creating new Kubernetes resources, watching metrics and incrementally sending users over to the new version. To add Flagger to your cluster and have it configured to work with Linkerd, run:

  kubectl apply -k github.com/weaveworks/flagger/kustomize/linkerd

This command adds:

  • The canary CRD that enables configuring how a rollout should occur.
  • RBAC which grants Flagger permissions to modify all the resources that it needs to, such as deployments and services.
  • A controller configured to interact with the Linkerd control plane.

To watch until everything is up and running, you can use kubectl:

  kubectl -n linkerd rollout status deploy/flagger
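If you want to confirm the individual pieces listed above, one way (assuming the CRD name canaries.flagger.app that Flagger registers) is to check for the CRD and the controller deployment directly:

  kubectl get crd canaries.flagger.app
  kubectl -n linkerd get deploy flagger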

Set up the demo

This demo consists of two components: a load generator and a deployment. The deployment creates a pod that returns some information such as name. You can use the responses to watch the incremental rollout as Flagger orchestrates it. A load generator simply makes it easier to execute the rollout as there needs to be some kind of active traffic to complete the operation. Together, these components have a topology that looks like:

[Figure: Topology]

To add these components to your cluster and include them in the Linkerd data plane, run:

  kubectl create ns test && \
    kubectl apply -f https://run.linkerd.io/flagger.yml

Verify that everything has started up successfully by running:

  kubectl -n test rollout status deploy podinfo
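The manifest also creates the load generator. Since the rest of the guide relies on it (it is the deploy/load referenced in the stat commands later), you can check it the same way:

  kubectl -n test rollout status deploy load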

Check it out by forwarding the service locally and opening http://localhost:9898 by running:

  kubectl -n test port-forward svc/podinfo 9898

Note: Traffic shifting occurs on the client side of the connection and not the server side. Any requests coming from outside the mesh will not be shifted and will always be directed to the primary backend. A service of type LoadBalancer will exhibit this behavior as the source is not part of the mesh. To shift external traffic, add your ingress controller to the mesh.
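One way to do that, assuming your ingress controller runs as an ordinary deployment, is to pass its existing manifest through linkerd inject; the namespace and deployment name below are placeholders for your own ingress:

  kubectl -n ingress-nginx get deploy ingress-nginx-controller -o yaml | \
    linkerd inject - | \
    kubectl apply -f -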

Configure the release

Before changing anything, you need to configure how a release should be rolled out on the cluster. The configuration is contained in a Canary definition. To apply it to your cluster, run:

  cat <<EOF | kubectl apply -f -
  apiVersion: flagger.app/v1alpha3
  kind: Canary
  metadata:
    name: podinfo
    namespace: test
  spec:
    targetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: podinfo
    service:
      port: 9898
    canaryAnalysis:
      interval: 10s
      threshold: 5
      stepWeight: 10
      metrics:
      - name: request-success-rate
        threshold: 99
        interval: 1m
  EOF

The Flagger controller is watching these definitions and will create some new resources on your cluster. To watch as this happens, run:

  kubectl -n test get ev --watch

A new deployment named podinfo-primary will be created with the same number of replicas that podinfo has. Once the new pods are ready, the original deployment is scaled down to zero. This provides a deployment that is managed by Flagger as an implementation detail and maintains your original configuration files and workflows. Once you see the following line, everything is set up:

  0s    Normal    Synced    canary/podinfo    Initialization done! podinfo.test
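At that point, you can confirm the hand-off by listing the deployments and checking that podinfo has been scaled down to zero while podinfo-primary is running (exact replica counts depend on the demo manifest):

  kubectl -n test get deploy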

In addition to a managed deployment, there are also services created to orchestrate routing traffic between the new and old versions of your application. These can be viewed with kubectl -n test get svc and should look like:

  NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
  frontend          ClusterIP   10.7.251.33   <none>        8080/TCP   96m
  podinfo           ClusterIP   10.7.252.86   <none>        9898/TCP   96m
  podinfo-canary    ClusterIP   10.7.245.17   <none>        9898/TCP   23m
  podinfo-primary   ClusterIP   10.7.249.63   <none>        9898/TCP   23m

At this point, the topology looks a little like:

[Figure: Initialized]

Note: This guide only scratches the surface of the functionality provided by Flagger. Make sure to read the documentation if you're interested in combining canary releases with HPA, working off custom metrics or doing other types of releases such as A/B testing.

Start the rollout

Kubernetes resources have two major sections: the spec and the status. When a controller sees a spec, it tries as hard as it can to make the status of the current system match the spec. With a deployment, if any of the pod spec configuration is changed, a controller will kick off a rollout. By default, the deployment controller will orchestrate a rolling update.

In this example, Flagger will notice that a deployment's spec changed and start orchestrating the canary rollout. To kick this process off, you can update the image to a new version by running:

  kubectl -n test set image deployment/podinfo \
    podinfod=quay.io/stefanprodan/podinfo:1.7.1

Any kind of modification to the pod's spec, such as updating an environment variable or annotation, would result in the same behavior as updating the image.
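For example, setting a throwaway environment variable (the variable name here is arbitrary) will also cause Flagger to start a new canary analysis:

  kubectl -n test set env deployment/podinfo DEMO_TRIGGER=1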

On update, the canary deployment (podinfo) will be scaled up. Once ready, Flagger will begin to update the TrafficSplit CRD incrementally. With a configured stepWeight of 10, each increment will increase the weight of podinfo by 10. For each period, the success rate will be observed and as long as it is over the threshold of 99%, Flagger will continue the rollout. To watch this entire process, run:

  kubectl -n test get ev --watch

While an update is occurring, the resources and traffic will look like this at a high level:

[Figure: Ongoing]

After the update is complete, this picture will go back to looking just like thefigure from the previous section.

Note: You can toggle the image tag between 1.7.1 and 1.7.0 to start the rollout again.
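For example, to roll back to the original tag and kick off another rollout:

  kubectl -n test set image deployment/podinfo \
    podinfod=quay.io/stefanprodan/podinfo:1.7.0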

Resource

The canary resource updates with the current status and progress. You can watch by running:

  watch kubectl -n test get canary

Behind the scenes, Flagger is splitting traffic between the primary and canary backends by updating the traffic split resource. To watch how this configuration changes over the rollout, run:

  kubectl -n test get trafficsplit podinfo -o yaml

Each increment will increase the weight of podinfo-canary and decrease the weight of podinfo-primary. Once the rollout is successful, the weight of podinfo-primary will be set back to 100 and the underlying canary deployment (podinfo) will be scaled down.
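As a rough illustration, a snapshot of the traffic split partway through a rollout might look something like this (the apiVersion shown is the SMI group Linkerd uses; the exact version and weights on your cluster will vary):

  apiVersion: split.smi-spec.io/v1alpha1
  kind: TrafficSplit
  metadata:
    name: podinfo
    namespace: test
  spec:
    service: podinfo
    backends:
    - service: podinfo-primary
      weight: 70
    - service: podinfo-canary
      weight: 30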

Metrics

As traffic shifts from the primary deployment to the canary one, Linkerd provides visibility into what is happening to the destination of requests. The metrics show the backends receiving traffic in real time and measure the success rate, latencies and throughput. From the CLI, you can watch this by running:

  watch linkerd -n test stat deploy --from deploy/load

For something a little more visual, you can use the dashboard. Start it by running linkerd dashboard and then look at the detail page for the podinfo traffic split.

[Figure: Dashboard]

Browser

To see the landing page served by podinfo, run:

  kubectl -n test port-forward svc/frontend 8080

This will make the podinfo landing page available at http://localhost:8080. Refreshing the page will toggle between the old and new versions, which use different header colors. Alternatively, running curl http://localhost:8080 will return a JSON response that looks something like:

  {
    "hostname": "podinfo-primary-74459c7db8-lbtxf",
    "version": "1.7.0",
    "revision": "4fc593f42c7cd2e7319c83f6bfd3743c05523883",
    "color": "blue",
    "message": "greetings from podinfo v1.7.0",
    "goos": "linux",
    "goarch": "amd64",
    "runtime": "go1.11.2",
    "num_goroutine": "6",
    "num_cpu": "8"
  }

This response will slowly change as the rollout continues.
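If you prefer watching the shift from the command line, a simple loop like this one (just a convenience, not part of the demo itself) samples the responding backend once a second:

  while true; do
    curl -s http://localhost:8080 | grep -E '"(hostname|version)"'
    sleep 1
  done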

Cleanup

To clean up, remove the Flagger controller from your cluster and delete the test namespace by running:

  kubectl delete -k github.com/weaveworks/flagger/kustomize/linkerd && \
    kubectl delete ns test