Getting started with Linkerd SMI extension

Service Mesh Interface (SMI) is a standard interface for service meshes on Kubernetes. It defines a set of resources that can be used across the service meshes that implement it. You can read more about it in the specification.

Currently, Linkerd supports SMI’s TrafficSplit specification, which can be used to perform traffic splitting across services natively. This means that you can apply SMI resources without any additional components or configuration. This approach has downsides, however: because SMI is closer to a lowest common denominator of service mesh functionality, Linkerd cannot layer on extra configuration specific to itself.

To get around these problems, Linkerd can instead use an adaptor that converts SMI specifications into native Linkerd configuration that it understands and can act on. This also removes the control plane’s direct coupling to SMI resources: the adaptor can evolve independently and have its own release cycle. Linkerd SMI is an extension that does just that.
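Concretely, the translation the adaptor performs can be pictured as a pair of resources. The sketch below is illustrative only (the names are hypothetical, and the default cluster domain cluster.local is assumed); this guide builds a real example later on.

```yaml
# A hypothetical SMI TrafficSplit...
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: my-split
  namespace: my-ns
spec:
  service: my-svc
  backends:
  - service: my-svc-v1
    weight: 900
  - service: my-svc-v2
    weight: 100
---
# ...becomes a native Linkerd ServiceProfile with equivalent
# dstOverrides, keyed by fully qualified authority
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: my-svc.my-ns.svc.cluster.local
  namespace: my-ns
spec:
  dstOverrides:
  - authority: my-svc-v1.my-ns.svc.cluster.local
    weight: 900
  - authority: my-svc-v2.my-ns.svc.cluster.local
    weight: 100
```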

This guide will walk you through installing the SMI extension and configuring a TrafficSplit resource to perform traffic splitting across services.

Prerequisites

  • To use this guide, you’ll need to have Linkerd installed on your cluster. Follow the Installing Linkerd Guide if you haven’t already done this.

Install the Linkerd-SMI extension

CLI

Install the SMI extension CLI binary by running:

  curl -sL https://linkerd.github.io/linkerd-smi/install | sh

Alternatively, you can download the CLI directly via the releases page.

The first step is installing the Linkerd-SMI extension onto your cluster. This extension consists of a SMI-Adaptor which converts SMI resources into native Linkerd resources.

To install the Linkerd-SMI extension, run the command:

  linkerd smi install | kubectl apply -f -

You can verify that the Linkerd-SMI extension was installed correctly by running:

  linkerd smi check

Helm

To install the linkerd-smi Helm chart, run:

  helm repo add l5d-smi https://linkerd.github.io/linkerd-smi
  helm install l5d-smi/linkerd-smi --generate-name

Install Sample Application

First, let’s install the sample application.

  # create a namespace for the sample application
  kubectl create namespace trafficsplit-sample
  # install the sample application
  linkerd inject https://raw.githubusercontent.com/linkerd/linkerd2/main/test/integration/trafficsplit/testdata/application.yaml | kubectl -n trafficsplit-sample apply -f -

This installs a simple client and two server deployments. One of the server deployments, failing-svc, always returns a 500 error, and the other, backend-svc, always returns a 200.

  kubectl get deployments -n trafficsplit-sample
  NAME          READY   UP-TO-DATE   AVAILABLE   AGE
  backend       1/1     1            1           2m29s
  failing       1/1     1            1           2m29s
  slow-cooker   1/1     1            1           2m29s

By default, the client will hit the backend-svc service. This is evident from the edges subcommand.

  linkerd viz edges deploy -n trafficsplit-sample
  SRC           DST           SRC_NS                DST_NS                SECURED
  prometheus    backend       linkerd-viz           trafficsplit-sample
  prometheus    failing       linkerd-viz           trafficsplit-sample
  prometheus    slow-cooker   linkerd-viz           trafficsplit-sample
  slow-cooker   backend       trafficsplit-sample   trafficsplit-sample

Configuring a TrafficSplit

Now, let’s apply a TrafficSplit resource to perform traffic splitting on backend-svc, distributing load between it and failing-svc.

  cat <<EOF | kubectl apply -f -
  apiVersion: split.smi-spec.io/v1alpha2
  kind: TrafficSplit
  metadata:
    name: backend-split
    namespace: trafficsplit-sample
  spec:
    service: backend-svc
    backends:
    - service: backend-svc
      weight: 500
    - service: failing-svc
      weight: 500
  EOF
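The backend weights are relative rather than percentages, so 500/500 yields an even 50/50 split. Shifting traffic, for example during a canary rollout, is just a matter of editing the weights and re-applying. A hypothetical 90/10 variant of the same resource (not part of this guide's steps) would look like:

```yaml
# Hypothetical weight shift: send only ~10% of traffic to failing-svc.
# Weights are relative, not percentages; 900/100 behaves the same as 9/1.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: backend-split
  namespace: trafficsplit-sample
spec:
  service: backend-svc
  backends:
  - service: backend-svc
    weight: 900
  - service: failing-svc
    weight: 100
```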

Because the smi-adaptor watches for TrafficSplit resources, it will automatically create a corresponding ServiceProfile resource that performs the same split. This can be verified by retrieving the ServiceProfile resource.

  kubectl describe serviceprofile -n trafficsplit-sample
  Name:         backend-svc.trafficsplit-sample.svc.cluster.local
  Namespace:    trafficsplit-sample
  Labels:       <none>
  Annotations:  <none>
  API Version:  linkerd.io/v1alpha2
  Kind:         ServiceProfile
  Metadata:
    Creation Timestamp:  2021-08-02T12:42:52Z
    Generation:          1
    Managed Fields:
      API Version:  linkerd.io/v1alpha2
      Fields Type:  FieldsV1
      fieldsV1:
        f:spec:
          .:
          f:dstOverrides:
      Manager:         smi-adaptor
      Operation:       Update
      Time:            2021-08-02T12:42:52Z
    Resource Version:  3542
    UID:               cbcdb74f-07e0-42f0-a7a8-9bbcf5e0e54e
  Spec:
    Dst Overrides:
      Authority:  backend-svc.trafficsplit-sample.svc.cluster.local
      Weight:     500
      Authority:  failing-svc.trafficsplit-sample.svc.cluster.local
      Weight:     500
  Events:  <none>

As we can see, a ServiceProfile with the relevant dstOverrides has been created to perform the traffic split.
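Stripped of its metadata, the generated profile is equivalent to the following hand-written ServiceProfile. This is a reconstruction from the describe output above, for readability only, not a separate resource to apply:

```yaml
# Equivalent form of the ServiceProfile the smi-adaptor generated
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: backend-svc.trafficsplit-sample.svc.cluster.local
  namespace: trafficsplit-sample
spec:
  dstOverrides:
  - authority: backend-svc.trafficsplit-sample.svc.cluster.local
    weight: 500
  - authority: failing-svc.trafficsplit-sample.svc.cluster.local
    weight: 500
```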

The traffic splitting can be verified by running the edges command again.

  linkerd viz edges deploy -n trafficsplit-sample
  SRC           DST           SRC_NS                DST_NS                SECURED
  prometheus    backend       linkerd-viz           trafficsplit-sample
  prometheus    failing       linkerd-viz           trafficsplit-sample
  prometheus    slow-cooker   linkerd-viz           trafficsplit-sample
  slow-cooker   backend       trafficsplit-sample   trafficsplit-sample
  slow-cooker   failing       trafficsplit-sample   trafficsplit-sample

This can also be verified by running the stat subcommand on the TrafficSplit resource.

  linkerd viz stat ts/backend-split -n trafficsplit-sample
  NAME            APEX          LEAF          WEIGHT   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99
  backend-split   backend-svc   backend-svc   500      100.00%   0.5rps   1ms           1ms           1ms
  backend-split   backend-svc   failing-svc   500      0.00%     0.5rps   1ms           1ms           1ms

This can also be verified by checking the smi-adaptor logs.

  kubectl -n linkerd-smi logs deploy/smi-adaptor smi-adaptor
  time="2021-08-04T11:04:35Z" level=info msg="Using cluster domain: cluster.local"
  time="2021-08-04T11:04:35Z" level=info msg="Starting SMI Controller"
  time="2021-08-04T11:04:35Z" level=info msg="Waiting for informer caches to sync"
  time="2021-08-04T11:04:35Z" level=info msg="starting admin server on :9995"
  time="2021-08-04T11:04:35Z" level=info msg="Starting workers"
  time="2021-08-04T11:04:35Z" level=info msg="Started workers"
  time="2021-08-04T11:05:17Z" level=info msg="created serviceprofile/backend-svc.trafficsplit-sample.svc.cluster.local for trafficsplit/backend-split"
  time="2021-08-04T11:05:17Z" level=info msg="Successfully synced 'trafficsplit-sample/backend-split'"

Cleanup

Delete the trafficsplit-sample namespace by running:

  kubectl delete namespace/trafficsplit-sample

Conclusion

Though Linkerd currently supports reading TrafficSplit resources directly, ServiceProfiles always take precedence over TrafficSplit resources. Support for the TrafficSplit resource will be removed in a future release, at which point the linkerd-smi extension will be necessary to use SMI resources with Linkerd.