Architecture

At a high level, Linkerd consists of a control plane and a data plane.

The control plane is a set of services that run in a dedicated namespace. These services accomplish various things—aggregating telemetry data, providing a user-facing API, providing control data to the data plane proxies, etc. Together, they drive the behavior of the data plane.

The data plane consists of transparent proxies that are run next to each service instance. These proxies automatically handle all traffic to and from the service. Because they’re transparent, these proxies act as highly instrumented out-of-process network stacks, sending telemetry to, and receiving control signals from, the control plane.

[Image: Architecture diagram]

Control Plane

The Linkerd control plane is a set of services that run in a dedicated Kubernetes namespace (linkerd by default). These services accomplish various things—aggregating telemetry data, providing a user-facing API, providing control data to the data plane proxies, etc. Together, they drive the behavior of the data plane. To install the control plane on your own cluster, follow the instructions.
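Installing the control plane is, in practice, a render-and-apply step. A minimal sketch, assuming the linkerd CLI is installed locally and kubectl points at your cluster:

```shell
# Render the control plane manifests and apply them to the cluster
linkerd install | kubectl apply -f -

# Verify that the control plane components came up healthy
linkerd check
```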

The control plane is made up of:

Controller

The controller deployment consists of the public-api container that provides an API for the CLI and dashboard to interface with.

Destination

Each proxy in the data plane uses this component to look up where to send requests. The destination deployment is also used to fetch service profile information used for per-route metrics, retries, and timeouts.
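Service profiles are ordinary Kubernetes resources. A hypothetical profile for a `webapp` service, sketched here only to show the shape of the data the destination component serves (the service and route names are illustrative; `linkerd profile` can generate a skeleton):

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # Profiles are named after the service's fully-qualified DNS name
  name: webapp.default.svc.cluster.local
  namespace: default
spec:
  routes:
  - name: GET /books            # label used in per-route metrics
    condition:
      method: GET
      pathRegex: /books
    isRetryable: true           # allows the proxy to retry this route
```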

Identity

This component provides a Certificate Authority that accepts CSRs from proxies and returns certificates signed with the correct identity. These certificates are fetched by the proxy on start and must be issued before the proxy becomes ready. They are subsequently used for any connection between Linkerd proxies to implement mTLS.

Proxy Injector

The injector is an admission controller, which receives a webhook request every time a pod is created. This injector inspects resources for a Linkerd-specific annotation (linkerd.io/inject: enabled). When that annotation exists, the injector mutates the pod’s specification and adds both an initContainer as well as a sidecar containing the proxy itself.
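For illustration, a minimal pod spec carrying the annotation the injector looks for (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp                      # hypothetical pod name
  annotations:
    linkerd.io/inject: enabled      # triggers proxy injection via the webhook
spec:
  containers:
  - name: app
    image: example.com/app:v1       # placeholder image
```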

Service Profile Validator

The validator is also an admission controller, which validates new service profiles before they are saved.

Tap

The tap deployment receives requests from the CLI and dashboard to watch requests and responses in real time. It establishes streams to watch these requests and responses in the specific proxies associated with the requested applications.
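From the operator’s side, tap is usually reached through the CLI. A sketch, assuming a meshed deployment named `webapp` exists (the deployment names are illustrative):

```shell
# Stream live requests and responses passing through webapp's proxies
linkerd tap deploy/webapp

# Restrict the stream to traffic headed for another deployment
linkerd tap deploy/webapp --to deploy/books
```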

Web

The web deployment provides the Linkerd dashboard. This does not require running linkerd dashboard and can be exposed to others.

Heartbeat

This CronJob runs once a day and records some analytics that help with the development of Linkerd. It is optional and can be disabled.

Grafana

Linkerd comes with many dashboards out of the box. The Grafana component is used to render and display these dashboards. You can reach these dashboards via links in the Linkerd dashboard itself. It is possible to see high-level metrics and dig down into the details for your workloads as well as Linkerd itself.

The dashboards that are provided out of the box include:

  • Top Line Metrics

  • Deployment Detail

  • Pod Detail

  • Linkerd Health

Prometheus

Prometheus is a cloud native monitoring solution that is used to collect and store all of the Linkerd metrics. It is installed as part of the control plane and provides the data used by the CLI, dashboard, and Grafana.

The proxy exposes a /metrics endpoint for Prometheus to scrape on port 4191. This is scraped every 10 seconds. These metrics are then available to all the other Linkerd components, such as the CLI and dashboard.
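To see the raw data, you can scrape a proxy yourself. A sketch, assuming a meshed pod named `webapp-0` (the pod name is illustrative):

```shell
# Forward the proxy's metrics port and fetch its Prometheus-format metrics
kubectl port-forward pod/webapp-0 4191:4191 &
curl -s http://localhost:4191/metrics | head
```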

[Image: Metrics Collection diagram]

Data Plane

The Linkerd data plane consists of lightweight proxies, which are deployed as sidecar containers alongside each instance of your service code. In order to “add” a service to the Linkerd service mesh, the pods for that service must be redeployed to include a data plane proxy in each pod. The proxy injector accomplishes this by watching for a specific annotation that can either be added with linkerd inject or by hand to the pod’s spec. You can add your service to the data plane with a single CLI command.
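Injection is typically done by piping existing manifests through the CLI. A sketch, assuming a deployment named `webapp` in the current namespace:

```shell
# Fetch the deployment, add the proxy sidecar spec, and re-apply it
kubectl get deploy/webapp -o yaml | linkerd inject - | kubectl apply -f -
```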

These proxies transparently intercept communication to and from each pod by utilizing iptables rules that are automatically configured by linkerd-init, and add features such as instrumentation and encryption (TLS), as well as allowing and denying requests according to the relevant policy.

These proxies are not designed to be configured by hand. Rather, their behavioris driven by the control plane.

Proxy

An ultralight transparent proxy written in Rust, the proxy is installed into each pod of a service and becomes part of the data plane. It receives all incoming traffic for a pod and intercepts all outgoing traffic via an initContainer that configures iptables to forward the traffic correctly. Because it is a sidecar and intercepts all the incoming and outgoing traffic for a service, there are no code changes required and it can even be added to a running service.

The proxy’s features include:

  • Transparent, zero-config proxying for HTTP, HTTP/2, and arbitrary TCP protocols.

  • Automatic Prometheus metrics export for HTTP and TCP traffic.

  • Transparent, zero-config WebSocket proxying.

  • Automatic, latency-aware, layer-7 load balancing.

  • Automatic layer-4 load balancing for non-HTTP traffic.

  • Automatic TLS.

  • An on-demand diagnostic tap API.

The proxy supports service discovery via DNS and the destination gRPC API.

Linkerd Init

To make the proxy truly transparent, traffic needs to be automatically routed through it. The linkerd-init container is added as a Kubernetes init container that runs before any other containers are started. It executes a small program that runs iptables to configure the flow of traffic.

There are two main rules that iptables uses:

  • Any traffic being sent to the pod’s external IP address (10.0.0.1 for example) is forwarded to a specific port on the proxy (4143). By setting SO_ORIGINAL_DST on the socket, the proxy is able to forward the traffic to the original destination port that your application is listening on.

  • Any traffic originating from within the pod and being sent to an external IP address (not 127.0.0.1) is forwarded to a specific port on the proxy (4140). Because SO_ORIGINAL_DST was set on the socket, the proxy is able to forward the traffic to the original recipient (unless there is a reason to send it elsewhere). This does not result in a traffic loop because the iptables rules explicitly skip the proxy’s UID.
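A simplified sketch of the kind of rules linkerd-init installs (the chain layout and the proxy’s UID are implementation details of the init program; this is shown only to illustrate the two rules above, not the exact commands it runs):

```shell
# Inbound: redirect traffic arriving at the pod to the proxy's inbound port
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 4143

# Outbound: let packets from the proxy's own UID pass untouched, avoiding a loop
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner 2102 -j RETURN
# ...then redirect everything else leaving the pod to the proxy's outbound port
iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-port 4140
```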

Note: By default, most ports are forwarded through the proxy. This is not always desirable and it is possible to have specific ports skip the proxy entirely for both incoming and outgoing traffic. See the protocol detection documentation for an explanation of what’s happening here.

CLI

The Linkerd CLI is run locally on your machine and is used to interact with the control and data planes. It can be used to view statistics, debug production issues in real time, and install/upgrade the control and data planes.
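A few representative invocations (the namespace is illustrative):

```shell
linkerd check                          # validate the installation end to end
linkerd stat deployments -n default    # golden metrics per deployment
linkerd upgrade | kubectl apply -f -   # render and apply an upgraded control plane
```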

Dashboard

The Linkerd dashboard provides a high-level view of what is happening with your services in real time. It can be used to view the “golden” metrics (success rate, requests/second, and latency), visualize service dependencies, and understand the health of specific service routes. One way to pull it up is by running linkerd dashboard from the command line.

[Image: Top Line Metrics dashboard]

Note: The dashboard is served by linkerd-web and does not require running linkerd dashboard. It can be exposed to others.