Ingress traffic

As of Linkerd version 2.9, there are two ways in which the Linkerd proxy can be run with your Ingress Controller.

Default Mode

When the ingress controller is injected with the linkerd.io/inject: enabled annotation, the Linkerd proxy will honor load balancing decisions made by the ingress controller instead of applying its own EWMA load balancing. This also means that the Linkerd proxy will not use Service Profiles for this traffic and therefore will not expose per-route metrics or do traffic splitting.

If your Ingress controller is injected with no extra configuration specific to ingress, the Linkerd proxy runs in the default mode.
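For instance, a minimal sketch of a controller Deployment injected in default mode might look like this (the names, labels, and image are placeholders, not tied to any particular controller):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller        # hypothetical controller name
  namespace: ingress
spec:
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
      annotations:
        linkerd.io/inject: enabled  # default mode: no ingress-specific configuration
    spec:
      containers:
      - name: controller
        image: example/ingress-controller:latest  # placeholder image
```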

Proxy Ingress Mode

If you want Linkerd functionality like Service Profiles, Traffic Splits, etc., additional configuration is required to make the ingress controller’s Linkerd proxy run in ingress mode. In this mode, Linkerd routes requests based on their :authority, Host, or l5d-dst-override headers instead of their original destination, which allows Linkerd to perform its own load balancing and use Service Profiles to expose per-route metrics and enable traffic splitting.

The ingress controller deployment’s proxy can be made to run in ingress mode by adding the linkerd.io/inject: ingress annotation to the ingress controller’s Pod spec.

The same can be done by using the --ingress flag with the inject command:

```bash
kubectl get deployment <ingress-controller> -n <ingress-namespace> -o yaml | linkerd inject --ingress - | kubectl apply -f -
```

This can be verified by checking that the ingress controller’s pod has the relevant annotation set:

```bash
kubectl describe pod/<ingress-pod> | grep "linkerd.io/inject: ingress"
```

When it comes to ingress, most controllers do not rewrite the incoming header (example.com) to the internal service name (example.default.svc.cluster.local) by default. In this case, when Linkerd receives the outgoing request it thinks the request is destined for example.com and not example.default.svc.cluster.local. This creates an infinite loop that can be pretty frustrating!

Luckily, many ingress controllers allow you to either modify the Host header or add a custom header to the outgoing request. The sections below give instructions for several common ingress controllers.

If your ingress controller is terminating HTTPS, Linkerd will only provide TCP stats for the incoming requests because all the proxy sees is encrypted traffic. It will provide complete stats for the outgoing requests from your controller to the backend services as this is in plain text from the controller to Linkerd.

Note

If requests experience a 2-3 second delay after injecting your ingress controller, it is likely because the service of type: LoadBalancer is obscuring the client source IP. You can fix this by setting externalTrafficPolicy: Local in the ingress’ service definition.
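A minimal sketch of that fix, assuming a generic LoadBalancer service in front of the controller (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller  # hypothetical service name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # preserve the client source IP
  selector:
    app: ingress-controller
  ports:
  - port: 80
    targetPort: 8080  # placeholder container port
```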

Note

While the Kubernetes Ingress API definition allows a backend’s servicePort to be a string value, only numeric servicePort values can be used with Linkerd. If a string value is encountered, Linkerd will default to using port 80.
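For example, in a backend stanza like the ones used throughout this page, prefer the numeric form:

```yaml
backend:
  service:
    name: web-svc
    port:
      number: 80  # numeric servicePort: used by Linkerd as-is
      # a string value such as `name: http` would cause Linkerd
      # to fall back to port 80
```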

Nginx

This uses emojivoto as an example; take a look at getting started for a refresher on how to install it.

The sample ingress definition is:

```yaml
# apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```

The important annotation here is:

```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
  grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
```

Note

If you are using auth-url, you’ll need to add the following snippet as well:

```yaml
nginx.ingress.kubernetes.io/auth-snippet: |
  proxy_set_header l5d-dst-override authn-name.authn-namespace.svc.cluster.local:authn-port;
  grpc_set_header l5d-dst-override authn-name.authn-namespace.svc.cluster.local:authn-port;
```

This example combines the two directives that NGINX uses for proxying HTTP and gRPC traffic. In practice, it is only necessary to set either the proxy_set_header or grpc_set_header directive, depending on the protocol used by the service; however, NGINX will ignore any directives that it doesn’t need.

This sample ingress definition uses a single ingress for an application with multiple endpoints using different ports:

```yaml
# apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
      - path: /another-endpoint
        pathType: Prefix
        backend:
          service:
            name: another-svc
            port:
              number: 8080
```

Nginx will add an l5d-dst-override header to instruct Linkerd what service the request is destined for. You’ll want to include both the Kubernetes service FQDN (web-svc.emojivoto.svc.cluster.local) and the destination servicePort.

To test this, you’ll want to get the external IP address for your controller. If you installed nginx-ingress via helm, you can get that IP address by running:

```bash
kubectl get svc --all-namespaces \
  -l app=nginx-ingress,component=controller \
  -o=custom-columns=EXTERNAL-IP:.status.loadBalancer.ingress[0].ip
```

You can then use this IP with curl:

```bash
curl -H "Host: example.com" http://external-ip
```

Note

If you are using a default backend, you will need to create an ingress definition for that backend to ensure that the l5d-dst-override header is set. For example:

```yaml
# apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-ingress
  namespace: backends
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: default-backend
      port:
        number: 80
```

Traefik

This uses emojivoto as an example; take a look at getting started for a refresher on how to install it.

The simplest way to use Traefik as an ingress for Linkerd is to configure a Kubernetes Ingress resource with the ingress.kubernetes.io/custom-request-headers annotation, like this:

```yaml
# apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    ingress.kubernetes.io/custom-request-headers: l5d-dst-override:web-svc.emojivoto.svc.cluster.local:80
spec:
  ingressClassName: traefik
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```

The important annotation here is:

```yaml
ingress.kubernetes.io/custom-request-headers: l5d-dst-override:web-svc.emojivoto.svc.cluster.local:80
```

Traefik will add an l5d-dst-override header to instruct Linkerd what service the request is destined for. You’ll want to include both the Kubernetes service FQDN (web-svc.emojivoto.svc.cluster.local) and the destination servicePort. Please see the Traefik website for more information.

To test this, you’ll want to get the external IP address for your controller. If you installed Traefik via helm, you can get that IP address by running:

```bash
kubectl get svc --all-namespaces \
  -l app=traefik \
  -o='custom-columns=EXTERNAL-IP:.status.loadBalancer.ingress[0].ip'
```

You can then use this IP with curl:

```bash
curl -H "Host: example.com" http://external-ip
```

Note

This solution won’t work if you’re using Traefik’s service weights, as Linkerd will always send requests to the service name in l5d-dst-override. A workaround is to use traefik.frontend.passHostHeader: "false" instead. Be aware that if you’re using TLS, the connection between Traefik and the backend service will not be encrypted. There is an open issue to track the solution to this problem.
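As a sketch, that workaround would be applied as an annotation on the Ingress metadata from the earlier example (Traefik 1.x annotation syntax; the rest of the resource is unchanged):

```yaml
metadata:
  annotations:
    traefik.frontend.passHostHeader: "false"
```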

Traefik 2.x

Traefik 2.x adds support for path based request routing with a Custom Resource Definition (CRD) called IngressRoute.

If you choose to use IngressRoute instead of the default Kubernetes Ingress resource, then you’ll also need to use Traefik’s Middleware Custom Resource Definition to add the l5d-dst-override header.

The YAML below uses the Traefik CRDs to produce the same results for the emojivoto application, as described above.

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: l5d-header-middleware
  namespace: traefik
spec:
  headers:
    customRequestHeaders:
      l5d-dst-override: "web-svc.emojivoto.svc.cluster.local:80"
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  creationTimestamp: null
  name: emojivoto-web-ingress-route
  namespace: emojivoto
spec:
  entryPoints: []
  routes:
  - kind: Rule
    match: PathPrefix(`/`)
    priority: 0
    middlewares:
    - name: l5d-header-middleware
    services:
    - kind: Service
      name: web-svc
      port: 80
```

GCE

This example is similar to the Traefik one and also uses emojivoto. Take a look at getting started for a refresher on how to install it.

In addition to the custom headers found in the Traefik example, it shows how to use a Google Cloud Static External IP Address and TLS with a Google-managed certificate.

The sample ingress definition is:

```yaml
# apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    ingress.kubernetes.io/custom-request-headers: "l5d-dst-override: web-svc.emojivoto.svc.cluster.local:80"
    ingress.gcp.kubernetes.io/pre-shared-cert: "managed-cert-name"
    kubernetes.io/ingress.global-static-ip-name: "static-ip-name"
spec:
  ingressClassName: gce
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```

To use this example definition, substitute managed-cert-name and static-ip-name with the short names defined in your project (n.b. use the name for the IP address, not the address itself).

The managed certificate will take about 30-60 minutes to provision, but the status of the ingress should be healthy within a few minutes. Once the managed certificate is provisioned, the ingress should be visible to the Internet.
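If you haven’t created those resources yet, one possible way to do so with the gcloud CLI is shown below (the names match the placeholders above; the exact flags may vary with your gcloud version):

```bash
# reserve a global static IP; the annotation takes this name, not the address
gcloud compute addresses create static-ip-name --global

# create a Google-managed certificate for the ingress host
gcloud compute ssl-certificates create managed-cert-name \
  --domains=example.com --global
```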

Ambassador

This uses emojivoto as an example; take a look at getting started for a refresher on how to install it.

Ambassador does not use Ingress resources, relying instead on Services. The sample service definition is:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-ambassador
  namespace: emojivoto
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: web-ambassador-mapping
      service: http://web-svc.emojivoto.svc.cluster.local:80
      host: example.com
      prefix: /
      add_linkerd_headers: true
spec:
  selector:
    app: web-svc
  ports:
  - name: http
    port: 80
    targetPort: http
```

The important annotation here is:

```yaml
add_linkerd_headers: true
```

Ambassador will add an l5d-dst-override header to instruct Linkerd what service the request is destined for. This will contain both the Kubernetes service FQDN (web-svc.emojivoto.svc.cluster.local) and the destination servicePort.

Note

To make this global, add add_linkerd_headers to your Module configuration.
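A sketch of what that could look like as a standalone Module resource (the apiVersion and namespace here are assumptions based on a typical Ambassador installation):

```yaml
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
  namespace: ambassador  # assumed install namespace
spec:
  config:
    add_linkerd_headers: true  # adds l5d-dst-override to every Mapping
```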

To test this, you’ll want to get the external IP address for your controller. If you installed Ambassador via helm, you can get that IP address by running:

```bash
kubectl get svc --all-namespaces \
  -l "app.kubernetes.io/name=ambassador" \
  -o='custom-columns=EXTERNAL-IP:.status.loadBalancer.ingress[0].ip'
```

Note

If you’ve installed the admin interface, this will return two IPs, one of which will be <none>. Just ignore that one and use the actual IP address.

You can then use this IP with curl:

```bash
curl -H "Host: example.com" http://external-ip
```

Note

You can also find a more detailed guide for using Linkerd with Emissary Ingress, AKA Ambassador, from the folks over at Buoyant here.

Gloo

This uses books as an example; take a look at Demo: Books for instructions on how to run it.

If you installed Gloo using the Gateway method (gloo install gateway), then you’ll need a VirtualService to be able to route traffic to your Books application.

To use Gloo with Linkerd, you can choose one of two options.

Automatic

As of Gloo v0.13.20, Gloo has native integration with Linkerd, so that the required Linkerd headers are added automatically.

Assuming you installed Gloo to the default location, you can enable the native integration by running:

```bash
kubectl patch settings -n gloo-system default \
  -p '{"spec":{"linkerd":true}}' --type=merge
```

Gloo will now automatically add the l5d-dst-override header to every Kubernetes upstream.

Now simply add a route to the books app upstream:

```bash
glooctl add route --path-prefix=/ --dest-name booksapp-webapp-7000
```

Manual

As explained at the beginning of this document, you’ll need to instruct Gloo to add a header that will allow Linkerd to identify where to send traffic.

```yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: books
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - '*'
    name: gloo-system.books
    routes:
    - matcher:
        prefix: /
      routeAction:
        single:
          upstream:
            name: booksapp-webapp-7000
            namespace: gloo-system
      routePlugins:
        transformations:
          requestTransformation:
            transformationTemplate:
              headers:
                l5d-dst-override:
                  text: webapp.booksapp.svc.cluster.local:7000
              passthrough: {}
```

The important configuration here is:

```yaml
routePlugins:
  transformations:
    requestTransformation:
      transformationTemplate:
        headers:
          l5d-dst-override:
            text: webapp.booksapp.svc.cluster.local:7000
        passthrough: {}
```

Using the content transformation engine built into Gloo, you can instruct it to add the needed l5d-dst-override header, which in the example above points to the service’s FQDN and port: webapp.booksapp.svc.cluster.local:7000.

Test

To easily test this, you can get the URL of the Gloo proxy by running:

```bash
glooctl proxy url
```

This will return something similar to:

```bash
$ glooctl proxy url
http://192.168.99.132:30969
```

For the example VirtualService above, which listens to any domain and path, accessing the proxy URL (http://192.168.99.132:30969) in your browser should open the Books application.

Contour

Contour doesn’t support setting the l5d-dst-override header automatically. The following example uses the Contour getting started documentation to demonstrate how to set the required header manually.

The Envoy DaemonSet doesn’t auto-mount the service account token, which is required for the Linkerd proxy to do mTLS between pods. So first we need to install Contour uninjected, patch the DaemonSet with automountServiceAccountToken: true, and then inject it. Optionally, you can create a dedicated service account to avoid using the default one.

```bash
# install Contour
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
# create a service account (optional)
kubectl apply -f - << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: envoy
  namespace: projectcontour
EOF
# add service account to envoy (optional)
kubectl patch daemonset envoy -n projectcontour --type json -p='[{"op": "add", "path": "/spec/template/spec/serviceAccount", "value": "envoy"}]'
# auto mount the service account token (required)
kubectl patch daemonset envoy -n projectcontour --type json -p='[{"op": "replace", "path": "/spec/template/spec/automountServiceAccountToken", "value": true}]'
# inject linkerd first into the DaemonSet
kubectl -n projectcontour get daemonset -oyaml | linkerd inject - | kubectl apply -f -
# inject linkerd into the Deployment
kubectl -n projectcontour get deployment -oyaml | linkerd inject - | kubectl apply -f -
```

Verify that your Contour and Envoy installation has a running Linkerd sidecar.
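One way to check, assuming the quickstart’s resource names, is to list each pod’s containers and look for linkerd-proxy:

```bash
kubectl -n projectcontour get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```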

Next we’ll deploy a demo service:

```bash
linkerd inject https://projectcontour.io/examples/kuard.yaml | kubectl apply -f -
```

To route external traffic to your service, you’ll need to provide an HTTPProxy:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: kuard
  namespace: default
spec:
  routes:
  - requestHeadersPolicy:
      set:
      - name: l5d-dst-override
        value: kuard.default.svc.cluster.local:80
    services:
    - name: kuard
      port: 80
  virtualhost:
    fqdn: 127.0.0.1.nip.io
```

Notice the l5d-dst-override header is explicitly set to the target service.

Finally, you can test your working service mesh:

```bash
kubectl port-forward svc/envoy -n projectcontour 3200:80
# then browse to http://127.0.0.1.nip.io:3200
```

Note

If you are injecting the Envoy DaemonSet using proxy ingress mode then make sure to annotate the pod spec with config.linkerd.io/skip-outbound-ports: 8001. The Envoy pod will try to connect to the Contour pod at port 8001 through TLS, which is not supported under this ingress mode, so you need to have the proxy skip that outbound port.
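A sketch of one way to add that annotation with a JSON patch (this assumes the pod template already carries an annotations map; `~1` escapes the `/` in the annotation key):

```bash
kubectl patch daemonset envoy -n projectcontour --type json \
  -p='[{"op": "add", "path": "/spec/template/metadata/annotations/config.linkerd.io~1skip-outbound-ports", "value": "8001"}]'
```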

Note

If you are using Contour with Flagger, the l5d-dst-override headers will be set automatically.

Kong

Kong doesn’t support the l5d-dst-override header automatically. This example uses the Emojivoto demo application.

Before installing Emojivoto, install Linkerd and Kong on your cluster. Remember when injecting the Kong deployment to use the --ingress flag (or annotation) as mentioned above!

We need to declare these objects as well:

- KongPlugin, a CRD provided by Kong
- Ingress

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: set-l5d-header
  namespace: emojivoto
plugin: request-transformer
config:
  add:
    headers:
    - l5d-dst-override:$(headers.host).svc.cluster.local
---
# apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    konghq.com/plugins: set-l5d-header
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /api/vote
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              name: http
      - path: /api/list
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              name: http
```

We are explicitly setting the l5d-dst-override header in the KongPlugin. Using templates as values, we can take the host header from the request and set the l5d-dst-override value based on it.

Finally, let’s install Emojivoto so that its deploy/vote-bot targets the ingress and includes a host header value for the web-svc.emojivoto service.

Before applying the injected Emojivoto application, make the following changes to the vote-bot Deployment:

```yaml
env:
# Target the Kong ingress instead of the Emojivoto web service
- name: WEB_HOST
  value: kong-proxy.kong:80
# Override the host header on requests so that it can be used to set the l5d-dst-override header
- name: HOST_OVERRIDE
  value: web-svc.emojivoto
```
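Once everything is applied, one way to check the path end to end, reusing the kong-proxy.kong service name from the WEB_HOST value above, is to port-forward the Kong proxy and send a request with the host header the plugin template expects:

```bash
# forward the Kong proxy locally
kubectl port-forward -n kong svc/kong-proxy 8080:80

# in another terminal, exercise one of the Ingress paths
curl -H "Host: web-svc.emojivoto" http://localhost:8080/api/list
```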