Using Ingress

If you’re planning on injecting Linkerd into your ingress controller’s pods, there is some configuration required. Linkerd discovers services based on the :authority or Host header. This allows Linkerd to understand what service a request is destined for without being dependent on DNS or IPs.

When it comes to ingress, most controllers do not rewrite the incoming header (example.com) to the internal service name (example.default.svc.cluster.local) by default. In this case, when Linkerd receives the outgoing request it thinks the request is destined for example.com and not example.default.svc.cluster.local. This creates an infinite loop that can be pretty frustrating!

Luckily, many ingress controllers allow you to either modify the Host header or add a custom header to the outgoing request. Instructions for some common ingress controllers are below.

If your ingress controller is terminating HTTPS, Linkerd will only provide TCP stats for the incoming requests, because all the proxy sees is encrypted traffic. It will provide complete stats for the outgoing requests from your controller to the backend services, since that traffic is plain text from the controller to Linkerd.
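Once your controller is injected, you can see what Linkerd reports for it and for the backend services with the Linkerd CLI. A sketch, assuming the controller lives in a namespace called ingress-nginx (a placeholder; substitute your own namespace):

# golden metrics for the injected controller itself
linkerd stat deploy -n ingress-nginx

# full HTTP stats for the backend services receiving plain-text traffic from the controller
linkerd stat deploy -n emojivoto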

Note

If requests experience a 2-3 second delay after injecting your ingress controller, it is likely because the service of type: LoadBalancer is obscuring the client source IP. You can fix this by setting externalTrafficPolicy: Local in the ingress controller’s service definition.
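For example, assuming your controller’s Service is named ingress-nginx-controller in the ingress-nginx namespace (both placeholders; substitute your own names), you could patch it in place:

kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'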

Nginx

This uses emojivoto as an example; take a look at getting started for a refresher on how to install it.

The sample ingress definition is:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: web-svc
          servicePort: 80

The important annotation here is:

nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
  grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;

Note

If you are using auth-url, you will need to add the following snippet as well.

nginx.ingress.kubernetes.io/auth-snippet: |
  proxy_set_header l5d-dst-override authn-name.authn-namespace.svc.cluster.local:authn-port;
  grpc_set_header l5d-dst-override authn-name.authn-namespace.svc.cluster.local:authn-port;

This example combines the two directives that NGINX uses for proxying HTTP and gRPC traffic. In practice, it is only necessary to set either the proxy_set_header or grpc_set_header directive, depending on the protocol used by the service; NGINX will ignore any directives that it doesn’t need.

This sample ingress definition uses a single ingress for an application with multiple endpoints using different ports.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-svc
          servicePort: 80
      - path: /another-endpoint
        backend:
          serviceName: another-svc
          servicePort: 8080

Nginx will add an l5d-dst-override header to instruct Linkerd what service the request is destined for. You’ll want to include both the Kubernetes service FQDN (web-svc.emojivoto.svc.cluster.local) and the destination servicePort.

To test this, you’ll want to get the external IP address for your controller. If you installed nginx-ingress via Helm, you can get that IP address by running:

kubectl get svc --all-namespaces \
  -l app=nginx-ingress,component=controller \
  -o=custom-columns=EXTERNAL-IP:.status.loadBalancer.ingress[0].ip

You can then use this IP with curl:

curl -H "Host: example.com" http://external-ip

Note

It is not possible to rewrite the header in this way for the default backend. Because of this, if you inject Linkerd into your Nginx ingress controller’s pod, the default backend will not be usable.

Traefik

This uses emojivoto as an example; take a look at getting started for a refresher on how to install it.

The sample ingress definition is:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    kubernetes.io/ingress.class: "traefik"
    ingress.kubernetes.io/custom-request-headers: l5d-dst-override:web-svc.emojivoto.svc.cluster.local:80
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: web-svc
          servicePort: 80

The important annotation here is:

ingress.kubernetes.io/custom-request-headers: l5d-dst-override:web-svc.emojivoto.svc.cluster.local:80

Traefik will add an l5d-dst-override header to instruct Linkerd what service the request is destined for. You’ll want to include both the Kubernetes service FQDN (web-svc.emojivoto.svc.cluster.local) and the destination servicePort. Please see the Traefik website for more information.

To test this, you’ll want to get the external IP address for your controller. If you installed Traefik via Helm, you can get that IP address by running:

kubectl get svc --all-namespaces \
  -l app=traefik \
  -o='custom-columns=EXTERNAL-IP:.status.loadBalancer.ingress[0].ip'

You can then use this IP with curl:

curl -H "Host: example.com" http://external-ip

Note

This solution won’t work if you’re using Traefik’s service weights, as Linkerd will always send requests to the service name in l5d-dst-override. A workaround is to use traefik.frontend.passHostHeader: "false" instead. Be aware that if you’re using TLS, the connection between Traefik and the backend service will not be encrypted. There is an open issue to track the solution to this problem.

GCE

This example is similar to the Traefik one and also uses emojivoto as an example. Take a look at getting started for a refresher on how to install it.

In addition to the custom request header used in the Traefik example, it shows how to use a Google Cloud static external IP address and TLS with a Google-managed certificate.

The sample ingress definition is:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    kubernetes.io/ingress.class: "gce"
    ingress.kubernetes.io/custom-request-headers: "l5d-dst-override: web-svc.emojivoto.svc.cluster.local:80"
    ingress.gcp.kubernetes.io/pre-shared-cert: "managed-cert-name"
    kubernetes.io/ingress.global-static-ip-name: "static-ip-name"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: web-svc
          servicePort: 80

To use this example definition, substitute managed-cert-name and static-ip-name with the short names defined in your project (n.b. use the name for the IP address, not the address itself).
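If you haven’t created those resources yet, they can be provisioned with gcloud along these lines (a sketch; example.com matches the host in the ingress above, and older gcloud releases may require the beta component for managed certificates):

# reserve a global static IP under the short name referenced by the annotation
gcloud compute addresses create static-ip-name --global

# request a Google-managed certificate for the ingress host
gcloud compute ssl-certificates create managed-cert-name --domains example.com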

The managed certificate will take about 30-60 minutes to provision, but the status of the ingress should be healthy within a few minutes. Once the managed certificate is provisioned, the ingress should be visible to the Internet.
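You can keep an eye on both with something like the following (using the names from the example above):

# ingress status and events
kubectl describe ingress web-ingress -n emojivoto

# managed certificate provisioning status
gcloud compute ssl-certificates describe managed-cert-name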

Ambassador

This uses emojivoto as an example; take a look at getting started for a refresher on how to install it.

Ambassador does not use Ingress resources, relying instead on annotated Services. The sample service definition is:

apiVersion: v1
kind: Service
metadata:
  name: web-ambassador
  namespace: emojivoto
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: web-ambassador-mapping
      service: http://web-svc.emojivoto.svc.cluster.local:80
      host: example.com
      prefix: /
      add_linkerd_headers: true
spec:
  selector:
    app: web-svc
  ports:
  - name: http
    port: 80
    targetPort: http

The important annotation here is:

add_linkerd_headers: true

Ambassador will add an l5d-dst-override header to instruct Linkerd what service the request is destined for. This will contain both the Kubernetes service FQDN (web-svc.emojivoto.svc.cluster.local) and the destination servicePort.

Note

To make this global, add add_linkerd_headers to your Module configuration.
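A sketch of what that might look like, using the same annotation style as the Mapping above (the ambassador Service name is an assumption based on a default install):

# applied via a getambassador.io/config annotation on the ambassador Service
---
apiVersion: ambassador/v1
kind: Module
name: ambassador
config:
  add_linkerd_headers: true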

To test this, you’ll want to get the external IP address for your controller. If you installed Ambassador via Helm, you can get that IP address by running:

kubectl get svc --all-namespaces \
  -l "app.kubernetes.io/name=ambassador" \
  -o='custom-columns=EXTERNAL-IP:.status.loadBalancer.ingress[0].ip'

Note

If you’ve installed the admin interface, this will return two IPs, one of which will be <none>. Just ignore that one and use the actual IP address.

You can then use this IP with curl:

curl -H "Host: example.com" http://external-ip

Gloo

This uses books as an example; take a look at Demo: Books for instructions on how to run it.

If you installed Gloo using the Gateway method (glooctl install gateway), then you’ll need a VirtualService to be able to route traffic to your Books application.

To use Gloo with Linkerd, you can choose one of two options.

Automatic

As of Gloo v0.13.20, Gloo has native integration with Linkerd, so that the required Linkerd headers are added automatically.

Assuming you installed Gloo to the default location, you can enable the native integration by running:

kubectl patch settings -n gloo-system default \
  -p '{"spec":{"linkerd":true}}' --type=merge

Gloo will now automatically add the l5d-dst-override header to every Kubernetes upstream.
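You can confirm the setting took effect by reading it back from the Settings resource (a sketch):

# should print: true
kubectl get settings -n gloo-system default -o jsonpath='{.spec.linkerd}'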

Now simply add a route to the books app upstream:

glooctl add route --path-prefix=/ --dest-name booksapp-webapp-7000

Manual

As explained at the beginning of this document, you’ll need to instruct Gloo to add a header that will allow Linkerd to identify where to send traffic.

apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: books
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - '*'
    name: gloo-system.books
    routes:
    - matcher:
        prefix: /
      routeAction:
        single:
          upstream:
            name: booksapp-webapp-7000
            namespace: gloo-system
      routePlugins:
        transformations:
          requestTransformation:
            transformationTemplate:
              headers:
                l5d-dst-override:
                  text: webapp.booksapp.svc.cluster.local:7000
              passthrough: {}

The important section here is:

routePlugins:
  transformations:
    requestTransformation:
      transformationTemplate:
        headers:
          l5d-dst-override:
            text: webapp.booksapp.svc.cluster.local:7000
        passthrough: {}

Using the content transformation engine built into Gloo, you can instruct it to add the needed l5d-dst-override header, which in the example above points to the service’s FQDN and port: webapp.booksapp.svc.cluster.local:7000.

Test

To easily test this, you can get the URL of the Gloo proxy by running:

glooctl proxy url

Which will return something similar to:

$ glooctl proxy url
http://192.168.99.132:30969

For the example VirtualService above, which listens to any domain and path, accessing the proxy URL (http://192.168.99.132:30969) in your browser should open the Books application.

Contour

Contour doesn’t support setting the l5d-dst-override header automatically. The following example uses the Contour getting started documentation to demonstrate how to set the required header manually:

First, inject Linkerd into your Contour installation:

linkerd inject https://projectcontour.io/quickstart/contour.yaml | kubectl apply -f -

Envoy will not auto-mount the service account token. To fix this, you need to set automountServiceAccountToken: true. Optionally, you can create a dedicated service account to avoid using the default.

# create a service account (optional)
kubectl apply -f - << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: envoy
  namespace: projectcontour
EOF
# add service account to envoy (optional)
kubectl patch daemonset envoy -n projectcontour --type json -p='[{"op": "add", "path": "/spec/template/spec/serviceAccount", "value": "envoy"}]'
# auto mount the service account token (required)
kubectl patch daemonset envoy -n projectcontour --type json -p='[{"op": "replace", "path": "/spec/template/spec/automountServiceAccountToken", "value": true}]'

Verify that your Contour and Envoy pods have a running Linkerd sidecar.
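One way to check is to list the containers in the pods and look for linkerd-proxy (a sketch; output formatting will vary):

# expect to see linkerd-proxy alongside the contour and envoy containers
kubectl get pods -n projectcontour -o jsonpath='{.items[*].spec.containers[*].name}'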

Next we’ll deploy a demo service:

linkerd inject https://projectcontour.io/examples/kuard.yaml | kubectl apply -f -

To route external traffic to your service, you’ll need to provide an HTTPProxy:

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: kuard
  namespace: default
spec:
  routes:
  - requestHeadersPolicy:
      set:
      - name: l5d-dst-override
        value: kuard.default.svc.cluster.local:80
    services:
    - name: kuard
      namespace: default
      port: 80
  virtualhost:
    fqdn: 127.0.0.1.xip.io

Notice the l5d-dst-override header is explicitly set to the target service.

Finally, you can test your working service mesh:

kubectl port-forward svc/envoy -n projectcontour 3200:80

Then open http://127.0.0.1.xip.io:3200 in your browser.

Note

If you are using Contour with Flagger, the l5d-dst-override headers will be set automatically.