Traffic Trace

This policy enables tracing and sends the collected traces to a third party tracing solution.

Tracing is supported over HTTP, HTTP2, and gRPC protocols. You must explicitly specify the protocol for each service and data plane proxy you want to enable tracing for.
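
For example, on Kubernetes the protocol is typically declared on the Kubernetes Service that exposes the workload. The sketch below is illustrative (the service name, namespace, and port are assumptions); depending on your Kuma version, the 80.service.kuma.io/protocol: http annotation can be used instead of appProtocol, and on Universal the equivalent is the kuma.io/protocol tag on the Dataplane inbound.

apiVersion: v1
kind: Service
metadata:
  name: backend          # illustrative service name
  namespace: kuma-demo   # illustrative namespace
spec:
  selector:
    app: backend
  ports:
    - port: 80
      appProtocol: http  # marks this port as HTTP so its traffic can be traced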

You must also:

  1. Add a tracing backend. You specify a tracing backend as a Mesh resource property.
  2. Add a TrafficTrace resource. You pass the backend to the TrafficTrace resource.

Kuma currently supports the following trace exposition formats:

  • zipkin
  • datadog

Services still need to be instrumented to preserve the trace chain across requests made to different services.

You can instrument your services with a tracing library in the language of your choice (for zipkin and for datadog). For HTTP, you can also manually forward the following headers:

  • x-request-id
  • x-b3-traceid
  • x-b3-parentspanid
  • x-b3-spanid
  • x-b3-sampled
  • x-b3-flags

Add a tracing backend to the mesh

Zipkin

This assumes you already have a zipkin compatible collector running. If you haven’t, read the observability docs.

apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  tracing:
    defaultBackend: jaeger-collector
    backends:
      - name: jaeger-collector
        type: zipkin
        sampling: 100.0
        conf:
          url: http://jaeger-collector.mesh-observability:9411/api/v2/spans # If not using `kuma install observability` replace by any zipkin compatible collector address.

Apply the configuration with kubectl apply -f [..].

type: Mesh
name: default
tracing:
  defaultBackend: jaeger-collector
  backends:
    - name: jaeger-collector
      type: zipkin
      sampling: 100.0
      conf:
        url: http://my-jaeger-collector:9411/api/v2/spans # Replace by any zipkin compatible collector address.

Apply the configuration with kumactl apply -f [..] or with the HTTP API.

Datadog

This assumes a Datadog agent is configured and running. If you haven't set it up yet, check the Datadog observability page.

apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  tracing:
    defaultBackend: datadog-collector
    backends:
      - name: datadog-collector
        type: datadog
        sampling: 100.0
        conf:
          address: trace-svc.datadog.svc.cluster.local
          port: 8126

where trace-svc is the name of the Kubernetes Service you specified when you configured the Datadog APM agent.

Apply the configuration with kubectl apply -f [..].

type: Mesh
name: default
tracing:
  defaultBackend: datadog-collector
  backends:
    - name: datadog-collector
      type: datadog
      sampling: 100.0
      conf:
        address: 127.0.0.1
        port: 8126

Apply the configuration with kumactl apply -f [..] or with the HTTP API.

The defaultBackend property specifies the tracing backend to use if it’s not explicitly specified in the TrafficTrace resource.

Add TrafficTrace resource

Next, create TrafficTrace resources that specify how to collect traces, and which backend to send them to.

apiVersion: kuma.io/v1alpha1
kind: TrafficTrace
mesh: default
metadata:
  name: trace-all-traffic
spec:
  selectors:
    - match:
        kuma.io/service: '*'
  conf:
    backend: jaeger-collector # or the name of any backend defined for the mesh

Apply the configuration with kubectl apply -f [..].

type: TrafficTrace
name: trace-all-traffic
mesh: default
selectors:
  - match:
      kuma.io/service: '*'
conf:
  backend: jaeger-collector # or the name of any backend defined for the mesh

Apply the configuration with kumactl apply -f [..] or with the HTTP API.

When the backend field is omitted, traces are forwarded to the defaultBackend of that Mesh.
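
For example, a TrafficTrace with no conf.backend (a minimal sketch; the policy name is illustrative) would send traces to jaeger-collector, because that is the defaultBackend of the Mesh configured above:

type: TrafficTrace
name: trace-with-default-backend
mesh: default
selectors:
  - match:
      kuma.io/service: '*'
# no conf.backend here, so traces go to the Mesh defaultBackend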

You can also add tags to apply the TrafficTrace resource to only a subset of data plane proxies. TrafficTrace is a Dataplane policy, so you can use any of the Dataplane tags in its selectors.
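
For example, assuming some data plane proxies carry a kuma.io/zone: us-east tag (the tag value and policy name here are illustrative), a sketch that traces only those proxies could look like:

type: TrafficTrace
name: trace-us-east-only
mesh: default
selectors:
  - match:
      kuma.io/service: '*'
      kuma.io/zone: us-east # illustrative tag; any Dataplane tag can be matched here
conf:
  backend: jaeger-collector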

Most commonly all traces are sent to the same tracing backend, but you can optionally define multiple tracing backends in a Mesh resource and use Kuma tags to store traces for different parts of your service traffic in different backends. This is especially useful when traces must never leave a particular region or cloud, for example.
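
A sketch of that setup, assuming two zipkin compatible collectors, one per region (all names, tags, and addresses are illustrative): the Mesh defines both backends, and a TrafficTrace per region selects data plane proxies by a region tag and points each group at its own backend.

type: Mesh
name: default
tracing:
  defaultBackend: collector-eu
  backends:
    - name: collector-eu
      type: zipkin
      sampling: 100.0
      conf:
        url: http://zipkin.eu.example.internal:9411/api/v2/spans
    - name: collector-us
      type: zipkin
      sampling: 100.0
      conf:
        url: http://zipkin.us.example.internal:9411/api/v2/spans

with a per-region TrafficTrace such as:

type: TrafficTrace
name: trace-us-traffic
mesh: default
selectors:
  - match:
      kuma.io/service: '*'
      kuma.io/zone: us # illustrative region tag
conf:
  backend: collector-us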