Bring your own Prometheus instance

Even though Linkerd comes with its own Prometheus instance, there are cases where using an external instance makes more sense, for example if you already run a Prometheus server as part of your existing monitoring infrastructure.

Note

This approach requires you to manually add and maintain additional scrape configuration in your Prometheus setup. If you prefer to use the default Linkerd Prometheus add-on, you can export the metrics to your existing monitoring infrastructure by following the instructions at https://linkerd.io/2/tasks/exporting-metrics/

This tutorial shows how to configure an external Prometheus instance to scrape both the control plane's and the proxies' metrics, in a format that is consumable both by a user and by Linkerd control plane components such as web.

There are two pieces to configure here:

  • The external Prometheus instance, so that it scrapes the Linkerd metrics.
  • The Linkerd control plane components, so that they use that Prometheus instance.

Prometheus Scrape Configuration

The following scrape configuration has to be applied to the external Prometheus instance.

Note

The scrape configuration below is a subset of the default linkerd-prometheus scrape configuration.

Before applying it, replace the templated values (enclosed in {{}}) with concrete values; the configuration will not work otherwise.

    - job_name: 'linkerd-controller'
      scrape_interval: 10s
      scrape_timeout: 10s
      kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: ['{{.Values.global.namespace}}']
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_linkerd_io_control_plane_component
        - __meta_kubernetes_pod_container_port_name
        action: keep
        regex: (.*);admin-http$
      - source_labels: [__meta_kubernetes_pod_container_name]
        action: replace
        target_label: component

    - job_name: 'linkerd-service-mirror'
      scrape_interval: 10s
      scrape_timeout: 10s
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_linkerd_io_control_plane_component
        - __meta_kubernetes_pod_container_port_name
        action: keep
        regex: linkerd-service-mirror;admin-http$
      - source_labels: [__meta_kubernetes_pod_container_name]
        action: replace
        target_label: component

    - job_name: 'linkerd-proxy'
      scrape_interval: 10s
      scrape_timeout: 10s
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_container_name
        - __meta_kubernetes_pod_container_port_name
        - __meta_kubernetes_pod_label_linkerd_io_control_plane_ns
        action: keep
        regex: ^{{default .Values.global.proxyContainerName "linkerd-proxy" .Values.global.proxyContainerName}};linkerd-admin;{{.Values.global.namespace}}$
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      # special case k8s' "job" label, to not interfere with prometheus' "job"
      # label
      # __meta_kubernetes_pod_label_linkerd_io_proxy_job=foo =>
      # k8s_job=foo
      - source_labels: [__meta_kubernetes_pod_label_linkerd_io_proxy_job]
        action: replace
        target_label: k8s_job
      # drop __meta_kubernetes_pod_label_linkerd_io_proxy_job
      - action: labeldrop
        regex: __meta_kubernetes_pod_label_linkerd_io_proxy_job
      # __meta_kubernetes_pod_label_linkerd_io_proxy_deployment=foo =>
      # deployment=foo
      - action: labelmap
        regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
      # drop all labels that we just made copies of in the previous labelmap
      - action: labeldrop
        regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
      # __meta_kubernetes_pod_label_linkerd_io_foo=bar =>
      # foo=bar
      - action: labelmap
        regex: __meta_kubernetes_pod_label_linkerd_io_(.+)
      # Copy all pod labels to tmp labels
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
        replacement: __tmp_pod_label_$1
      # Take `linkerd_io_` prefixed labels and copy them without the prefix
      - action: labelmap
        regex: __tmp_pod_label_linkerd_io_(.+)
        replacement: __tmp_pod_label_$1
      # Drop the `linkerd_io_` originals
      - action: labeldrop
        regex: __tmp_pod_label_linkerd_io_(.+)
      # Copy tmp labels into real labels
      - action: labelmap
        regex: __tmp_pod_label_(.+)
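
For example, in a default installation, where the control plane runs in the linkerd namespace and the proxy container uses the default name linkerd-proxy, the templated values resolve as follows:

    # linkerd-controller job: the control plane namespace
    namespaces:
      names: ['linkerd']

    # linkerd-proxy job: container name, admin port name, control plane namespace
    regex: ^linkerd-proxy;linkerd-admin;linkerd$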

The running configuration of the built-in Prometheus can be used as a reference:

    kubectl -n linkerd get configmap linkerd-prometheus-config -o yaml
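
Once the external Prometheus has been reloaded with the new configuration, a quick sanity check (a sketch: the URL is a placeholder for your instance, and jq is assumed to be available) is to confirm that the Linkerd jobs show up as healthy targets:

    # List every Linkerd scrape target and its health via the Prometheus HTTP API
    curl -s http://existing-prometheus.xyz:9090/api/v1/targets \
      | jq '.data.activeTargets[]
            | select(.labels.job | startswith("linkerd"))
            | {job: .labels.job, health: .health}'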

Control Plane Components Configuration

Linkerd's control plane components such as public-api depend on a Prometheus instance to power the dashboard and CLI.

The global.prometheusUrl field gives you a single place through which all of these components can be pointed to an external Prometheus URL. It can be set both through the CLI and through Helm.

CLI

This can be done by passing a file containing the above field to the config flag, which is available in both the linkerd install and linkerd upgrade commands:

    global:
      prometheusUrl: existing-prometheus.xyz:9090
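
For example, assuming the snippet above is saved as prometheus.yml (an illustrative file name), an upgrade would look roughly like the sketch below. The flag name is an assumption: on releases with add-on support it is --addon-config, but check linkerd upgrade --help for your version.

    # Render the upgraded manifests with the external Prometheus URL and apply them
    # (flag name assumed; verify with `linkerd upgrade --help`)
    linkerd upgrade --addon-config prometheus.yml | kubectl apply -f -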

Once applied, this configuration persists across upgrades without the user having to pass it again; it can still be overridden as needed.

Note that even when an external Prometheus is in use and the global.prometheusUrl field is configured, Linkerd's own Prometheus will still be included in the installation.

If you wish to disable this included Prometheus, be sure to include the following configuration as well:

    prometheus:
      enabled: false
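
Putting both settings together, a single configuration file for a fully external setup looks like this (the URL is the same placeholder as above):

    global:
      prometheusUrl: existing-prometheus.xyz:9090

    prometheus:
      enabled: false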

Helm

The same configuration can be applied through values.yaml when using Helm. Once applied, Helm ensures that the configuration persists across upgrades.
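
For instance, a minimal sketch, assuming the chart was installed from the official linkerd Helm repo as a release named linkerd2 and that all other required values (such as certificates) are already set:

    # Re-apply the release with the values file containing global.prometheusUrl
    helm upgrade linkerd2 linkerd/linkerd2 -f values.yaml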

More information on installing Linkerd through Helm can be found at https://linkerd.io/2/tasks/install-helm/