Bringing your own Prometheus

Although the linkerd-viz extension comes with its own Prometheus instance, there are cases where using an external instance makes more sense.

Note

Note that this approach requires you to manually add and maintain additional scrape configuration in your Prometheus configuration. If you prefer to use the default Linkerd Prometheus, you can export the metrics to your existing monitoring infrastructure following these instructions.

This tutorial shows how to configure an external Prometheus instance to scrape both the control plane's and the proxies' metrics, in a format that is consumable both by users and by Linkerd control plane components such as web.

There are two important points to tackle here.

  • Configuring the external Prometheus instance to scrape the Linkerd metrics.
  • Configuring the linkerd-viz extension to use that Prometheus.

Prometheus Scrape Configuration

The following scrape configuration has to be applied to the external Prometheus instance.

Note

The scrape configuration below is a subset of the default linkerd-prometheus scrape configuration.

Before applying it, replace the templated values (enclosed in {{}}) with concrete values for the configuration below to work.

```yaml
- job_name: 'linkerd-controller'
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names:
      - '{{.Values.linkerdNamespace}}'
      - '{{.Values.namespace}}'
  relabel_configs:
  - source_labels:
    - __meta_kubernetes_pod_container_port_name
    action: keep
    regex: admin-http
  - source_labels: [__meta_kubernetes_pod_container_name]
    action: replace
    target_label: component

- job_name: 'linkerd-service-mirror'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels:
    - __meta_kubernetes_pod_label_linkerd_io_control_plane_component
    - __meta_kubernetes_pod_container_port_name
    action: keep
    regex: linkerd-service-mirror;admin-http$
  - source_labels: [__meta_kubernetes_pod_container_name]
    action: replace
    target_label: component

- job_name: 'linkerd-proxy'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels:
    - __meta_kubernetes_pod_container_name
    - __meta_kubernetes_pod_container_port_name
    - __meta_kubernetes_pod_label_linkerd_io_control_plane_ns
    action: keep
    regex: ^{{default .Values.proxyContainerName "linkerd-proxy" .Values.proxyContainerName}};linkerd-admin;{{.Values.linkerdNamespace}}$
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod
  # special case k8s' "job" label, to not interfere with prometheus' "job"
  # label
  # __meta_kubernetes_pod_label_linkerd_io_proxy_job=foo =>
  # k8s_job=foo
  - source_labels: [__meta_kubernetes_pod_label_linkerd_io_proxy_job]
    action: replace
    target_label: k8s_job
  # drop __meta_kubernetes_pod_label_linkerd_io_proxy_job
  - action: labeldrop
    regex: __meta_kubernetes_pod_label_linkerd_io_proxy_job
  # __meta_kubernetes_pod_label_linkerd_io_proxy_deployment=foo =>
  # deployment=foo
  - action: labelmap
    regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
  # drop all labels that we just made copies of in the previous labelmap
  - action: labeldrop
    regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
  # __meta_kubernetes_pod_label_linkerd_io_foo=bar =>
  # foo=bar
  - action: labelmap
    regex: __meta_kubernetes_pod_label_linkerd_io_(.+)
  # Copy all pod labels to tmp labels
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
    replacement: __tmp_pod_label_$1
  # Take `linkerd_io_` prefixed labels and copy them without the prefix
  - action: labelmap
    regex: __tmp_pod_label_linkerd_io_(.+)
    replacement: __tmp_pod_label_$1
  # Drop the `linkerd_io_` originals
  - action: labeldrop
    regex: __tmp_pod_label_linkerd_io_(.+)
  # Copy tmp labels into real labels
  - action: labelmap
    regex: __tmp_pod_label_(.+)
```
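As an example of the substitution step, the templated values can be filled in with sed. The file name `scrape-config.yaml` and the namespaces (`linkerd` and `linkerd-viz` are Linkerd's defaults) are assumptions; adjust them to your installation. The snippet below uses a shortened copy of the first job for illustration:

```shell
# Save the scrape jobs to a file, then substitute the templated values.
# The namespaces used here are Linkerd's defaults and are assumptions;
# replace them with the ones used by your installation.
cat > scrape-config.yaml <<'EOF'
- job_name: 'linkerd-controller'
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names:
      - '{{.Values.linkerdNamespace}}'
      - '{{.Values.namespace}}'
EOF

sed -i \
  -e 's/{{\.Values\.linkerdNamespace}}/linkerd/g' \
  -e 's/{{\.Values\.namespace}}/linkerd-viz/g' \
  scrape-config.yaml

# Show the result: all placeholders are now concrete namespace names.
cat scrape-config.yaml
```

The same substitutions apply to the full configuration above, including the proxyContainerName placeholder in the linkerd-proxy job.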

The running configuration of the built-in Prometheus can be used as a reference:

```shell
kubectl -n linkerd-viz get configmap prometheus-config -o yaml
```
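After merging these jobs into the external instance's configuration, Prometheus has to pick up the change. A minimal sketch, assuming the instance was started with the `--web.enable-lifecycle` flag and is reachable at `existing-prometheus.xyz:9090` (both are assumptions about your setup):

```shell
# Ask Prometheus to reload its configuration without a restart
# (requires the --web.enable-lifecycle server flag); alternatively,
# send SIGHUP to the Prometheus process or restart its pod.
curl -X POST http://existing-prometheus.xyz:9090/-/reload

# Verify the new jobs appear among the active scrape targets.
curl -s http://existing-prometheus.xyz:9090/api/v1/targets | grep linkerd-proxy
```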

Linkerd-Viz Extension Configuration

Linkerd's viz extension components, such as metrics-api, depend on a Prometheus instance to power the dashboard and CLI.

The prometheusUrl field gives you a single place through which all these components can be pointed at an external Prometheus URL. This can be configured both through the CLI and through Helm.

CLI

This can be done by passing a file containing the above field to the values flag of the linkerd viz install command.

```yaml
prometheusUrl: existing-prometheus.xyz:9090
```
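For example, assuming the field is saved in a file named `values.yaml` (the file name is illustrative):

```shell
# Render the viz extension manifests with the external Prometheus URL
# and apply them to the cluster. The same values file has to be passed
# again on every re-install or upgrade.
linkerd viz install --values values.yaml | kubectl apply -f -
```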

Note that this configuration is not persistent across installs: it has to be passed again during every re-install, upgrade, etc.

When using an external Prometheus and configuring the prometheusUrl field, Linkerd's Prometheus will still be included in the installation. If you wish to disable it, be sure to also include the following configuration:

```yaml
prometheus:
  enabled: false
```

Helm

The same configuration can be applied through values.yaml when using Helm. Once applied, Helm ensures that the configuration persists across upgrades.

More information on installation through Helm can be found here.
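As a sketch, assuming the values above are saved in `values.yaml` and the chart comes from Linkerd's stable Helm repository:

```shell
# Add the Linkerd chart repository (skip if already configured).
helm repo add linkerd https://helm.linkerd.io/stable
helm repo update

# Install (or upgrade) the viz extension with the external Prometheus
# settings; Helm keeps these values across subsequent upgrades.
helm upgrade --install linkerd-viz linkerd/linkerd-viz \
  --namespace linkerd-viz --create-namespace \
  -f values.yaml
```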