Bringing your own Prometheus
Even though the linkerd-viz extension comes with its own Prometheus instance, there are cases where using an external instance makes more sense.
This tutorial shows how to configure an external Prometheus instance to scrape both the control plane and the proxies' metrics in a format that is consumable both by users and by Linkerd control plane components such as web.
There are two important points to tackle here.
- Configuring the external Prometheus instance to scrape the Linkerd metrics.
- Configuring the linkerd-viz extension to use that Prometheus.
Prometheus Scrape Configuration
The following scrape configuration has to be applied to the external Prometheus instance.
Note
The scrape configuration below is a subset of the full linkerd-prometheus scrape configuration.
Before applying it, replace the templated values (wrapped in `{{}}`) with concrete values for your environment; otherwise the configuration will not work.
```yaml
- job_name: 'linkerd-controller'
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names:
      - '{{.Values.linkerdNamespace}}'
      - '{{.Values.namespace}}'
  relabel_configs:
  - source_labels:
    - __meta_kubernetes_pod_container_port_name
    action: keep
    regex: admin-http
  - source_labels: [__meta_kubernetes_pod_container_name]
    action: replace
    target_label: component

- job_name: 'linkerd-service-mirror'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels:
    - __meta_kubernetes_pod_label_linkerd_io_control_plane_component
    - __meta_kubernetes_pod_container_port_name
    action: keep
    regex: linkerd-service-mirror;admin-http$
  - source_labels: [__meta_kubernetes_pod_container_name]
    action: replace
    target_label: component

- job_name: 'linkerd-proxy'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels:
    - __meta_kubernetes_pod_container_name
    - __meta_kubernetes_pod_container_port_name
    - __meta_kubernetes_pod_label_linkerd_io_control_plane_ns
    action: keep
    regex: ^{{default "linkerd-proxy" .Values.proxyContainerName}};linkerd-admin;{{.Values.linkerdNamespace}}$
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod
  # special case k8s' "job" label, to not interfere with prometheus' "job"
  # label
  # __meta_kubernetes_pod_label_linkerd_io_proxy_job=foo =>
  # k8s_job=foo
  - source_labels: [__meta_kubernetes_pod_label_linkerd_io_proxy_job]
    action: replace
    target_label: k8s_job
  # drop __meta_kubernetes_pod_label_linkerd_io_proxy_job
  - action: labeldrop
    regex: __meta_kubernetes_pod_label_linkerd_io_proxy_job
  # __meta_kubernetes_pod_label_linkerd_io_proxy_deployment=foo =>
  # deployment=foo
  - action: labelmap
    regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
  # drop all labels that we just made copies of in the previous labelmap
  - action: labeldrop
    regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
  # __meta_kubernetes_pod_label_linkerd_io_foo=bar =>
  # foo=bar
  - action: labelmap
    regex: __meta_kubernetes_pod_label_linkerd_io_(.+)
  # Copy all pod labels to tmp labels
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
    replacement: __tmp_pod_label_$1
  # Take `linkerd_io_` prefixed labels and copy them without the prefix
  - action: labelmap
    regex: __tmp_pod_label_linkerd_io_(.+)
    replacement: __tmp_pod_label_$1
  # Drop the `linkerd_io_` originals
  - action: labeldrop
    regex: __tmp_pod_label_linkerd_io_(.+)
  # Copy tmp labels into real labels
  - action: labelmap
    regex: __tmp_pod_label_(.+)
```
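For context, these jobs belong under the top-level `scrape_configs` key of a standard Prometheus configuration file. A minimal skeleton might look like the following (the intervals shown are illustrative assumptions; tune them to your environment):

```yaml
# prometheus.yml -- minimal skeleton around the Linkerd scrape jobs
global:
  scrape_interval: 10s
  evaluation_interval: 10s

scrape_configs:
# ... the linkerd-controller, linkerd-service-mirror, and
# linkerd-proxy jobs from above go here ...
```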
The running configuration of the built-in Prometheus can be used as a reference:
```bash
kubectl -n linkerd-viz get configmap prometheus-config -o yaml
```
Linkerd-Viz Extension Configuration
Linkerd's viz extension components, such as metrics-api, depend on the Prometheus instance to power the dashboard and CLI.
The `prometheusUrl` field gives you a single place through which all of these components can be pointed at an external Prometheus URL. This can be configured both through the CLI and through Helm.
CLI
This can be done by passing a file containing the above field to the values flag of the `linkerd viz install` command.
```yaml
prometheusUrl: existing-prometheus.xyz:9090
```
Note that this configuration is not persistent across installs: it must be passed again on every re-install, upgrade, etc.
When using an external Prometheus and setting the `prometheusUrl` field, Linkerd's own Prometheus will still be installed. To disable it, be sure to include the following configuration as well:
```yaml
prometheus:
  enabled: false
```
Helm
The same configuration can be applied through values.yaml when using Helm. Once applied, Helm keeps the configuration persistent across upgrades.
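As a sketch, the relevant values.yaml entries might look like the following (the Prometheus URL shown is the placeholder from this tutorial; substitute your own):

```yaml
# values.yaml -- point the viz extension at an external Prometheus
# and disable the bundled instance
prometheusUrl: existing-prometheus.xyz:9090
prometheus:
  enabled: false
```

This file would then be passed to Helm with its `-f`/`--values` flag when installing or upgrading the linkerd-viz chart.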
More information on installation through Helm can be found in the Linkerd Helm installation documentation.
