Observability Best Practices

Using Prometheus for production-scale monitoring

The recommended approach for production-scale monitoring of Istio meshes with Prometheus is to use hierarchical federation in combination with a collection of recording rules.

Default deployments of Istio include a Prometheus instance for collecting metrics generated for all mesh traffic. This Prometheus deployment is intentionally configured with a very short retention window (6 hours). It is also configured to collect metrics from each Envoy proxy running in the mesh, augmenting each metric with a set of labels identifying its origin (instance, pod, and namespace).

While the default configuration is well-suited for small clusters and monitoring over short time horizons, it is not suitable for large-scale meshes or monitoring over a period of days or weeks. In particular, the introduced labels can increase metrics cardinality, requiring a large amount of storage. Moreover, access to historical data can be paramount when identifying trends and differences in traffic over time.
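To see why those per-pod labels matter at scale, consider a rough back-of-envelope calculation. All workload, pod, and label-combination counts below are invented for illustration:

```python
# Back-of-envelope estimate of how per-pod labels inflate time-series count.
# Each unique label set produces one time series per metric, so keeping
# instance/pod labels multiplies series count by the number of pods.

def series_count(workloads: int, pods_per_workload: int, label_combinations: int) -> int:
    """Total time series for one metric across the mesh."""
    return workloads * pods_per_workload * label_combinations

# Hypothetical mesh: 50 workloads, ~200 Istio dimension combinations each.
aggregated = series_count(workloads=50, pods_per_workload=1, label_combinations=200)
per_pod = series_count(workloads=50, pods_per_workload=20, label_combinations=200)

print(aggregated)  # series after aggregating away pod/instance labels
print(per_pod)     # series when each of 20 pods contributes its own labels
```

Under these assumed numbers, dropping the per-pod labels reduces storage for a single metric by a factor of twenty.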

Architecture for production monitoring of Istio using Prometheus.

Workload-level aggregation via recording rules

To aggregate metrics across instances and pods, update the default Prometheus configuration with the following recording rules:

```yaml
groups:
- name: "istio.recording-rules"
  interval: 5s
  rules:
  - record: "workload:istio_requests_total"
    expr: |
      sum without(instance, namespace, pod) (istio_requests_total)
  - record: "workload:istio_request_duration_milliseconds_count"
    expr: |
      sum without(instance, namespace, pod) (istio_request_duration_milliseconds_count)
  - record: "workload:istio_request_duration_milliseconds_sum"
    expr: |
      sum without(instance, namespace, pod) (istio_request_duration_milliseconds_sum)
  - record: "workload:istio_request_duration_milliseconds_bucket"
    expr: |
      sum without(instance, namespace, pod) (istio_request_duration_milliseconds_bucket)
  - record: "workload:istio_request_bytes_count"
    expr: |
      sum without(instance, namespace, pod) (istio_request_bytes_count)
  - record: "workload:istio_request_bytes_sum"
    expr: |
      sum without(instance, namespace, pod) (istio_request_bytes_sum)
  - record: "workload:istio_request_bytes_bucket"
    expr: |
      sum without(instance, namespace, pod) (istio_request_bytes_bucket)
  - record: "workload:istio_response_bytes_count"
    expr: |
      sum without(instance, namespace, pod) (istio_response_bytes_count)
  - record: "workload:istio_response_bytes_sum"
    expr: |
      sum without(instance, namespace, pod) (istio_response_bytes_sum)
  - record: "workload:istio_response_bytes_bucket"
    expr: |
      sum without(instance, namespace, pod) (istio_response_bytes_bucket)
  - record: "workload:istio_tcp_connections_opened_total"
    expr: |
      sum without(instance, namespace, pod) (istio_tcp_connections_opened_total)
  - record: "workload:istio_tcp_connections_closed_total"
    expr: |
      sum without(instance, namespace, pod) (istio_tcp_connections_closed_total)
  - record: "workload:istio_tcp_sent_bytes_total_count"
    expr: |
      sum without(instance, namespace, pod) (istio_tcp_sent_bytes_total_count)
  - record: "workload:istio_tcp_sent_bytes_total_sum"
    expr: |
      sum without(instance, namespace, pod) (istio_tcp_sent_bytes_total_sum)
  - record: "workload:istio_tcp_sent_bytes_total_bucket"
    expr: |
      sum without(instance, namespace, pod) (istio_tcp_sent_bytes_total_bucket)
  - record: "workload:istio_tcp_received_bytes_total_count"
    expr: |
      sum without(instance, namespace, pod) (istio_tcp_received_bytes_total_count)
  - record: "workload:istio_tcp_received_bytes_total_sum"
    expr: |
      sum without(instance, namespace, pod) (istio_tcp_received_bytes_total_sum)
  - record: "workload:istio_tcp_received_bytes_total_bucket"
    expr: |
      sum without(instance, namespace, pod) (istio_tcp_received_bytes_total_bucket)
```
If you are using the Prometheus Operator, define the same rules with a PrometheusRule custom resource instead:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: istio-metrics-aggregation
  labels:
    app.kubernetes.io/name: istio-prometheus
spec:
  groups:
  - name: "istio.metricsAggregation-rules"
    interval: 5s
    rules:
    - record: "workload:istio_requests_total"
      expr: "sum without(instance, namespace, pod) (istio_requests_total)"
    - record: "workload:istio_request_duration_milliseconds_count"
      expr: "sum without(instance, namespace, pod) (istio_request_duration_milliseconds_count)"
    - record: "workload:istio_request_duration_milliseconds_sum"
      expr: "sum without(instance, namespace, pod) (istio_request_duration_milliseconds_sum)"
    - record: "workload:istio_request_duration_milliseconds_bucket"
      expr: "sum without(instance, namespace, pod) (istio_request_duration_milliseconds_bucket)"
    - record: "workload:istio_request_bytes_count"
      expr: "sum without(instance, namespace, pod) (istio_request_bytes_count)"
    - record: "workload:istio_request_bytes_sum"
      expr: "sum without(instance, namespace, pod) (istio_request_bytes_sum)"
    - record: "workload:istio_request_bytes_bucket"
      expr: "sum without(instance, namespace, pod) (istio_request_bytes_bucket)"
    - record: "workload:istio_response_bytes_count"
      expr: "sum without(instance, namespace, pod) (istio_response_bytes_count)"
    - record: "workload:istio_response_bytes_sum"
      expr: "sum without(instance, namespace, pod) (istio_response_bytes_sum)"
    - record: "workload:istio_response_bytes_bucket"
      expr: "sum without(instance, namespace, pod) (istio_response_bytes_bucket)"
    - record: "workload:istio_tcp_connections_opened_total"
      expr: "sum without(instance, namespace, pod) (istio_tcp_connections_opened_total)"
    - record: "workload:istio_tcp_connections_closed_total"
      expr: "sum without(instance, namespace, pod) (istio_tcp_connections_closed_total)"
    - record: "workload:istio_tcp_sent_bytes_total_count"
      expr: "sum without(instance, namespace, pod) (istio_tcp_sent_bytes_total_count)"
    - record: "workload:istio_tcp_sent_bytes_total_sum"
      expr: "sum without(instance, namespace, pod) (istio_tcp_sent_bytes_total_sum)"
    - record: "workload:istio_tcp_sent_bytes_total_bucket"
      expr: "sum without(instance, namespace, pod) (istio_tcp_sent_bytes_total_bucket)"
    - record: "workload:istio_tcp_received_bytes_total_count"
      expr: "sum without(instance, namespace, pod) (istio_tcp_received_bytes_total_count)"
    - record: "workload:istio_tcp_received_bytes_total_sum"
      expr: "sum without(instance, namespace, pod) (istio_tcp_received_bytes_total_sum)"
    - record: "workload:istio_tcp_received_bytes_total_bucket"
      expr: "sum without(instance, namespace, pod) (istio_tcp_received_bytes_total_bucket)"
```
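Since the rules above differ only in the metric name, a small script can generate them. This sketch uses an abbreviated metric list; extend ISTIO_METRICS with the remaining metrics shown above:

```python
# Generate workload-level Prometheus recording rules for Istio metrics.
# The list below is deliberately abbreviated for illustration.

ISTIO_METRICS = [
    "istio_requests_total",
    "istio_request_duration_milliseconds_count",
    "istio_request_duration_milliseconds_sum",
    "istio_request_duration_milliseconds_bucket",
    # ... remaining request/response/TCP metrics from the rules above
]

def workload_rules(metrics):
    """Build recording-rule entries that drop the per-pod labels."""
    return [
        {
            "record": f"workload:{m}",
            "expr": f"sum without(instance, namespace, pod) ({m})",
        }
        for m in metrics
    ]

for rule in workload_rules(ISTIO_METRICS):
    print(rule["record"])
```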

The recording rules above aggregate only across instances and pods (dropping the instance, namespace, and pod labels). They still preserve the full set of Istio Standard Metrics, including all Istio dimensions. While this helps control metrics cardinality via federation, you may want to further optimize the recording rules to match your existing dashboards, alerts, and ad-hoc queries.

For more information on tailoring your recording rules, see the section on Optimizing metrics collection with recording rules.

Federation using workload-level aggregated metrics

To establish Prometheus federation, modify the configuration of your production-ready deployment of Prometheus to scrape the federation endpoint of the Istio Prometheus.

Add the following job to your configuration:

```yaml
- job_name: 'istio-prometheus'
  honor_labels: true
  metrics_path: '/federate'
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names: ['istio-system']
  metric_relabel_configs:
  - source_labels: [__name__]
    regex: 'workload:(.*)'
    target_label: __name__
    action: replace
  params:
    'match[]':
    - '{__name__=~"workload:(.*)"}'
    - '{__name__=~"pilot(.*)"}'
```

If you are using the Prometheus Operator, use the following configuration instead:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: istio-federation
  labels:
    app.kubernetes.io/name: istio-prometheus
spec:
  namespaceSelector:
    matchNames:
    - istio-system
  selector:
    matchLabels:
      app: prometheus
  endpoints:
  - interval: 30s
    scrapeTimeout: 30s
    params:
      'match[]':
      - '{__name__=~"workload:(.*)"}'
      - '{__name__=~"pilot(.*)"}'
    path: /federate
    targetPort: 9090
    honorLabels: true
    metricRelabelings:
    - sourceLabels: ["__name__"]
      regex: 'workload:(.*)'
      targetLabel: "__name__"
      action: replace
```

The key to the federation configuration is matching the metrics collected by the Istio-deployed Prometheus (via the match[] parameters) and renaming them by removing the prefix used in the workload-level recording rules (workload:). This allows existing dashboards and queries to continue working seamlessly when pointed at the production Prometheus instance (and away from the Istio instance).
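The rename performed by that relabeling can be sketched in Python. Prometheus anchors relabel regexes at both ends, hence fullmatch; the metric names below are only examples:

```python
import re

# Sketch of the metric_relabel_configs entry above: a metric whose name
# matches 'workload:(.*)' has __name__ replaced by the first capture group,
# restoring the original Istio metric name. Non-matching names pass through.

RELABEL_REGEX = re.compile(r"workload:(.*)")

def relabel_name(name: str) -> str:
    m = RELABEL_REGEX.fullmatch(name)
    return m.group(1) if m else name

print(relabel_name("workload:istio_requests_total"))  # istio_requests_total
print(relabel_name("pilot_xds_pushes"))               # pilot_xds_pushes (unchanged)
```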

You can also include additional metrics (for example, envoy, go, etc.) when setting up federation.

Control plane metrics are also collected and federated up to the production Prometheus.

Optimizing metrics collection with recording rules

Beyond just using recording rules to aggregate over pods and instances, you may want to use recording rules to generate aggregated metrics tailored specifically to your existing dashboards and alerts. Optimizing your collection in this manner can result in large savings in resource consumption in your production instance of Prometheus, in addition to faster query performance.

For example, imagine a custom monitoring dashboard that used the following Prometheus queries:

  • Total rate of requests averaged over the past minute by destination service name and namespace

    ```
    sum(irate(istio_requests_total{reporter="source"}[1m]))
    by (
      destination_canonical_service,
      destination_workload_namespace
    )
    ```

  • P95 client latency averaged over the past minute by source and destination service names and namespaces

    ```
    histogram_quantile(0.95,
      sum(irate(istio_request_duration_milliseconds_bucket{reporter="source"}[1m]))
      by (
        destination_canonical_service,
        destination_workload_namespace,
        source_canonical_service,
        source_workload_namespace,
        le
      )
    )
    ```
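The P95 query above relies on histogram_quantile, which estimates a quantile from cumulative le buckets by interpolating linearly within the target bucket. A simplified Python sketch of that computation (with invented bucket counts; not Prometheus's exact implementation):

```python
import math

# Estimate a quantile from cumulative histogram buckets, mirroring the idea
# behind PromQL's histogram_quantile: find the bucket containing the target
# rank, then interpolate linearly between its bounds.

def histogram_quantile(q, buckets):
    """buckets: list of (upper_bound, cumulative_count), sorted by bound."""
    total = buckets[-1][1]
    rank = q * total
    lower, prev_count = 0.0, 0.0
    for le, count in buckets:
        if count >= rank:
            if math.isinf(le):
                return lower  # fall back to the last finite bound
            # Linear interpolation inside the bucket (lower, le].
            return lower + (le - lower) * (rank - prev_count) / (count - prev_count)
        lower, prev_count = le, count
    return lower

# Hypothetical latency buckets (ms): 90% of requests under 100ms, rest under 250ms.
buckets = [(50.0, 400), (100.0, 900), (250.0, 1000), (math.inf, 1000)]
print(histogram_quantile(0.95, buckets))  # 175.0
```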

The following set of recording rules could be added to the Istio Prometheus configuration, using the istio: prefix to make these metrics simple to identify for federation.

```yaml
groups:
- name: "istio.recording-rules"
  interval: 5s
  rules:
  - record: "istio:istio_requests:by_destination_service:rate1m"
    expr: |
      sum(irate(istio_requests_total{reporter="source"}[1m]))
      by (
        destination_canonical_service,
        destination_workload_namespace
      )
  - record: "istio:istio_request_duration_milliseconds_bucket:p95:rate1m"
    expr: |
      histogram_quantile(0.95,
        sum(irate(istio_request_duration_milliseconds_bucket{reporter="source"}[1m]))
        by (
          destination_canonical_service,
          destination_workload_namespace,
          source_canonical_service,
          source_workload_namespace,
          le
        )
      )
```

The production instance of Prometheus would then be updated to federate from the Istio instance with:

  • match clause of {__name__=~"istio:(.*)"}

  • metric relabeling config with: regex: "istio:(.*)"
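Applied to the federation job shown earlier, those two changes would look roughly like this (a sketch mirroring the workload: configuration above):

```yaml
params:
  'match[]':
  - '{__name__=~"istio:(.*)"}'
metric_relabel_configs:
- source_labels: [__name__]
  regex: 'istio:(.*)'
  target_label: __name__
  action: replace
```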

The original queries would then be replaced with:

  • istio_requests:by_destination_service:rate1m

  • avg(istio_request_duration_milliseconds_bucket:p95:rate1m)

A detailed write-up on metrics collection optimization in production at AutoTrader provides a more fully fleshed-out example of aggregating directly to the queries that power dashboards and alerts.