metrics stage

The metrics stage is an action stage that allows for defining and updating metrics based on data from the extracted map. Note that created metrics are not pushed to Loki and are instead exposed via Promtail's /metrics endpoint. Prometheus should be configured to scrape Promtail to be able to retrieve the metrics configured by this stage.
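For reference, a minimal Prometheus scrape configuration for Promtail could look like the sketch below. The job name and target address are assumptions; adjust the port to match Promtail's http_listen_port (9080 in the example configurations shipped with Promtail).

  scrape_configs:
    - job_name: promtail
      static_configs:
        # Assumes Promtail's HTTP server listens on localhost:9080;
        # metrics created by this stage are exposed at /metrics.
        - targets: ['localhost:9080']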

Schema

  # A map where the key is the name of the metric and the value is a specific
  # metric type.
  metrics:
    [<string>: [ <metric_counter> | <metric_gauge> | <metric_histogram> ] ...]

metric_counter

Defines a counter metric whose value only goes up.

  # The metric type. Must be Counter.
  type: Counter
  # Describes the metric.
  [description: <string>]
  # Key from the extracted data map to use for the metric,
  # defaulting to the metric's name if not present.
  [source: <string>]
  config:
    # Filters down source data and only changes the metric
    # if the targeted value exactly matches the provided string.
    # If not present, all data will match.
    [value: <string>]
    # Must be either "inc" or "add" (case insensitive). If
    # inc is chosen, the metric value will increase by 1 for each
    # log line received that passed the filter. If add is chosen,
    # the extracted value must be convertible to a positive float
    # and its value will be added to the metric.
    action: <string>

metric_gauge

Defines a gauge metric whose value can go up or down.

  # The metric type. Must be Gauge.
  type: Gauge
  # Describes the metric.
  [description: <string>]
  # Key from the extracted data map to use for the metric,
  # defaulting to the metric's name if not present.
  [source: <string>]
  config:
    # Filters down source data and only changes the metric
    # if the targeted value exactly matches the provided string.
    # If not present, all data will match.
    [value: <string>]
    # Must be either "set", "inc", "dec", "add", or "sub". If
    # add, set, or sub is chosen, the extracted value must be
    # convertible to a positive float. inc and dec will increment
    # or decrement the metric's value by 1 respectively.
    action: <string>

metric_histogram

Defines a histogram metric whose values are bucketed.

  # The metric type. Must be Histogram.
  type: Histogram
  # Describes the metric.
  [description: <string>]
  # Key from the extracted data map to use for the metric,
  # defaulting to the metric's name if not present.
  [source: <string>]
  config:
    # Filters down source data and only changes the metric
    # if the targeted value exactly matches the provided string.
    # If not present, all data will match.
    [value: <string>]
    # Must be either "inc" or "add" (case insensitive). If
    # inc is chosen, the metric value will increase by 1 for each
    # log line received that passed the filter. If add is chosen,
    # the extracted value must be convertible to a positive float
    # and its value will be added to the metric.
    action: <string>
    # Holds all the numbers in which to bucket the metric.
    buckets:
      - <float>

Examples

Counter

  - metrics:
      log_lines_total:
        type: Counter
        description: "total number of log lines"
        source: time
        config:
          action: inc

This pipeline creates a log_lines_total counter that increments whenever the extracted map contains a key for time. Since every log entry has a timestamp, this is a good field to use to count every line. Notice that value is not defined in the config section as we want to count every line and don't need to filter the value. Similarly, inc is used as the action because we want to increment the counter by one rather than by using the value of time.

  - regex:
      expression: "^.*(?P<order_success>order successful).*$"
  - metrics:
      successful_orders_total:
        type: Counter
        description: "log lines with the message `order successful`"
        source: order_success
        config:
          action: inc

This pipeline first tries to find order successful in the log line, extracting it as the order_success field in the extracted map. The metrics stage then creates a metric called successful_orders_total whose value only increases when order_success was found in the extracted map.

The result of this pipeline is a metric whose value only increases when a log line with the text order successful was scraped by Promtail.

  - regex:
      expression: "^.* order_status=(?P<order_status>.*?) .*$"
  - metrics:
      successful_orders_total:
        type: Counter
        description: "successful orders"
        source: order_status
        config:
          value: success
          action: inc
      failed_orders_total:
        type: Counter
        description: "failed orders"
        source: order_status
        config:
          value: fail
          action: inc

This pipeline first tries to find text in the format order_status=<value> in the log line, pulling out the <value> into the extracted map with the key order_status.

The metrics stage creates successful_orders_total and failed_orders_total metrics that only increment when the value of order_status in the extracted map is success or fail respectively.
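The counter add action is not shown above; as a hedged sketch (the order_bytes field and the order_bytes_total metric are hypothetical, not taken from the examples above), a counter can also accumulate an extracted numeric value rather than incrementing by one:

  - regex:
      expression: "^.* order_bytes=(?P<order_bytes>\d+) .*$"
  - metrics:
      order_bytes_total:
        type: Counter
        description: "total bytes across all orders"
        source: order_bytes
        config:
          # add requires the extracted value to be convertible to a
          # positive float; it is summed into the counter.
          action: add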

Gauge

Gauge examples will be very similar to Counter examples with additional action values.

  - regex:
      expression: "^.* retries=(?P<retries>\d+) .*$"
  - metrics:
      retries_total:
        type: Gauge
        description: "total retries"
        source: retries
        config:
          action: add

This pipeline first tries to find text in the format retries=<value> in the log line, pulling out the <value> into the extracted map with the key retries. Note that the regex only parses numbers for the value in retries.

The metrics stage then creates a Gauge and, for every line that passes the filter, adds the number from the retries field in the extracted map to the gauge's current value.
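For the set action, a sketch along the same lines (the queue_depth field and metric name are hypothetical) would replace the gauge's value with the extracted number instead of adding to it:

  - regex:
      expression: "^.* queue_depth=(?P<queue_depth>\d+) .*$"
  - metrics:
      queue_depth:
        type: Gauge
        description: "current depth of the work queue"
        source: queue_depth
        config:
          # set overwrites the gauge with the extracted value, which
          # must be convertible to a positive float.
          action: set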

Histogram

  - metrics:
      http_response_time_seconds:
        type: Histogram
        description: "response time in seconds"
        source: response_time
        config:
          buckets: [0.001,0.0025,0.005,0.010,0.025,0.050]

This pipeline creates a histogram that reads response_time from the extracted map and places it into a bucket, increasing the bucket's count and adding the value to the histogram's sum.
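The example assumes that response_time is already present in the extracted map, so a preceding stage must extract it. One possible sketch, assuming JSON-formatted log lines with a hypothetical response_time field, is:

  - json:
      expressions:
        # Pull response_time out of the JSON log line into the extracted map.
        response_time: response_time
  - metrics:
      http_response_time_seconds:
        type: Histogram
        description: "response time in seconds"
        source: response_time
        config:
          buckets: [0.001,0.0025,0.005,0.010,0.025,0.050]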