Getting started

This guide is a "Hello World"-style tutorial which shows how to install, configure, and use Prometheus in a simple example setup. You will download and run Prometheus locally, configure it to scrape itself and an example application, and then work with queries, rules, and graphs to make use of the collected time series data.

Downloading and running Prometheus

Download the latest release of Prometheus for your platform, then extract and run it:

```bash
tar xvfz prometheus-*.tar.gz
cd prometheus-*
```

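If you are working on a machine without a browser, the release tarball can also be fetched on the command line before the extraction step above. The version and platform in this sketch are placeholders; substitute the real values from the download page:

```bash
# Placeholder example: replace <version> with an actual release version and
# pick the tarball matching your platform from https://prometheus.io/download/.
wget https://github.com/prometheus/prometheus/releases/download/v<version>/prometheus-<version>.linux-amd64.tar.gz
```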
Before starting Prometheus, let's configure it.

Configuring Prometheus to monitor itself

Prometheus collects metrics from monitored targets by scraping metrics HTTP endpoints on these targets. Since Prometheus also exposes data in the same manner about itself, it can also scrape and monitor its own health.

While a Prometheus server that collects only data about itself is not very useful in practice, it is a good starting example. Save the following basic Prometheus configuration as a file named prometheus.yml:

```yaml
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']
```

For a complete specification of configuration options, see the configuration documentation.

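The extracted release also contains a promtool binary; as an optional sanity check (assuming prometheus.yml sits in the same directory), you can validate the configuration before starting the server:

```bash
# Report syntax errors in the configuration file, if any.
./promtool check config prometheus.yml
```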
Starting Prometheus

To start Prometheus with your newly created configuration file, change to the directory containing the Prometheus binary and run:

```bash
# Start Prometheus.
# By default, Prometheus stores its database in ./data (flag --storage.tsdb.path).
./prometheus --config.file=prometheus.yml
```

Prometheus should start up. You should also be able to browse to a status page about itself at localhost:9090. Give it a couple of seconds to collect data about itself from its own HTTP metrics endpoint.

You can also verify that Prometheus is serving metrics about itself by navigating to its metrics endpoint: localhost:9090/metrics

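The same check works from a terminal, assuming curl is installed:

```bash
# Show the first few lines of Prometheus's own metrics exposition.
curl -s http://localhost:9090/metrics | head
```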
Using the expression browser

Let us try looking at some data that Prometheus has collected about itself. To use Prometheus's built-in expression browser, navigate to http://localhost:9090/graph and choose the "Console" view within the "Graph" tab.

As you can gather from localhost:9090/metrics, one metric that Prometheus exports about itself is called prometheus_target_interval_length_seconds (the actual amount of time between target scrapes). Go ahead and enter this into the expression console:

```
prometheus_target_interval_length_seconds
```

This should return a number of different time series (along with the latest value recorded for each), all with the metric name prometheus_target_interval_length_seconds, but with different labels. These labels designate different latency percentiles and target group intervals.

If we were only interested in the 99th percentile latencies, we could use this query to retrieve that information:

```
prometheus_target_interval_length_seconds{quantile="0.99"}
```

To count the number of returned time series, you could write:

```
count(prometheus_target_interval_length_seconds)
```

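The expression browser is not the only way to run such queries; as a rough sketch, the count query above can also be sent to Prometheus's HTTP API with curl (the /api/v1/query endpoint returns JSON):

```bash
# Evaluate the query via the HTTP API instead of the expression browser.
curl -G http://localhost:9090/api/v1/query \
  --data-urlencode 'query=count(prometheus_target_interval_length_seconds)'
```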
For more about the expression language, see the expression language documentation.

Using the graphing interface

To graph expressions, navigate to http://localhost:9090/graph and use the "Graph" tab.

For example, enter the following expression to graph the per-second rate of chunks being created in the self-scraped Prometheus:

```
rate(prometheus_tsdb_head_chunks_created_total[1m])
```

Experiment with the graph range parameters and other settings.

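Under the hood, the graph view fetches its data through the range-query HTTP API, so the range and resolution settings map to start, end, and step parameters. A quick sketch of fetching the last five minutes of the chunk-creation rate as raw data points:

```bash
# Query a 5-minute range of the chunk-creation rate at 15-second resolution.
end=$(date +%s)
start=$((end - 300))
curl -G http://localhost:9090/api/v1/query_range \
  --data-urlencode 'query=rate(prometheus_tsdb_head_chunks_created_total[1m])' \
  --data-urlencode "start=${start}" \
  --data-urlencode "end=${end}" \
  --data-urlencode 'step=15'
```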
Starting up some sample targets

Let us make this more interesting and start some example targets for Prometheus to scrape.

The Go client library includes an example which exports fictional RPC latencies for three services with different latency distributions.

Ensure you have the Go compiler installed and have a working Go build environment (with correct GOPATH) set up.

Download the Go client library for Prometheus and run three of these example processes:

```bash
# Fetch the client library code and compile the example.
git clone https://github.com/prometheus/client_golang.git
cd client_golang/examples/random
go get -d
go build

# Start 3 example targets in separate terminals:
./random -listen-address=:8080
./random -listen-address=:8081
./random -listen-address=:8082
```

You should now have example targets listening on http://localhost:8080/metrics, http://localhost:8081/metrics, and http://localhost:8082/metrics.

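To confirm this from a terminal instead (assuming curl), a small loop like the following should print one of the fictional RPC metrics from each target:

```bash
# Check that each example target responds and exposes rpc_durations_seconds.
for port in 8080 8081 8082; do
  curl -s "http://localhost:${port}/metrics" | grep -m 1 '^rpc_durations_seconds'
done
```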
Configuring Prometheus to monitor the sample targets

Now we will configure Prometheus to scrape these new targets. Let's group all three endpoints into one job called example-random. However, imagine that the first two endpoints are production targets, while the third one represents a canary instance. To model this in Prometheus, we can add several groups of endpoints to a single job, adding extra labels to each group of targets. In this example, we will add the group="production" label to the first group of targets, while adding group="canary" to the second.

To achieve this, add the following job definition to the scrape_configs section in your prometheus.yml and restart your Prometheus instance:

```yaml
scrape_configs:
  - job_name: 'example-random'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:8080', 'localhost:8081']
        labels:
          group: 'production'

      - targets: ['localhost:8082']
        labels:
          group: 'canary'
```

Go to the expression browser and verify that Prometheus now has information about time series that these example endpoints expose, such as the rpc_durations_seconds metric.

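As a quick illustration of the new group label (a sketch; it assumes the canary target has been scraped at least once), you could also select only the canary instance's series, either in the expression browser or via the HTTP API:

```bash
# Select only series scraped from the canary target group.
curl -G http://localhost:9090/api/v1/query \
  --data-urlencode 'query=rpc_durations_seconds{group="canary"}'
```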
Configure rules for aggregating scraped data into new time series

Though not a problem in our example, queries that aggregate over thousands of time series can get slow when computed ad-hoc. To make this more efficient, Prometheus allows you to prerecord expressions into completely new persisted time series via configured recording rules. Let's say we are interested in recording the per-second rate of example RPCs (rpc_durations_seconds_count) averaged over all instances (but preserving the job and service dimensions) as measured over a window of 5 minutes. We could write this as:

```
avg(rate(rpc_durations_seconds_count[5m])) by (job, service)
```

Try graphing this expression.

To record the time series resulting from this expression into a new metric called job_service:rpc_durations_seconds_count:avg_rate5m, create a file with the following recording rule and save it as prometheus.rules.yml:

```yaml
groups:
  - name: example
    rules:
      - record: job_service:rpc_durations_seconds_count:avg_rate5m
        expr: avg(rate(rpc_durations_seconds_count[5m])) by (job, service)
```

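Optionally, the rule file can be linted with the promtool binary from the release tarball before wiring it into the main configuration:

```bash
# Report syntax errors in the recording rule file, if any.
./promtool check rules prometheus.rules.yml
```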
To make Prometheus pick up this new rule, add a rule_files statement in your prometheus.yml. The config should now look like this:

```yaml
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.

  # Attach these extra labels to all timeseries collected by this Prometheus instance.
  external_labels:
    monitor: 'codelab-monitor'

rule_files:
  - 'prometheus.rules.yml'

scrape_configs:
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'example-random'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:8080', 'localhost:8081']
        labels:
          group: 'production'

      - targets: ['localhost:8082']
        labels:
          group: 'canary'
```

Restart Prometheus with the new configuration and verify that a new time series with the metric name job_service:rpc_durations_seconds_count:avg_rate5m is now available by querying it through the expression browser or graphing it.
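One way to perform that check without the UI (a sketch; give the rule an evaluation interval or two to produce data first) is again through the HTTP API:

```bash
# The recorded series should appear under its new metric name.
curl -G http://localhost:9090/api/v1/query \
  --data-urlencode 'query=job_service:rpc_durations_seconds_count:avg_rate5m'
```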