Prometheus

Host metrics refer to the metrics collected from the operating system of the host where your applications are running. These metrics include CPU, memory, disk, and network usage. Understanding host metrics is crucial as it helps you identify potential problems or bottlenecks that could affect the overall performance of your applications.

In this tutorial, we will show you how to collect host metrics, send them to GreptimeDB and visualize them.

Create Service

To experience the full power of GreptimeCloud, you need to create a service, which contains a database with authentication. Open the GreptimeCloud console, sign up, and log in. Then click the New Service button and configure the following:

  • Service Name: A name that describes your service.
  • Description: Additional information about your service.
  • Region: Select the region where the database is located.
  • Plan: Select the pricing plan you want to use.

Now create the service; once it is ready, we can write some metrics to it.

Write Data

Prerequisites

Docker and Docker Compose are required to run this example.
Example

We will use node exporter to monitor the host system and send the metrics to GreptimeDB via Prometheus remote write.

To begin, create a new directory named quick-start-prometheus to host our project. In it, create a Docker Compose file named compose.yml and add the following:

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    depends_on:
      - node_exporter
    ports:
      - 9090:9090
    volumes:
      - ./prometheus-greptimedb.yml:/etc/prometheus/prometheus.yml:ro
  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    ports:
      - 9100:9100
    command:
      - '--path.rootfs=/'
```
The configuration file above starts a Prometheus server and a node exporter. Next, create a new file named prometheus-greptimedb.yml and add the following:

```yaml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's the node exporter.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'node'
    static_configs:
      - targets: ['node_exporter:9100']

remote_write:
  - url: https://<host>/v1/prometheus/write?db=<dbname>
    basic_auth:
      username: <username>
      password: <password>
```

The configuration file above configures Prometheus to scrape metrics from the node exporter and forward them to GreptimeDB via remote write. For the values of <host>, <dbname>, <username>, and <password>, please refer to the Prometheus documentation in GreptimeDB or GreptimeCloud.
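As an illustration, a filled-in remote_write section might look like the following; the host, database name, and credentials below are hypothetical placeholders and must be replaced with the connection information from your own service:

```yaml
remote_write:
  # Hypothetical values -- substitute your service's connection information.
  - url: https://example-host.greptime.cloud/v1/prometheus/write?db=example_db
    basic_auth:
      username: example_user
      password: example_password
```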

Finally, start the containers:

```bash
docker-compose up
```

The connection information can be found on the service page of GreptimeCloud console.
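Once metrics are flowing, you can also read them back programmatically. The sketch below builds an instant query against GreptimeDB's Prometheus-compatible HTTP API, assumed here to live under /v1/prometheus/api/v1/query, mirroring the remote write path above; the <host>, <dbname>, <username>, and <password> placeholders stand for the same connection values used in prometheus-greptimedb.yml:

```python
# Sketch: query host metrics back from GreptimeDB via its
# Prometheus-compatible HTTP API. All connection values are placeholders.
import base64
import urllib.request
from urllib.parse import urlencode


def build_query_url(host: str, dbname: str, promql: str) -> str:
    """Build an instant-query URL for the Prometheus-compatible API."""
    params = urlencode({"query": promql, "db": dbname})
    return f"https://{host}/v1/prometheus/api/v1/query?{params}"


def basic_auth_header(username: str, password: str) -> str:
    """Encode credentials for HTTP basic authentication."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"


# Assemble a request for the 1-minute load average collected by node exporter.
url = build_query_url("<host>", "<dbname>", "node_load1")
req = urllib.request.Request(
    url, headers={"Authorization": basic_auth_header("<username>", "<password>")}
)
# With real connection values you could send it:
#   with urllib.request.urlopen(req) as resp:
#       print(resp.read().decode())
```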

Visualize Data

Visualizing data in panels and monitoring metrics is an important part of a developer's daily work. From the GreptimeCloud console, click Open Prometheus Workbench, then click + New Ruleset and Add Group. Name the group host-monitor and add panels.

To add panels for all the tables you're interested in, select a table and click Add Panel, repeating for each table. Once you've added all the necessary panels, click the Save button to save them. You can then view the panels in your daily work to monitor the metrics. Additionally, you can set up alert rules for the panels to be notified when a metric exceeds its threshold.
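For example, a hypothetical alert rule for sustained high CPU usage could use a PromQL expression like the following, where the 90% threshold is an arbitrary illustration:

```promql
(1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))) > 0.9
```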