Write data to Promscale using the remote-write API

Promscale provides a remote-write endpoint for ingesting data. The endpoint is compatible with Prometheus. You can also push data to Promscale from a custom application.

To configure Prometheus for remote writing to Promscale, see the section on writing from Prometheus. This section covers how to use the remote-write endpoint with a custom application.

Overview of remote-write endpoint

The remote-write endpoint is located at http://<PROMSCALE_URL>:<PORT>/write. The URL and port depend on your configuration. For example, if you’re running locally on the default port, the endpoint is located at http://localhost:9201/write.

Only the HTTP POST method is supported.

The default data format is Protobuf, but JSON streaming and text are also supported.

For more information on endpoint details, see the Prometheus documentation.

Formats

This section covers formats for sending data to the Promscale endpoint, including:

  • General format for time-series data
  • Protobuf format
  • JSON streaming format
  • Prometheus and OpenMetrics text format

General format for time-series data

A Prometheus time series must contain both labels and samples.

Labels are a collection of label name/value pairs with unique label names. Every request must contain a label named __name__ which cannot have an empty value. All other labels are optional metadata.

A sample must contain two things:

  • An integer timestamp in milliseconds since the UNIX epoch. In other words, the number of milliseconds since 1970-01-01 00:00:00 UTC, excluding leap seconds. This should be represented as required by Go’s ParseInt function.
  • A floating point number that represents the actual measured value.
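
For illustration, here is a minimal sketch in Go of the two required parts; the metric name and label values are hypothetical:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Labels: the __name__ label is required and must not be empty.
        labels := map[string]string{
            "__name__": "cpu_usage", // metric name (required)
            "node":     "brain",     // optional metadata
        }

        // Sample: an integer millisecond timestamp plus a floating point value.
        timestamp := time.Now().UnixMilli() // milliseconds since the UNIX epoch
        value := 100.0

        fmt.Println(labels, timestamp, value)
    }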

Protobuf format

Protobuf is the default data format. It is slightly more complex than the JSON streaming format, but it is usually more performant, because the payload is smaller. We recommend this format if you’re building a system to ingest a lot of data continuously.

Using the Promscale endpoint with the Protobuf format

  1. Fetch the protocol buffer definition files from the Prometheus GitHub repository.
  2. Compile the definitions into the structures of the programming language you’re using. For instructions, see the Google Protobuf tutorials.
  3. Use the structures to construct requests to send to the Promscale write endpoint. For more information, see the Protobuf example in Go that follows.

Protobuf example in Go

Before you begin, you need the Protobuf definition files, available in the Prometheus codebase. These definitions must be compiled into the necessary types for payload serialization.

Since Prometheus is written in Go, you can directly import the pre-generated types from the prompb package in the Prometheus repository. This gives you all the necessary types to create a write request.

Then, you can:

  • Create a WriteRequest object
  • Preload it with metrics data
  • Serialize it
  • Send it as an HTTP request

A simplified main.go file for these steps looks like this:

    package main

    import (
        "bytes"
        "net/http"

        "github.com/gogo/protobuf/proto"
        "github.com/golang/snappy"
        "github.com/prometheus/prometheus/prompb"
    )

    func main() {
        // Create the WriteRequest object with metric data populated.
        wr := &prompb.WriteRequest{
            Timeseries: []prompb.TimeSeries{
                {
                    Labels: []prompb.Label{
                        {
                            Name:  "__name__",
                            Value: "foo",
                        },
                    },
                    Samples: []prompb.Sample{
                        {
                            Timestamp: 1577836800000,
                            Value:     100.0,
                        },
                    },
                },
            },
        }

        // Marshal the data into a byte slice using the protobuf library.
        data, err := proto.Marshal(wr)
        if err != nil {
            panic(err)
        }

        // Encode the content into snappy encoding.
        encoded := snappy.Encode(nil, data)
        body := bytes.NewReader(encoded)

        // Create an HTTP request from the body content and set necessary parameters.
        req, err := http.NewRequest("POST", "http://localhost:9201/write", body)
        if err != nil {
            panic(err)
        }

        // Set the required HTTP header content.
        req.Header.Set("Content-Type", "application/x-protobuf")
        req.Header.Set("Content-Encoding", "snappy")
        req.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")

        // Send request to Promscale.
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        ...
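
The elided portion usually inspects the response before finishing. Continuing the example above, a rough sketch, assuming any non-2xx status means the write was rejected:

    // Treat a non-2xx status as a failed write (sketch only; adjust to your
    // own error handling).
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        panic("remote write failed: " + resp.Status)
    }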

JSON streaming format

The JSON streaming format makes it easier to ingest metric data from third-party tools. It was introduced in Promscale and is not part of the Prometheus remote-write specification. JSON streaming is slightly less efficient than Protobuf.

This format uses a JSON stream of objects with two fields:

  • labels: A JSON object in which all the labels are represented as fields
  • samples: A JSON array that contains arrays of timestamp-value pairs

A simple payload looks like this:

    {
        "labels":{"__name__": "cpu_usage", "namespace":"dev", "node": "brain"},
        "samples":[
            [1577836800000, 100],
            [1577836801000, 99],
            [1577836802000, 98],
            ...
        ]
    }
    {
        "labels":{"__name__": "cpu_usage", "namespace":"prod", "node": "brain"},
        "samples":[
            [1577836800000, 98],
            [1577836801000, 99],
            [1577836802000, 100],
            ...
        ]
    }
    ...

In the labels object, each field name is a label name, and each field value is the corresponding label value.

The samples array must be in this format:

  • Each array must contain exactly two values.
  • The first value is a UNIX timestamp in milliseconds. It must be an integer.
  • The second value is a floating point number which represents the metric value for that timestamp.

To send this data to Promscale, send an HTTP POST request. Set the request body to the JSON payload. Also set these HTTP headers:

  • Content-Type: application/json.
  • Content-Encoding: Set to snappy if using snappy compression. Otherwise, leave unset.

Here’s an example request sent with curl:

    curl --header "Content-Type: application/json" \
        --request POST \
        --data '{"labels":{"__name__":"foo"},"samples":[[1577836800000, 100]]}' \
        "http://localhost:9201/write"

Here’s an example request using snappy encoding:

    curl --header "Content-Type: application/json" \
        --header "Content-Encoding: snappy" \
        --request POST \
        --data-binary "@snappy-payload.sz" \
        "http://localhost:9201/write"

Prometheus and OpenMetrics text format

The text format makes it easier to ingest samples using a push model. Metrics exposed in this format can be directly forwarded to Promscale. Promscale parses the text file and stores the parsed data.

For more details about these formats, see the Prometheus exposition format documentation and the OpenMetrics specification.

A simple payload might look like this:

    # HELP http_requests_total The total number of HTTP requests.
    # TYPE http_requests_total counter
    http_requests_total{method="post",code="200"} 1027 1395066363000
    http_requests_total{method="post",code="400"} 3 1395066363000
    # A histogram, which has a pretty complex representation in the text format:
    # HELP http_request_duration_seconds A histogram of the request duration.
    # TYPE http_request_duration_seconds histogram
    http_request_duration_seconds_bucket{le="0.05"} 24054
    http_request_duration_seconds_bucket{le="0.1"} 33444
    http_request_duration_seconds_bucket{le="0.2"} 100392
    http_request_duration_seconds_bucket{le="0.5"} 129389
    http_request_duration_seconds_bucket{le="1"} 133988
    http_request_duration_seconds_bucket{le="+Inf"} 144320
    http_request_duration_seconds_sum 53423
    http_request_duration_seconds_count 144320
    ...

Each sample entry is a single line in this format:

  • Metric name

  • Optional: a collection of label names and values, in curly braces, immediately following the metric name

  • Floating point number that represents the actual measured value

  • Optional: an integer timestamp in milliseconds since the UNIX epoch. In other words, the number of milliseconds since 1970-01-01 00:00:00 UTC, excluding leap seconds. This should be represented as required by Go's ParseInt function.

    If the timestamp is omitted, request time is used in its place.

Send the payload in the body of an HTTP POST request. Set these header values:

  • Content-Type: Either text/plain or application/openmetrics-text, depending on which format you use.
  • Content-Encoding: Set to snappy if using snappy compression. Otherwise, leave unset.

Here’s an example request sent with curl:

    curl --header "Content-Type: text/plain" \
        --request POST \
        --data $'test_metric 1\nanother_metric 2' \
        "http://localhost:9201/write"

Here’s an example request using snappy encoding:

    curl --header "Content-Type: application/openmetrics-text" \
        --header "Content-Encoding: snappy" \
        --request POST \
        --data-binary "@snappy-payload.sz" \
        "http://localhost:9201/write"