Creating a Single Node M3 Cluster with Binaries

This guide shows how to install and configure M3, create a single-node cluster, and read and write metrics to it.

Deploying a single-node M3 cluster is a great way to experiment with M3 and get an idea of what it has to offer, but it is not designed for production use. To run M3 in clustered mode with a separate M3Coordinator, read the clustered mode guide.

Prebuilt Binaries

M3 has pre-built binaries available for Linux and macOS. Download the latest release from GitHub.

Build From Source

Prerequisites

To build m3dbnode from source you need a clone of the M3 repository, a working Go toolchain, and make installed.

Build the Binary

  make m3dbnode

Start Binary

By default the binary configures a single M3 instance containing:

  • An M3DB storage instance for time series storage. It includes an embedded tag-based metrics index and an etcd server for storing the cluster topology and runtime configuration.
  • An M3Coordinator instance for writing and querying tagged metrics, as well as managing cluster topology and runtime configuration.

It exposes the following ports:

  • 7201 to manage the cluster topology; you make most API calls to this endpoint
  • 7203 for Prometheus to scrape the metrics produced by M3DB and M3Coordinator

The commands below start the node using the specified configuration file.

Download the example configuration file.

If you downloaded a prebuilt binary:

  ./m3dbnode -f /{FILE_LOCATION}/m3dbnode-local-etcd.yml

If you built from source:

  ./bin/m3dbnode -f ./src/dbnode/config/m3dbnode-local-etcd.yml

Depending on your operating system setup, you might need to prefix either command with sudo.


When running the command above on macOS you may see errors about “too many open files.” To fix this in your current terminal, use ulimit to increase the upper limit, for example ulimit -n 10240.
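
For example, to check and then raise the limit in the current shell session:

  # Show the current open-file limit for this shell.
  ulimit -n
  # Raise the limit for this terminal session only (10240 is an example value).
  ulimit -n 10240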

Configuration

This example uses this sample configuration file by default.

The file groups configuration into coordinator and db sections that represent the M3Coordinator and M3DB instances of the single-node cluster.

You can find more information on configuring M3DB in the operational guides section.
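
If you want to confirm the top-level sections before editing, a quick check from the shell works. This sketch assumes you saved the example file as m3dbnode-local-etcd.yml in the current directory; adjust the path to match your download location:

  # List the top-level coordinator and db sections of the example configuration file.
  grep -E '^(coordinator|db):' ./m3dbnode-local-etcd.yml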

Organizing Data with Placements and Namespaces

A time series database (TSDB) typically consists of a single node (or instance) that stores metrics data. This setup is simple to use but has issues with scalability over time as the quantity of metrics data written and read increases.

As a distributed TSDB, M3 helps solve this problem by spreading metrics data, and the demand for that data, across multiple nodes in a cluster. M3 does this by splitting the data into shards and distributing those shards across the nodes.

If you’ve worked with a distributed database before, then these concepts are probably familiar to you, but M3 uses different terminology to represent some concepts.

  • Every cluster has one placement that maps shards to nodes in the cluster.
  • A cluster can have 0 or more namespaces that are similar conceptually to tables in other databases, and each node serves every namespace for the shards it owns.

For example, if the cluster placement states that node A owns shards 1, 2, and 3, then node A owns shards 1, 2, 3 for all configured namespaces in the cluster. Each namespace has its own configuration options, including a name and retention time for the data.

Create a Placement and Namespace

This quickstart uses the http://localhost:7201/api/v1/database/create endpoint, which creates a namespace, and a placement if one doesn’t already exist, based on the type argument.

You can create placements and namespaces separately if you need more control over their settings.

In another terminal, use the following command.

  #!/bin/bash
  curl -X POST http://localhost:7201/api/v1/database/create -d '{
    "type": "local",
    "namespaceName": "default",
    "retentionTime": "12h"
  }' | jq .

  {
    "namespace": {
      "registry": {
        "namespaces": {
          "default": {
            "bootstrapEnabled": true,
            "flushEnabled": true,
            "writesToCommitLog": true,
            "cleanupEnabled": true,
            "repairEnabled": false,
            "retentionOptions": {
              "retentionPeriodNanos": "43200000000000",
              "blockSizeNanos": "1800000000000",
              "bufferFutureNanos": "120000000000",
              "bufferPastNanos": "600000000000",
              "blockDataExpiry": true,
              "blockDataExpiryAfterNotAccessPeriodNanos": "300000000000",
              "futureRetentionPeriodNanos": "0"
            },
            "snapshotEnabled": true,
            "indexOptions": {
              "enabled": true,
              "blockSizeNanos": "1800000000000"
            },
            "schemaOptions": null,
            "coldWritesEnabled": false,
            "runtimeOptions": null
          }
        }
      }
    },
    "placement": {
      "placement": {
        "instances": {
          "m3db_local": {
            "id": "m3db_local",
            "isolationGroup": "local",
            "zone": "embedded",
            "weight": 1,
            "endpoint": "127.0.0.1:9000",
            "shards": [
              {
                "id": 0,
                "state": "INITIALIZING",
                "sourceId": "",
                "cutoverNanos": "0",
                "cutoffNanos": "0"
              },
              {
                "id": 63,
                "state": "INITIALIZING",
                "sourceId": "",
                "cutoverNanos": "0",
                "cutoffNanos": "0"
              }
            ],
            "shardSetId": 0,
            "hostname": "localhost",
            "port": 9000,
            "metadata": {
              "debugPort": 0
            }
          }
        },
        "replicaFactor": 1,
        "numShards": 64,
        "isSharded": true,
        "cutoverTime": "0",
        "isMirrored": false,
        "maxShardSetId": 0
      },
      "version": 0
    }
  }

Placement initialization can take a minute or two. Once all the shards have the AVAILABLE state, the node has finished bootstrapping, and you should see the following messages in the node console output.

  {"level":"info","ts":1598367624.0117292,"msg":"bootstrap marking all shards as bootstrapped","namespace":"default","namespace":"default","numShards":64}
  {"level":"info","ts":1598367624.0301404,"msg":"bootstrap index with bootstrapped index segments","namespace":"default","numIndexBlocks":0}
  {"level":"info","ts":1598367624.0301914,"msg":"bootstrap success","numShards":64,"bootstrapDuration":0.049208827}
  {"level":"info","ts":1598367624.03023,"msg":"bootstrapped"}

You can check on the status by calling the http://localhost:7201/api/v1/services/m3db/placement endpoint:

  curl http://localhost:7201/api/v1/services/m3db/placement | jq .

  {
    "placement": {
      "instances": {
        "m3db_local": {
          "id": "m3db_local",
          "isolationGroup": "local",
          "zone": "embedded",
          "weight": 1,
          "endpoint": "127.0.0.1:9000",
          "shards": [
            {
              "id": 0,
              "state": "AVAILABLE",
              "sourceId": "",
              "cutoverNanos": "0",
              "cutoffNanos": "0"
            },
            {
              "id": 63,
              "state": "AVAILABLE",
              "sourceId": "",
              "cutoverNanos": "0",
              "cutoffNanos": "0"
            }
          ],
          "shardSetId": 0,
          "hostname": "localhost",
          "port": 9000,
          "metadata": {
            "debugPort": 0
          }
        }
      },
      "replicaFactor": 1,
      "numShards": 64,
      "isSharded": true,
      "cutoverTime": "0",
      "isMirrored": false,
      "maxShardSetId": 0
    },
    "version": 2
  }

Read more about the bootstrapping process.
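
If you would rather script the wait than re-run the command by hand, here is a minimal sketch (assuming jq is installed and the placement output has the shape shown above):

  # Poll the placement endpoint every few seconds until every shard reports AVAILABLE.
  until curl -s http://localhost:7201/api/v1/services/m3db/placement | \
    jq -e '[.placement.instances[].shards[].state] | all(. == "AVAILABLE")' > /dev/null; do
    echo "waiting for shards to become AVAILABLE..."
    sleep 5
  done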

Ready a Namespace

Once a namespace has finished bootstrapping, you must mark it as ready before it can receive traffic by using the http://localhost:7201/api/v1/services/m3db/namespace/ready endpoint.

  #!/bin/bash
  curl -X POST http://localhost:7201/api/v1/services/m3db/namespace/ready -d '{
    "name": "default"
  }' | jq .

  {
    "ready": true
  }

View Details of a Namespace

You can also view the attributes of all namespaces by calling the http://localhost:7201/api/v1/services/m3db/namespace endpoint:

  curl http://localhost:7201/api/v1/services/m3db/namespace | jq .

Add ?debug=1 to the request to convert nano units in the output into standard units.

  {
    "registry": {
      "namespaces": {
        "default": {
          "bootstrapEnabled": true,
          "flushEnabled": true,
          "writesToCommitLog": true,
          "cleanupEnabled": true,
          "repairEnabled": false,
          "retentionOptions": {
            "retentionPeriodNanos": "43200000000000",
            "blockSizeNanos": "1800000000000",
            "bufferFutureNanos": "120000000000",
            "bufferPastNanos": "600000000000",
            "blockDataExpiry": true,
            "blockDataExpiryAfterNotAccessPeriodNanos": "300000000000",
            "futureRetentionPeriodNanos": "0"
          },
          "snapshotEnabled": true,
          "indexOptions": {
            "enabled": true,
            "blockSizeNanos": "1800000000000"
          },
          "schemaOptions": null,
          "coldWritesEnabled": false,
          "runtimeOptions": null
        }
      }
    }
  }
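
As noted above, appending ?debug=1 converts the nanosecond fields into standard units. For example:

  # Same endpoint as above, with durations shown in standard units instead of nanoseconds.
  curl "http://localhost:7201/api/v1/services/m3db/namespace?debug=1" | jq .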

Writing and Querying Metrics

Writing Metrics

M3 supports ingesting statsd and Prometheus formatted metrics.

This quickstart focuses on Prometheus metrics which consist of a value, a timestamp, and tags to bring context and meaning to the metric.

You can write metrics using one of two endpoints:

  • http://localhost:7201/api/v1/prom/remote/write — writes a Prometheus remote write request to M3 as a snappy-compressed Prometheus WriteRequest protobuf message
  • http://localhost:7201/api/v1/json/write — writes a JSON payload of metrics data; convenient for testing but less performant for production use

For this quickstart, use the http://localhost:7201/api/v1/json/write endpoint to write a tagged metric to M3 with the following data in the request body. All fields are required:

  • tags: An object of at least one name/value pair
  • timestamp: The UNIX timestamp for the data
  • value: The value for the data; it can be of any type

The examples below use __name__ as the name for one of the tags, which is a Prometheus reserved tag that allows you to query metrics using the value of the tag to filter results.

Label names may contain ASCII letters, numbers, underscores, and Unicode characters. They must match the regex [a-zA-Z_][a-zA-Z0-9_]*. Label names beginning with __ are reserved for internal use. Read more in the Prometheus documentation.

  #!/bin/bash
  curl -X POST http://localhost:7201/api/v1/json/write -d '{
    "tags": {
      "__name__": "third_avenue",
      "city": "new_york",
      "checkout": "1"
    },
    "timestamp": '\"$(date "+%s")\"',
    "value": 3347.26
  }'

  #!/bin/bash
  curl -X POST http://localhost:7201/api/v1/json/write -d '{
    "tags": {
      "__name__": "third_avenue",
      "city": "new_york",
      "checkout": "1"
    },
    "timestamp": '\"$(date "+%s")\"',
    "value": 5347.26
  }'

  #!/bin/bash
  curl -X POST http://localhost:7201/api/v1/json/write -d '{
    "tags": {
      "__name__": "third_avenue",
      "city": "new_york",
      "checkout": "1"
    },
    "timestamp": '\"$(date "+%s")\"',
    "value": 7347.26
  }'
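
The three commands above differ only in the value they write. As a convenience, a small loop over the same endpoint and payload (a sketch, not required for the quickstart) produces the same three data points:

  #!/bin/bash
  # Write the same three sample values in a loop instead of running three separate commands.
  for value in 3347.26 5347.26 7347.26; do
    curl -X POST http://localhost:7201/api/v1/json/write -d '{
      "tags": {
        "__name__": "third_avenue",
        "city": "new_york",
        "checkout": "1"
      },
      "timestamp": '\"$(date "+%s")\"',
      "value": '"${value}"'
    }'
  done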

Querying Metrics

M3 supports three query engines: Prometheus (default), Graphite, and the M3 Query Engine.

This quickstart uses Prometheus as the query engine, and you have access to all the features of PromQL queries.

To query metrics, use the http://localhost:7201/api/v1/query_range endpoint with the following data in the request body. All fields are required:

  • query: A PromQL query
  • start: Timestamp in RFC3339Nano of start range for results
  • end: Timestamp in RFC3339Nano of end range for results
  • step: A duration or float of the query resolution, the interval between results in the timespan between start and end.

Below are some examples using the metrics written above.

Return results from the past 45 seconds

On Linux (GNU date):

  curl -X "POST" -G "http://localhost:7201/api/v1/query_range" \
    -d "query=third_avenue" \
    -d "start=$(date "+%s" -d "45 seconds ago")" \
    -d "end=$( date +%s )" \
    -d "step=5s" | jq .

On macOS (BSD date):

  curl -X "POST" -G "http://localhost:7201/api/v1/query_range" \
    -d "query=third_avenue" \
    -d "start=$( date -v -45S +%s )" \
    -d "end=$( date +%s )" \
    -d "step=5s" | jq .

  {
    "status": "success",
    "data": {
      "resultType": "matrix",
      "result": [
        {
          "metric": {
            "__name__": "third_avenue",
            "checkout": "1",
            "city": "new_york"
          },
          "values": [
            [
              1610746220,
              "3347.26"
            ],
            [
              1610746220,
              "5347.26"
            ],
            [
              1610746220,
              "7347.26"
            ]
          ]
        }
      ]
    }
  }

Values above a certain number

On Linux (GNU date):

  curl -X "POST" -G "http://localhost:7201/api/v1/query_range" \
    -d "query=third_avenue > 6000" \
    -d "start=$(date "+%s" -d "45 seconds ago")" \
    -d "end=$( date +%s )" \
    -d "step=5s" | jq .

On macOS (BSD date):

  curl -X "POST" -G "http://localhost:7201/api/v1/query_range" \
    -d "query=third_avenue > 6000" \
    -d "start=$(date -v -45S "+%s")" \
    -d "end=$( date +%s )" \
    -d "step=5s" | jq .

  {
    "status": "success",
    "data": {
      "resultType": "matrix",
      "result": [
        {
          "metric": {
            "__name__": "third_avenue",
            "checkout": "1",
            "city": "new_york"
          },
          "values": [
            [
              1610746220,
              "7347.26"
            ]
          ]
        }
      ]
    }
  }
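
Because Prometheus is the default query engine, any PromQL expression can be sent to the same query_range endpoint. As one more illustrative sketch (using the GNU date flags from the Linux examples above; max_over_time is a standard PromQL function), this returns the maximum value observed in each 30-second window:

  # Query the maximum value of third_avenue over a sliding 30-second window.
  curl -X "POST" -G "http://localhost:7201/api/v1/query_range" \
    -d "query=max_over_time(third_avenue[30s])" \
    -d "start=$(date "+%s" -d "45 seconds ago")" \
    -d "end=$(date +%s)" \
    -d "step=5s" | jq .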