Monitoring and Alerting

Despite CockroachDB's various built-in safeguards against failure, it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.

This page explains available monitoring tools and critical events and metrics to alert on.

Monitoring tools

Admin UI

The built-in Admin UI gives you essential metrics about a cluster's health, such as the number of live, dead, and suspect nodes, the number of unavailable ranges, and the queries per second and service latency across the cluster. The Admin UI is accessible from every node at http://<host>:<http-port>, or http://<host>:8080 by default.

Accessing the Admin UI for a secure cluster

For each user who should have access to the Admin UI for a secure cluster, create a user with a password. When these users access the Admin UI, they will see a login screen where they must enter their username and password.
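
For example, here is a minimal sketch of creating such a user from the command line; the username roach, the password placeholder, and the --certs-dir and --host values are assumptions to replace with your own:

  $ cockroach sql --certs-dir=certs --host=localhost \
    --execute="CREATE USER roach WITH PASSWORD '<your-password>';"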

Warning:
Because the Admin UI is built into CockroachDB, if a cluster becomes unavailable, most of the Admin UI becomes unavailable as well. Therefore, it's essential to plan additional methods of monitoring cluster health as described below.

Prometheus endpoint

Every node of a CockroachDB cluster exports granular timeseries metrics at http://<host>:<http-port>/_status/vars. The metrics are formatted for easy integration with Prometheus, an open source tool for storing, aggregating, and querying timeseries data, but the format is easy to parse and can be massaged to work with other third-party monitoring systems (e.g., Sysdig and Stackdriver).

For a tutorial on using Prometheus, see Monitor CockroachDB with Prometheus.

  $ curl http://localhost:8080/_status/vars

  # HELP gossip_infos_received Number of received gossip Info objects
  # TYPE gossip_infos_received counter
  gossip_infos_received 0
  # HELP sys_cgocalls Total number of cgo calls
  # TYPE sys_cgocalls gauge
  sys_cgocalls 3501
  # HELP sys_cpu_sys_percent Current system cpu percentage
  # TYPE sys_cpu_sys_percent gauge
  sys_cpu_sys_percent 1.098855319644276e-10
  # HELP replicas_quiescent Number of quiesced replicas
  # TYPE replicas_quiescent gauge
  replicas_quiescent{store="1"} 20
  ...

Note:
In addition to using the exported timeseries data to monitor a cluster via an external system, you can write alerting rules against them to make sure you are promptly notified of critical events or issues that may require intervention or investigation. See Events to Alert On for more details.

Health endpoints

CockroachDB provides two HTTP endpoints for checking the health of individual nodes.

/health

If a node is down, the http://<host>:<http-port>/health endpoint returns a Connection refused error:

  $ curl http://localhost:8080/health

  curl: (7) Failed to connect to localhost port 8080: Connection refused

Otherwise, it returns an HTTP 200 OK status response code with details about the node:

  {
    "nodeId": 1,
    "address": {
      "networkField": "tcp",
      "addressField": "JESSEs-MBP:26257"
    },
    "buildInfo": {
      "goVersion": "go1.9",
      "tag": "v2.0-alpha.20180212-629-gf1271b232-dirty",
      "time": "2018/02/21 04:09:53",
      "revision": "f1271b2322a4a1060461707bdccd77b6d5a1843e",
      "cgoCompiler": "4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)",
      "platform": "darwin amd64",
      "distribution": "CCL",
      "type": "development",
      "dependencies": null
    }
  }

/health?ready=1

The http://<node-host>:<http-port>/health?ready=1 endpoint returns an HTTP 503 Service Unavailable status response code with an error in the following scenarios:

  • The node is being decommissioned or in the process of shutting down and is therefore not able to accept SQL connections and execute queries. This is especially useful for making sure load balancers do not direct traffic to nodes that are live but not "ready", which is a necessary check during rolling upgrades.

Tip:
If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the server.shutdown.drain_wait cluster setting to cause a node to return 503 Service Unavailable even before it has started shutting down (see the example at the end of this section).

  • The node is unable to communicate with a majority of the other nodes in the cluster, likely because the cluster is unavailable due to too many nodes being down.
  $ curl http://localhost:8080/health?ready=1

  {
    "error": "node is not ready",
    "code": 14
  }

Otherwise, it returns an HTTP 200 OK status response code with an empty body:

  {
  }
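
As a sketch of the earlier tip about load balancers, the drain wait can be raised with a cluster setting change; the 10s value here is only an illustration, not a recommendation, and the --certs-dir and --host values are placeholders:

  $ cockroach sql --certs-dir=certs --host=localhost \
    --execute="SET CLUSTER SETTING server.shutdown.drain_wait = '10s';"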

Raw status endpoints

Several endpoints return raw status metrics in JSON; they are listed at http://<host>:<http-port>/#/debug. Feel free to investigate and use these endpoints, but note that they are subject to change.
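
For example, one of these endpoints returns per-node status details; the exact path here is an assumption rather than a documented guarantee:

  $ curl http://localhost:8080/_status/nodes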

Node status command

The cockroach node status command gives you metrics about the health and status of each node.

  • With the --ranges flag, you get granular range and replica details, including unavailability and under-replication.
  • With the --stats flag, you get granular disk usage details.
  • With the --decommission flag, you get details about the node decommissioning process.
  • With the --all flag, you get all of the above.
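
For example, to include range and replica details in the output for a secure cluster (the --certs-dir and --host values are placeholders for your own deployment):

  $ cockroach node status --ranges --certs-dir=certs --host=localhost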

Events to alert on

Active monitoring helps you spot problems early, but it is also essential to create alerting rules that promptly send notifications when there are events that require investigation or intervention. This section identifies the most important events to create alerting rules for, with the Prometheus Endpoint metrics to use for detecting the events.

Tip:
If you use Prometheus for monitoring, you can also use our pre-defined alerting rules with Alertmanager. See Monitor CockroachDB with Prometheus for guidance.

Node is down

  • Rule: Send an alert when a node has been down for 5 minutes or more.

  • How to detect: If a node is down, its _status/vars endpoint will return a Connection refused error. Otherwise, the liveness_livenodes metric will be the total number of live nodes in the cluster.
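
As a rough manual spot check (not a replacement for an alerting rule), the metric can be read directly from any reachable node; the address and the sample value shown are illustrative:

  $ curl -s http://localhost:8080/_status/vars | grep '^liveness_livenodes'
  liveness_livenodes 3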

Node is restarting too frequently

  • Rule: Send an alert if a node has restarted more than 5 times in 10 minutes.

  • How to detect: Calculate this using the number of times the sys_uptime metric in the node's _status/vars output was reset back to zero. The sys_uptime metric gives you the length of time, in seconds, that the cockroach process has been running.
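
Here is a minimal sketch of the idea, assuming the node's metrics endpoint is reachable at localhost:8080: read sys_uptime twice and flag a restart if the value went down. A real alerting rule would track these resets across scrapes instead of sampling by hand:

  $ first=$(curl -s http://localhost:8080/_status/vars | awk '/^sys_uptime[ {]/{print int($2)}')
  $ sleep 60
  $ second=$(curl -s http://localhost:8080/_status/vars | awk '/^sys_uptime[ {]/{print int($2)}')
  $ [ "$second" -lt "$first" ] && echo "cockroach process restarted within the last minute"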

Node is running low on disk space

  • Rule: Send an alert when a node has less than 15% of free space remaining.

  • How to detect: Divide the capacity_available metric by the capacity metric in the node's _status/vars output, and alert when the result drops below 0.15.
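
As a one-off version of the same calculation (a sketch only; a monitoring system should evaluate this continuously), the two metrics can be read from _status/vars and divided. Assuming they are reported per store, as other store-level metrics in the output are, the snippet sums them across stores first:

  $ curl -s http://localhost:8080/_status/vars \
    | awk '/^capacity[{ ]/ {cap+=$2} /^capacity_available[{ ]/ {avail+=$2} END {printf "free fraction: %.2f\n", avail/cap}'

An alert would fire when that fraction drops below 0.15.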

Node is not executing SQL

  • Rule: Send an alert when a node is not executing SQL despite having connections.

  • How to detect: The sql_conns metric in the node's _status/vars output will be greater than 0 while the sql_query_count metric will be 0. You can also break this down by statement type using sql_select_count, sql_insert_count, sql_update_count, and sql_delete_count.
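
As a quick manual confirmation (an alerting rule would compare these over a time window rather than at a single instant), both metrics can be pulled in one request; the values shown are illustrative:

  $ curl -s http://localhost:8080/_status/vars | grep -E '^(sql_conns|sql_query_count)[ {]'
  sql_conns 12
  sql_query_count 0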

CA certificate expires soon

  • Rule: Send an alert when the CA certificate on a node will expire in less than a year.

  • How to detect: Calculate this using the security_certificate_expiration_ca metric in the node's _status/vars output.
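
Here is a rough shell sketch of that calculation, assuming the metric value is the certificate's expiration time expressed as Unix epoch seconds (the address is a placeholder):

  $ expires=$(curl -s http://localhost:8080/_status/vars | awk '/^security_certificate_expiration_ca[ {]/{print int($2)}')
  $ [ "$expires" -lt $(( $(date +%s) + 365*24*3600 )) ] && echo "CA certificate expires within a year"

The same pattern applies to the security_certificate_expiration_node metric for the node certificate rule below.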

Node certificate expires soon

  • Rule: Send an alert when a node's certificate will expire in less than a year.

  • How to detect: Calculate this using the security_certificate_expiration_node metric in the node's _status/vars output.

Changefeed is experiencing high latency

  • Rule: Send an alert when the latency of any changefeed running on any node is higher than the set threshold, which depends on the gc.ttlseconds variable set in the cluster.

  • How to detect: Compare the changefeed.max_behind_nanos metric in the node's _status/vars output against a threshold that is below the value of the gc.ttlseconds variable converted to nanoseconds. For example, alert when changefeed.max_behind_nanos > [some threshold].
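
A sketch of the threshold arithmetic, using a placeholder gc.ttlseconds value; the point is only that the alert threshold, expressed in nanoseconds, should sit comfortably below gc.ttlseconds converted to nanoseconds:

  $ gc_ttl_seconds=86400                                        # placeholder: your zone's gc.ttlseconds
  $ threshold_nanos=$(( gc_ttl_seconds * 1000000000 * 8 / 10 )) # e.g., alert at 80% of the GC window
  $ echo $threshold_nanos
  69120000000000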
