Fault Tolerance & Recovery

This page walks you through a simple demonstration of how CockroachDB remains available during, and recovers after, failure. Starting with a 3-node local cluster, you'll remove a node and see how the cluster continues uninterrupted. You'll then write some data while the node is offline, rejoin the node, and see how it catches up with the rest of the cluster. Finally, you'll add a fourth node, remove a node again, and see how missing replicas eventually re-replicate to the new node.

Before you begin

Make sure you have already installed CockroachDB.

Step 1. Start a 3-node cluster

Use the cockroach start command to start 3 nodes:

  # In a new terminal, start node 1:
  $ cockroach start \
  --insecure \
  --store=fault-node1 \
  --listen-addr=localhost:26257 \
  --http-addr=localhost:8080 \
  --join=localhost:26257,localhost:26258,localhost:26259

  # In a new terminal, start node 2:
  $ cockroach start \
  --insecure \
  --store=fault-node2 \
  --listen-addr=localhost:26258 \
  --http-addr=localhost:8081 \
  --join=localhost:26257,localhost:26258,localhost:26259

  # In a new terminal, start node 3:
  $ cockroach start \
  --insecure \
  --store=fault-node3 \
  --listen-addr=localhost:26259 \
  --http-addr=localhost:8082 \
  --join=localhost:26257,localhost:26258,localhost:26259

Step 2. Initialize the cluster

In a new terminal, use the cockroach init command to perform a one-time initialization of the cluster:

  $ cockroach init \
  --insecure \
  --host=localhost:26257

Step 3. Verify that the cluster is live

In a new terminal, use the cockroach sql command to connect the built-in SQL shell to any node:

  $ cockroach sql --insecure --host=localhost:26257

  > SHOW DATABASES;

    database_name
  +---------------+
    defaultdb
    postgres
    system
  (3 rows)

Exit the SQL shell:

  > \q
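If you prefer not to keep an interactive shell open for quick checks like this, the same statement can be run non-interactively with the --execute flag of cockroach sql. A minimal sketch, assuming the same insecure local cluster as above:

  # Run a single statement against node 1 and print the result:
  $ cockroach sql --insecure --host=localhost:26257 \
  --execute="SHOW DATABASES;"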

Step 4. Remove a node temporarily

In the terminal running node 2, press CTRL-C to stop the node.

Alternatively, you can open a new terminal and run the cockroach quit command against port 26258:

  $ cockroach quit --insecure --host=localhost:26258

  initiating graceful shutdown of server
  ok

Step 5. Verify that the cluster remains available

Switch to the terminal for the built-in SQL shell and reconnect the shell to node 1 (port 26257) or node 3 (port 26259):

  $ cockroach sql --insecure --host=localhost:26259

  > SHOW DATABASES;

    database_name
  +---------------+
    defaultdb
    postgres
    system
  (3 rows)

As you can see, despite one node being offline, the cluster continues uninterrupted because a majority of replicas (2 of 3) remains available. If you were to remove another node, however, leaving only one node live, the cluster would become unresponsive until another node was brought back online.

Exit the SQL shell:

  > \q
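You can also check node liveness from the command line rather than through SQL. A small sketch using cockroach node status against a live node (the exact output columns vary by CockroachDB version); the stopped node 2 should show as not live:

  # Ask any live node for the status of all nodes in the cluster:
  $ cockroach node status --insecure --host=localhost:26257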

Step 6. Write data while the node is offline

In the same terminal, use the cockroach workload command to generate an example startrek database:

  $ cockroach workload init startrek \
  'postgresql://root@localhost:26257?sslmode=disable'

Then reconnect the SQL shell to node 1 (port 26257) or node 3 (port 26259) and verify that the new startrek database was added with two tables, episodes and quotes:

  $ cockroach sql --insecure --host=localhost:26259

  > SHOW DATABASES;

    database_name
  +---------------+
    defaultdb
    postgres
    startrek
    system
  (4 rows)

  > SHOW TABLES FROM startrek;

    table_name
  +------------+
    episodes
    quotes
  (2 rows)

  > SELECT * FROM startrek.episodes WHERE stardate > 5500;

    id | season | num |               title               | stardate
  +----+--------+-----+-----------------------------------+----------+
    60 |      3 |   5 | Is There in Truth No Beauty?      |   5630.7
    62 |      3 |   7 | Day of the Dove                   |   5630.3
    64 |      3 |   9 | The Tholian Web                   |   5693.2
    65 |      3 |  10 | Plato's Stepchildren              |   5784.2
    66 |      3 |  11 | Wink of an Eye                    |   5710.5
    69 |      3 |  14 | Whom Gods Destroy                 |   5718.3
    70 |      3 |  15 | Let That Be Your Last Battlefield |   5730.2
    73 |      3 |  18 | The Lights of Zetar               |   5725.3
    74 |      3 |  19 | Requiem for Methuselah            |   5843.7
    75 |      3 |  20 | The Way to Eden                   |   5832.3
    76 |      3 |  21 | The Cloud Minders                 |   5818.4
    77 |      3 |  22 | The Savage Curtain                |   5906.4
    78 |      3 |  23 | All Our Yesterdays                |   5943.7
    79 |      3 |  24 | Turnabout Intruder                |   5928.5
  (14 rows)

Exit the SQL shell:

  > \q
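If you want a quick sense of how much data the workload loaded, a one-off count works as well. A small sketch, run against any live node (the exact counts depend on the workload version):

  # Count the rows the startrek workload created:
  $ cockroach sql --insecure --host=localhost:26259 \
  --execute="SELECT count(*) FROM startrek.episodes" \
  --execute="SELECT count(*) FROM startrek.quotes"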

Step 7. Rejoin the node to the cluster

Switch to the terminal for node 2 and rejoin it to the cluster, using the same command that you used in step 1:

  $ cockroach start \
  --insecure \
  --store=fault-node2 \
  --listen-addr=localhost:26258 \
  --http-addr=localhost:8081 \
  --join=localhost:26257,localhost:26258,localhost:26259

  CockroachDB node starting at 2019-04-30 15:10:52.34274101 +0000 UTC
  build:      CCL v19.1.0 @ 2019/04/30 14:48:26 (go1.11.6)
  admin:      http://localhost:8081
  sql:        postgresql://root@localhost:26258?sslmode=disable
  logs:       fault-node2/logs
  store[0]:   path=fault-node2
  status:     restarted pre-existing node
  clusterID:  {5638ba53-fb77-4424-ada9-8a23fbce0ae9}
  nodeID:     2

Step 8. Verify that the rejoined node has caught up

Switch to the terminal for the built-in SQL shell, connect the shell to the rejoined node 2 (port 26258), and check for the startrek data that was added while the node was offline:

  $ cockroach sql --insecure --host=localhost:26258

  > SELECT * FROM startrek.episodes WHERE stardate > 5500;

    id | season | num |               title               | stardate
  +----+--------+-----+-----------------------------------+----------+
    60 |      3 |   5 | Is There in Truth No Beauty?      |   5630.7
    62 |      3 |   7 | Day of the Dove                   |   5630.3
    64 |      3 |   9 | The Tholian Web                   |   5693.2
    65 |      3 |  10 | Plato's Stepchildren              |   5784.2
    66 |      3 |  11 | Wink of an Eye                    |   5710.5
    69 |      3 |  14 | Whom Gods Destroy                 |   5718.3
    70 |      3 |  15 | Let That Be Your Last Battlefield |   5730.2
    73 |      3 |  18 | The Lights of Zetar               |   5725.3
    74 |      3 |  19 | Requiem for Methuselah            |   5843.7
    75 |      3 |  20 | The Way to Eden                   |   5832.3
    76 |      3 |  21 | The Cloud Minders                 |   5818.4
    77 |      3 |  22 | The Savage Curtain                |   5906.4
    78 |      3 |  23 | All Our Yesterdays                |   5943.7
    79 |      3 |  24 | Turnabout Intruder                |   5928.5
  (14 rows)

At first, while node 2 is catching up, it acts as a proxy, forwarding requests to one of the other nodes that has the data. This shows that even when a copy of the data is not yet local to a node, the node still has seamless access to it.

Soon enough, node 2 catches up entirely. To verify, open the Admin UI at http://localhost:8080 to see that all three nodes are listed, and the replica count is identical for each. This means that all data in the cluster has been replicated 3 times; there's a copy of every piece of data on each node.

Tip:
CockroachDB replicates data 3 times by default. You can customize the number and location of replicas for the entire cluster or for specific sets of data using replication zones.
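For example, on a cluster with five or more nodes, you could raise the default replication factor. A minimal sketch of the relevant SQL, run from any node's SQL shell (the value 5 is illustrative):

  -- Inspect the current default replication settings:
  > SHOW ZONE CONFIGURATION FOR RANGE default;

  -- Raise the default replication factor (requires enough nodes to place 5 replicas):
  > ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;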


Step 9. Add another node

Now, to prepare the cluster for a permanent node failure, open a new terminal and add a fourth node:

  $ cockroach start \
  --insecure \
  --store=fault-node4 \
  --listen-addr=localhost:26260 \
  --http-addr=localhost:8083 \
  --join=localhost:26257,localhost:26258,localhost:26259

  CockroachDB node starting at 2019-04-30 15:10:52.34274101 +0000 UTC
  build:      CCL v19.1.0 @ 2019/04/30 14:48:26 (go1.11.6)
  admin:      http://localhost:8083
  sql:        postgresql://root@localhost:26260?sslmode=disable
  logs:       fault-node4/logs
  store[0]:   path=fault-node4
  status:     initialized new node, joined pre-existing cluster
  clusterID:  {5638ba53-fb77-4424-ada9-8a23fbce0ae9}
  nodeID:     4

Step 10. Remove a node permanently

Again, switch to the terminal running node 2 and press CTRL-C to stop it.

Alternatively, you can open a new terminal and run the cockroach quit command against port 26258:

  $ cockroach quit --insecure --host=localhost:26258

  initiating graceful shutdown of server
  ok
  server drained and shutdown completed

Step 11. Verify that the cluster re-replicates missing replicas

Back in the Admin UI, you'll see 4 nodes listed. After about 1 minute, the dot next to node 2 will turn yellow, indicating that the node is not responding.


After about 10 minutes, node 2 will move into a Dead Nodes section, indicating that the node is not expected to come back. At this point, in the Live Nodes section, you should also see that the Replicas count for node 4 matches the count for nodes 1 and 3, the other live nodes. This indicates that all of the missing replicas (those that were on node 2) have been re-replicated to node 4.
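You can watch the same re-replication progress from the command line. A small sketch using cockroach node status with its --ranges flag, run against any live node (the flag and its exact columns may differ between versions); once re-replication finishes, the under-replicated range count should drop back to 0:

  # Show per-node range and replica details; watch ranges_underreplicated:
  $ cockroach node status --ranges --insecure --host=localhost:26257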


Step 12. Stop the cluster

Once you're done with your test cluster, stop each node by switching to its terminal and pressing CTRL-C.

Tip:
For the last node, the shutdown process will take longer (about a minute) and will eventually force-kill the node. This is because, with only one node still online, a majority of replicas (2 of 3) is no longer available, and so the cluster is not operational. To speed up the process, press CTRL-C a second time.

If you do not plan to restart the cluster, you may want to remove the nodes' data stores:

  $ rm -rf fault-node1 fault-node2 fault-node3 fault-node4

What's next?

Explore other core CockroachDB benefits and features.
