Change Data Capture (CDC)

Change data capture (CDC) provides efficient, distributed, row-level change feeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing.

What is change data capture?

While CockroachDB is an excellent system of record, it also needs to coexist with other systems. For example, you might want to keep your data mirrored in full-text indexes, analytics engines, or big data pipelines.

The core feature of CDC is the changefeed. Changefeeds target a whitelist of tables, whose rows are the "watched rows". Every change to a watched row is emitted as a record in a configurable format (JSON or Avro) to a configurable sink (Kafka).

Ordering guarantees

  • In most cases, each version of a row will be emitted once. However, some infrequent conditions (e.g., node failures, network partitions) can cause a version to be emitted more than once. This gives changefeeds an at-least-once delivery guarantee.

  • Once a row has been emitted with some timestamp, no previously unseen versions of that row will be emitted with a lower timestamp. That is, you will never see a new change for that row at an earlier timestamp.

For example, if you ran the following:

  1. > CREATE TABLE foo (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING);
  2. > CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://localhost:9092' WITH UPDATED;
  3. > INSERT INTO foo VALUES (1, 'Carl');
  4. > UPDATE foo SET name = 'Petee' WHERE id = 1;

You'd expect the changefeed to emit:

  1. [1] {"__crdb__": {"updated": <timestamp 1>}, "id": 1, "name": "Carl"}
  2. [1] {"__crdb__": {"updated": <timestamp 2>}, "id": 1, "name": "Petee"}

It is also possible that the changefeed emits an out of order duplicate of an earlier value that you already saw:

  1. [1] {"__crdb__": {"updated": <timestamp 1>}, "id": 1, "name": "Carl"}
  2. [1] {"__crdb__": {"updated": <timestamp 2>}, "id": 1, "name": "Petee"}
  3. [1] {"__crdb__": {"updated": <timestamp 1>}, "id": 1, "name": "Carl"}

However, you will never see an output like the following (i.e., an out of order row that you've never seen before):

  1. [1] {"__crdb__": {"updated": <timestamp 2>}, "id": 1, "name": "Petee"}
  2. [1] {"__crdb__": {"updated": <timestamp 1>}, "id": 1, "name": "Carl"}
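Because delivery is at-least-once, a consumer that needs to process each version exactly once can discard duplicates by remembering the (key, updated) pairs it has already seen. A minimal Python sketch (the in-memory set, the literal timestamps, and the message shape are illustrative; a real consumer would persist its progress):

```python
import json

def dedup(messages, seen=None):
    """Drop changefeed messages whose (key, updated) pair was already processed.

    A duplicate always carries the same "updated" timestamp as the original
    emission, so (key, updated) uniquely identifies a row version.
    """
    seen = set() if seen is None else seen
    fresh = []
    for key, payload in messages:
        row = json.loads(payload)
        version = (key, row["__crdb__"]["updated"])
        if version in seen:
            continue  # out-of-order duplicate of a version already seen
        seen.add(version)
        fresh.append(row)
    return fresh

# The duplicate stream from the example above: Carl, Petee, then Carl again.
stream = [
    ("[1]", '{"__crdb__": {"updated": "1"}, "id": 1, "name": "Carl"}'),
    ("[1]", '{"__crdb__": {"updated": "2"}, "id": 1, "name": "Petee"}'),
    ("[1]", '{"__crdb__": {"updated": "1"}, "id": 1, "name": "Carl"}'),
]
rows = dedup(stream)
# → only the first two survive; the re-emitted "Carl" is dropped
```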

  • If a row is modified more than once in the same transaction, only the last change will be emitted.

  • Rows are sharded between Kafka partitions by the row’s primary key.

  • The UPDATED option adds an "updated" timestamp to each emitted row. You can also use the RESOLVED option to emit periodic "resolved" timestamp messages to each Kafka partition. A "resolved" timestamp is a guarantee that no (previously unseen) rows with a lower update timestamp will be emitted on that partition.

For example:

  1. {"__crdb__": {"updated": "1532377312562986715.0000000000"}, "id": 1, "name": "Petee H"}
  2. {"__crdb__": {"updated": "1532377306108205142.0000000000"}, "id": 2, "name": "Carl"}
  3. {"__crdb__": {"updated": "1532377358501715562.0000000000"}, "id": 3, "name": "Ernie"}
  4. {"__crdb__":{"resolved":"1532379887442299001.0000000000"}}
  5. {"__crdb__":{"resolved":"1532379888444290910.0000000000"}}
  6. {"__crdb__":{"resolved":"1532379889448662988.0000000000"}}
  7. ...
  8. {"__crdb__":{"resolved":"1532379922512859361.0000000000"}}
  9. {"__crdb__": {"updated": "1532379923319195777.0000000000"}, "id": 4, "name": "Lucky"}

  • With duplicates removed, an individual row is emitted in the same order as the transactions that updated it. However, this is not true for updates to two different rows, even two rows in the same table. Resolved timestamp notifications on every Kafka partition can be used to provide strong ordering and global consistency guarantees by buffering records in between timestamp closures.
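The buffering strategy described in the last bullet can be sketched as follows: hold each partition's records until a resolved timestamp arrives, then sort and release everything at or below it, since no previously unseen versions can appear on that partition afterward. A single-partition Python sketch (message shapes follow the JSON examples above; the short literal timestamps are illustrative, and a real consumer should compare timestamps numerically rather than as strings):

```python
import json

class ResolvedBuffer:
    """Buffer one Kafka partition's records between resolved-timestamp closures."""

    def __init__(self):
        self.pending = []

    def feed(self, payload):
        """Buffer an updated row; on a resolved message, release every
        buffered row at or below the resolved timestamp, in timestamp order."""
        msg = json.loads(payload)
        meta = msg["__crdb__"]
        if "resolved" not in meta:
            self.pending.append(msg)
            return []
        cutoff = meta["resolved"]
        ready = sorted(
            (m for m in self.pending if m["__crdb__"]["updated"] <= cutoff),
            key=lambda m: m["__crdb__"]["updated"],
        )
        self.pending = [m for m in self.pending
                        if m["__crdb__"]["updated"] > cutoff]
        return ready

buf = ResolvedBuffer()
buf.feed('{"__crdb__": {"updated": "2"}, "id": 1, "name": "Petee"}')  # buffered
buf.feed('{"__crdb__": {"updated": "1"}, "id": 2, "name": "Carl"}')   # buffered
ready = buf.feed('{"__crdb__": {"resolved": "3"}}')
# → releases Carl (updated 1) then Petee (updated 2), in timestamp order
```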

Because CockroachDB supports transactions that can affect any part of the cluster, it is not possible to horizontally divide the transaction log into independent changefeeds.

Avro schema changes

To ensure that the Avro schemas that CockroachDB publishes will work with the schema compatibility rules used by the Confluent schema registry, CockroachDB emits all fields in Avro as nullable unions. This is necessary because the Confluent Schema Registry applies its own backward- and forward-compatibility rules, which differ from Avro's; emitting every field as a nullable union satisfies both.

Note that the original CockroachDB column definition is also included in the schema as a doc field, so it's still possible to distinguish between a NOT NULL CockroachDB column and a NULL CockroachDB column.
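In practice, when a consumer renders a decoded Avro row as JSON, each non-null field arrives wrapped in a one-entry union map such as {"name": {"string": "Ernie"}}, while a SQL NULL arrives as null (this matches the Avro example output later on this page). A small Python helper to unwrap that rendering (a sketch; the column names are illustrative):

```python
def unwrap_union(value):
    """Unwrap a JSON-rendered Avro union value: {"string": "Carl"} -> "Carl".

    None stays None (a SQL NULL); anything not union-wrapped passes through.
    """
    if isinstance(value, dict) and len(value) == 1:
        return next(iter(value.values()))
    return value

def unwrap_row(row):
    return {col: unwrap_union(v) for col, v in row.items()}

# Shaped like the Avro example output later on this page:
row = unwrap_row({"id": 3, "name": {"string": "Ernie"}})
# → {"id": 3, "name": "Ernie"}
```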

Schema changes with column backfill

When schema changes with column backfill (e.g., adding a column with a default, adding a computed column, adding a NOT NULL column, dropping a column) are made to watched rows, the changefeed will emit some duplicates during the backfill. When it finishes, CockroachDB outputs all watched rows using the new schema. When using Avro, rows that have been backfilled by a schema change are always re-emitted.

For an example of a schema change with column backfill, start with the changefeed created in the example below:

  1. [1] {"id": 1, "name": "Petee H"}
  2. [2] {"id": 2, "name": "Carl"}
  3. [3] {"id": 3, "name": "Ernie"}

Add a column to the watched table:

  1. > ALTER TABLE office_dogs ADD COLUMN likes_treats BOOL DEFAULT TRUE;

The changefeed emits duplicate records 1, 2, and 3 before outputting the records using the new schema:

  1. [1] {"id": 1, "name": "Petee H"}
  2. [2] {"id": 2, "name": "Carl"}
  3. [3] {"id": 3, "name": "Ernie"}
  4. [1] {"id": 1, "name": "Petee H"} # Duplicate
  5. [2] {"id": 2, "name": "Carl"} # Duplicate
  6. [3] {"id": 3, "name": "Ernie"} # Duplicate
  7. [1] {"id": 1, "likes_treats": true, "name": "Petee H"}
  8. [2] {"id": 2, "likes_treats": true, "name": "Carl"}
  9. [3] {"id": 3, "likes_treats": true, "name": "Ernie"}

Enable rangefeeds to reduce latency

New in v19.1: Previously created changefeeds collect changes by periodically sending a request for any recent changes. Newly created changefeeds instead connect to a long-lived request (i.e., a rangefeed), which pushes changes as they happen. This reduces the latency of row changes and, for some workloads, also reduces transaction restarts on tables being watched by a changefeed.

To enable rangefeeds, set the kv.rangefeed.enabled cluster setting to true. Any newly created changefeed will error until this setting is enabled. Note that enabling rangefeeds currently has a small performance cost (about a 5-10% increase in latencies), whether or not the rangefeed is being used in a changefeed.

If you are experiencing an issue, you can revert to the previous behavior by setting changefeed.push.enabled to false. Note that this setting will be removed in a future release; if you have to use the fallback, please file a GitHub issue.

Note:

To enable rangefeeds for an existing changefeed, you must also restart the changefeed. For an enterprise changefeed, pause and resume the changefeed. For a core changefeed, cut the connection (CTRL+C) and reconnect using the cursor option.

The kv.closed_timestamp.target_duration cluster setting can be used with push changefeeds. Resolved timestamps will always be behind by at least this setting's duration; however, decreasing the duration leads to more transaction restarts in your cluster, which can affect performance.

Create a changefeed (Core)

New in v19.1: To create a core changefeed:

  1. > EXPERIMENTAL CHANGEFEED FOR name;

For more information, see CHANGEFEED FOR.

Configure a changefeed (Enterprise)

Create

To create a changefeed:

  1. > CREATE CHANGEFEED FOR TABLE name INTO 'scheme://host:port';

For more information, see CREATE CHANGEFEED.

Pause

To pause a changefeed:

  1. > PAUSE JOB job_id;

For more information, see PAUSE JOB.

Resume

To resume a paused changefeed:

  1. > RESUME JOB job_id;

For more information, see RESUME JOB.

Cancel

To cancel a changefeed:

  1. > CANCEL JOB job_id;

For more information, see CANCEL JOB.

Monitor a changefeed

Note:

Monitoring is only available for enterprise changefeeds.

Changefeed progress is exposed as a high-water timestamp that advances as the changefeed progresses. This is a guarantee that all changes before or at the timestamp have been emitted. You can monitor a changefeed by querying the crdb_internal.jobs table:

  1. > SELECT * FROM crdb_internal.jobs WHERE job_id = <job_id>;
  1. job_id | job_type | description | ... | high_water_timestamp | error | coordinator_id
  2. +--------------------+------------+------------------------------------------------------------------------+ ... +--------------------------------+-------+----------------+
  3. 383870400694353921 | CHANGEFEED | CREATE CHANGEFEED FOR TABLE office_dogs2 INTO 'kafka://localhost:9092' | ... | 1537279405671006870.0000000000 | | 1
  4. (1 row)

Note:

You can use the high-water timestamp to start a new changefeed where another ended.
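The high-water timestamp (like the updated and resolved timestamps elsewhere on this page) uses the cluster's HLC format: nanoseconds since the Unix epoch, followed by a logical counter after the decimal point. A small helper to convert one to wall-clock time (a sketch that discards the logical component):

```python
from datetime import datetime, timezone

def hlc_to_datetime(ts):
    """Convert an HLC timestamp string like
    '1537279405671006870.0000000000' to a UTC datetime."""
    nanos = int(ts.split(".")[0])
    return datetime.fromtimestamp(nanos / 1e9, tz=timezone.utc)

when = hlc_to_datetime("1537279405671006870.0000000000")
# the example timestamp above falls in September 2018
```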

Debug a changefeed

For changefeeds connected to Kafka, use log information to debug connection issues (e.g., kafka: client has run out of available brokers to talk to (Is your cluster reachable?)). Debug by looking for lines in the logs with [kafka-producer] in them:

  1. I190312 18:56:53.535646 585 vendor/github.com/Shopify/sarama/client.go:123 [kafka-producer] Initializing new client
  2. I190312 18:56:53.535714 585 vendor/github.com/Shopify/sarama/client.go:724 [kafka-producer] client/metadata fetching metadata for all topics from broker localhost:9092
  3. I190312 18:56:53.536730 569 vendor/github.com/Shopify/sarama/broker.go:148 [kafka-producer] Connected to broker at localhost:9092 (unregistered)
  4. I190312 18:56:53.537661 585 vendor/github.com/Shopify/sarama/client.go:500 [kafka-producer] client/brokers registered new broker #0 at 172.16.94.87:9092
  5. I190312 18:56:53.537686 585 vendor/github.com/Shopify/sarama/client.go:170 [kafka-producer] Successfully initialized new client
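When sifting a busy log, a quick filter for those lines might look like this (a sketch; in practice you would stream lines from the node's log file, whose location depends on your deployment):

```python
def kafka_producer_messages(lines):
    """Extract the message portion of [kafka-producer] log lines."""
    tag = "[kafka-producer]"
    return [line.split(tag, 1)[1].strip() for line in lines if tag in line]

log = [
    "I190312 18:56:53.535646 585 vendor/github.com/Shopify/sarama/client.go:123 [kafka-producer] Initializing new client",
    "I190312 18:56:53.600000 600 some/other/file.go:42 unrelated line",
]
# kafka_producer_messages(log) → ["Initializing new client"]
```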

Usage examples

Create a core changefeed

New in v19.1: In this example, you'll set up a core changefeed for a single-node cluster.

  • In a terminal window, start cockroach:
  1. $ cockroach start --insecure --listen-addr=localhost --background
  • As the root user, open the built-in SQL client:
  1. $ cockroach sql --url="postgresql://root@127.0.0.1:26257?sslmode=disable" --format=csv

Note:

Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a CANCEL QUERY statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default.

Note:

To determine how wide the columns need to be, the default table display format in cockroach sql buffers the results it receives from the server before printing them to the console. When consuming core changefeed data using cockroach sql, it's important to use a display format like csv that does not buffer its results.

  • Enable the kv.rangefeed.enabled cluster setting:
  1. > SET CLUSTER SETTING kv.rangefeed.enabled = true;
  • Create table foo:
  1. > CREATE TABLE foo (a INT PRIMARY KEY);
  • Insert a row into the table:
  1. > INSERT INTO foo VALUES (0);
  • Start the core changefeed:
  1. > EXPERIMENTAL CHANGEFEED FOR foo;
  1. table,key,value
  2. foo,[0],"{""after"": {""a"": 0}}"
  • In a new terminal, add another row:
  1. $ cockroach sql --insecure -e "INSERT INTO foo VALUES (1)"
  • Back in the terminal where the core changefeed is streaming, the following output has appeared:
  1. foo,[1],"{""after"": {""a"": 1}}"

Note that records may take a couple of seconds to display in the core changefeed.

  • To stop streaming the changefeed, enter CTRL+C into the terminal where the changefeed is running.

  • To stop cockroach, run:

  1. $ cockroach quit --insecure
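The csv rows printed by the core changefeed above can be consumed programmatically with a standard CSV reader plus a JSON decode of the key and value columns. A Python sketch, assuming the three-column table,key,value shape shown in this example:

```python
import csv
import io
import json

def parse_core_feed(csv_text):
    """Parse core-changefeed CSV rows into (table, key, after-state) tuples."""
    rows = []
    for table, key, value in csv.reader(io.StringIO(csv_text)):
        if table == "table":  # skip the header row
            continue
        rows.append((table, json.loads(key), json.loads(value)["after"]))
    return rows

# The output from the example above, as raw CSV text:
feed = '''table,key,value
foo,[0],"{""after"": {""a"": 0}}"
foo,[1],"{""after"": {""a"": 1}}"
'''
parsed = parse_core_feed(feed)
# → [("foo", [0], {"a": 0}), ("foo", [1], {"a": 1})]
```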

Create a core changefeed using Avro

New in v19.1: In this example, you'll set up a core changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the Confluent Schema Registry to store Avro schemas.

  • In a terminal window, start cockroach:
  1. $ cockroach start --insecure --listen-addr=localhost --background
  • Download and extract the Confluent Open Source platform (which includes Kafka). Move into the extracted confluent-<version> directory and start Confluent:
  1. $ ./bin/confluent start

Only zookeeper, kafka, and schema-registry are needed. To troubleshoot Confluent, see their docs.

  • As the root user, open the built-in SQL client:
  1. $ cockroach sql --url="postgresql://root@127.0.0.1:26257?sslmode=disable" --format=csv

Note:

Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a CANCEL QUERY statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default.

  • Enable the kv.rangefeed.enabled cluster setting:
  1. > SET CLUSTER SETTING kv.rangefeed.enabled = true;
  • Create table bar:
  1. > CREATE TABLE bar (a INT PRIMARY KEY);
  • Insert a row into the table:
  1. > INSERT INTO bar VALUES (0);
  • Start the core changefeed:
  1. > EXPERIMENTAL CHANGEFEED FOR bar WITH format = experimental_avro, confluent_schema_registry = 'http://localhost:8081';
  1. table,key,value
  2. bar,\000\000\000\000\001\002\000,\000\000\000\000\002\002\002\000
  • In a new terminal, add another row:
  1. $ cockroach sql --insecure -e "INSERT INTO bar VALUES (1)"
  • Back in the terminal where the core changefeed is streaming, the output will appear:
  1. bar,\000\000\000\000\001\002\002,\000\000\000\000\002\002\002\002

Note that records may take a couple of seconds to display in the core changefeed.

  • To stop streaming the changefeed, enter CTRL+C into the terminal where the changefeed is running.

  • To stop cockroach, run:

  1. $ cockroach quit --insecure
  • To stop Confluent, move into the extracted confluent-<version> directory and stop Confluent:
  1. $ ./bin/confluent stop

To stop all Confluent processes, use:

  1. $ ./bin/confluent destroy

Create a changefeed connected to Kafka

Note:

CREATE CHANGEFEED is an enterprise-only feature. For the core version, see the CHANGEFEED FOR example above.

In this example, you'll set up a changefeed for a single-node cluster that is connected to a Kafka sink.

  • In a terminal window, start cockroach:
  1. $ cockroach start --insecure --listen-addr=localhost --background
  • Download and extract the Confluent Open Source platform (which includes Kafka).

  • Move into the extracted confluent-<version> directory and start Confluent:

  1. $ ./bin/confluent start

Only zookeeper and kafka are needed. To troubleshoot Confluent, see their docs.

  • Create a Kafka topic:
  1. $ ./bin/kafka-topics \
  2. --create \
  3. --zookeeper localhost:2181 \
  4. --replication-factor 1 \
  5. --partitions 1 \
  6. --topic office_dogs

Note:

You are expected to create any Kafka topics with the necessary replication factor and number of partitions. Topics can be created manually, or Kafka brokers can be configured to automatically create topics with a default partition count and replication factor.

  • As the root user, open the built-in SQL client:
  1. $ cockroach sql --insecure
  • Set your organization name and enterprise license key:
  1. > SET CLUSTER SETTING cluster.organization = '<organization name>';
  1. > SET CLUSTER SETTING enterprise.license = '<secret>';
  • Enable the kv.rangefeed.enabled cluster setting:
  1. > SET CLUSTER SETTING kv.rangefeed.enabled = true;
  • Create a database called cdc_demo:
  1. > CREATE DATABASE cdc_demo;
  • Set the database as the default:
  1. > SET DATABASE = cdc_demo;
  • Create a table and add data:
  1. > CREATE TABLE office_dogs (
  2. id INT PRIMARY KEY,
  3. name STRING);
  1. > INSERT INTO office_dogs VALUES
  2. (1, 'Petee'),
  3. (2, 'Carl');
  1. > UPDATE office_dogs SET name = 'Petee H' WHERE id = 1;
  • Start the changefeed:
  1. > CREATE CHANGEFEED FOR TABLE office_dogs INTO 'kafka://localhost:9092';
  1. job_id
  2. +--------------------+
  3. 360645287206223873
  4. (1 row)

This will start up the changefeed in the background and return the job_id. The changefeed writes to Kafka.

  • In a new terminal, move into the extracted confluent-<version> directory and start watching the Kafka topic:
  1. $ ./bin/kafka-console-consumer \
  2. --bootstrap-server=localhost:9092 \
  3. --property print.key=true \
  4. --from-beginning \
  5. --topic=office_dogs
  1. [1] {"id": 1, "name": "Petee H"}
  2. [2] {"id": 2, "name": "Carl"}

Note that the initial scan displays the state of the table as of when the changefeed started (therefore, the initial value of "Petee" is omitted).

  • Back in the SQL client, insert more data:
  1. > INSERT INTO office_dogs VALUES (3, 'Ernie');
  • Back in the terminal where you're watching the Kafka topic, the following output has appeared:
  1. [3] {"id": 3, "name": "Ernie"}
  • When you are done, exit the SQL shell (\q).

  • To stop cockroach, run:

  1. $ cockroach quit --insecure
  • To stop Kafka, move into the extracted confluent-<version> directory and stop Confluent:
  1. $ ./bin/confluent stop

Create a changefeed connected to Kafka using Avro

Note:

CREATE CHANGEFEED is an enterprise-only feature. For the core version, see the CHANGEFEED FOR example above.

In this example, you'll set up a changefeed for a single-node cluster that is connected to a Kafka sink and emits Avro records.

  • In a terminal window, start cockroach:
  1. $ cockroach start --insecure --listen-addr=localhost --background
  • Download and extract the Confluent Open Source platform (which includes Kafka).

  • Move into the extracted confluent-<version> directory and start Confluent:

  1. $ ./bin/confluent start

Only zookeeper, kafka, and schema-registry are needed. To troubleshoot Confluent, see their docs.

  • Create a Kafka topic:
  1. $ ./bin/kafka-topics \
  2. --create \
  3. --zookeeper localhost:2181 \
  4. --replication-factor 1 \
  5. --partitions 1 \
  6. --topic office_dogs

Note:

You are expected to create any Kafka topics with the necessary replication factor and number of partitions. Topics can be created manually, or Kafka brokers can be configured to automatically create topics with a default partition count and replication factor.

  • As the root user, open the built-in SQL client:
  1. $ cockroach sql --insecure
  • Set your organization name and enterprise license key:
  1. > SET CLUSTER SETTING cluster.organization = '<organization name>';
  1. > SET CLUSTER SETTING enterprise.license = '<secret>';
  • Enable the kv.rangefeed.enabled cluster setting:
  1. > SET CLUSTER SETTING kv.rangefeed.enabled = true;
  • Create a database called cdc_demo:
  1. > CREATE DATABASE cdc_demo;
  • Set the database as the default:
  1. > SET DATABASE = cdc_demo;
  • Create a table and add data:
  1. > CREATE TABLE office_dogs (
  2. id INT PRIMARY KEY,
  3. name STRING);
  1. > INSERT INTO office_dogs VALUES
  2. (1, 'Petee'),
  3. (2, 'Carl');
  1. > UPDATE office_dogs SET name = 'Petee H' WHERE id = 1;
  • Start the changefeed:
  1. > CREATE CHANGEFEED FOR TABLE office_dogs INTO 'kafka://localhost:9092' WITH format = experimental_avro, confluent_schema_registry = 'http://localhost:8081';
  1. job_id
  2. +--------------------+
  3. 360645287206223873
  4. (1 row)

This will start up the changefeed in the background and return the job_id. The changefeed writes to Kafka.

  • In a new terminal, move into the extracted confluent-<version> directory and start watching the Kafka topic:
  1. $ ./bin/kafka-avro-console-consumer \
  2. --bootstrap-server=localhost:9092 \
  3. --property print.key=true \
  4. --from-beginning \
  5. --topic=office_dogs
  1. {"id":1} {"id":1,"name":{"string":"Petee H"}}
  2. {"id":2} {"id":2,"name":{"string":"Carl"}}

Note that the initial scan displays the state of the table as of when the changefeed started (therefore, the initial value of "Petee" is omitted).

  • Back in the SQL client, insert more data:
  1. > INSERT INTO office_dogs VALUES (3, 'Ernie');
  • Back in the terminal where you're watching the Kafka topic, the following output has appeared:
  1. {"id":3} {"id":3,"name":{"string":"Ernie"}}
  • When you are done, exit the SQL shell (\q).

  • To stop cockroach, run:

  1. $ cockroach quit --insecure
  • To stop Kafka, move into the extracted confluent-<version> directory and stop Confluent:
  1. $ ./bin/confluent stop

Create a changefeed connected to a cloud storage sink

Note:

CREATE CHANGEFEED is an enterprise-only feature. For the core version, see the CHANGEFEED FOR example above.

Warning:

This is an experimental feature. The interface and output are subject to change.

New in v19.1: In this example, you'll set up a changefeed for a single-node cluster that is connected to an AWS S3 sink. Note that you can set up changefeeds for any of these cloud storage providers.

  • In a terminal window, start cockroach:
  1. $ cockroach start --insecure --listen-addr=localhost --background
  • As the root user, open the built-in SQL client:
  1. $ cockroach sql --insecure
  • Set your organization name and enterprise license key:
  1. > SET CLUSTER SETTING cluster.organization = '<organization name>';
  1. > SET CLUSTER SETTING enterprise.license = '<secret>';
  • Enable the kv.rangefeed.enabled cluster setting:
  1. > SET CLUSTER SETTING kv.rangefeed.enabled = true;
  • Create a database called cdc_demo:
  1. > CREATE DATABASE cdc_demo;
  • Set the database as the default:
  1. > SET DATABASE = cdc_demo;
  • Create a table and add data:
  1. > CREATE TABLE office_dogs (
  2. id INT PRIMARY KEY,
  3. name STRING);
  1. > INSERT INTO office_dogs VALUES
  2. (1, 'Petee'),
  3. (2, 'Carl');
  1. > UPDATE office_dogs SET name = 'Petee H' WHERE id = 1;
  • Start the changefeed:
  1. > CREATE CHANGEFEED FOR TABLE office_dogs INTO 'experimental-s3://example-bucket-name/test?AWS_ACCESS_KEY_ID=enter_key_here&AWS_SECRET_ACCESS_KEY=enter_key_here' WITH updated, resolved='10s';
  1. job_id
  2. +--------------------+
  3. 360645287206223873
  4. (1 row)

This will start up the changefeed in the background and return the job_id. The changefeed writes to the S3 bucket.

  • To stop cockroach, run:

  1. $ cockroach quit --insecure

Known limitations

The following are limitations in the v19.1 release and will be addressed in the future:
