Apache Kafka SQL Connector

Scan Source: Unbounded
Sink: Streaming Append Mode

The Kafka connector allows for reading data from and writing data into Kafka topics.

Dependencies

To use the Kafka connector, the following dependencies are required, both for projects using a build automation tool (such as Maven or SBT) and for the SQL Client with SQL JAR bundles.

  • Kafka version: universal
  • Maven dependency: flink-connector-kafka_2.11
  • SQL Client JAR: Download

The Kafka connectors are not currently part of the binary distribution. See how to link with them for cluster execution here.

How to create a Kafka table

The example below shows how to create a Kafka table:

  CREATE TABLE KafkaTable (
    `user_id` BIGINT,
    `item_id` BIGINT,
    `behavior` STRING,
    `ts` TIMESTAMP(3) METADATA FROM 'timestamp'
  ) WITH (
    'connector' = 'kafka',
    'topic' = 'user_behavior',
    'properties.bootstrap.servers' = 'localhost:9092',
    'properties.group.id' = 'testGroup',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'csv'
  )

Available Metadata

The following connector metadata can be accessed as metadata columns in a table definition.

The R/W column defines whether a metadata field is readable (R) and/or writable (W). Read-only columns must be declared VIRTUAL to exclude them during an INSERT INTO operation.

  • topic (STRING NOT NULL, R): Topic name of the Kafka record.
  • partition (INT NOT NULL, R): Partition ID of the Kafka record.
  • headers (MAP<STRING, BYTES> NOT NULL, R/W): Headers of the Kafka record as a map of raw bytes.
  • leader-epoch (INT NULL, R): Leader epoch of the Kafka record, if available.
  • offset (BIGINT NOT NULL, R): Offset of the Kafka record in the partition.
  • timestamp (TIMESTAMP(3) WITH LOCAL TIME ZONE NOT NULL, R/W): Timestamp of the Kafka record.
  • timestamp-type (STRING NOT NULL, R): Timestamp type of the Kafka record. Either “NoTimestampType”, “CreateTime” (also set when writing metadata), or “LogAppendTime”.

The extended CREATE TABLE example demonstrates the syntax for exposing these metadata fields:

  CREATE TABLE KafkaTable (
    `event_time` TIMESTAMP(3) METADATA FROM 'timestamp',
    `partition` BIGINT METADATA VIRTUAL,
    `offset` BIGINT METADATA VIRTUAL,
    `user_id` BIGINT,
    `item_id` BIGINT,
    `behavior` STRING
  ) WITH (
    'connector' = 'kafka',
    'topic' = 'user_behavior',
    'properties.bootstrap.servers' = 'localhost:9092',
    'properties.group.id' = 'testGroup',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'csv'
  );

Format Metadata

The connector is able to expose metadata of the value format for reading. Format metadata keys are prefixed with 'value.'.

The following example shows how to access both Kafka and Debezium metadata fields:

  CREATE TABLE KafkaTable (
    `event_time` TIMESTAMP(3) METADATA FROM 'value.source.timestamp' VIRTUAL, -- from Debezium format
    `origin_table` STRING METADATA FROM 'value.source.table' VIRTUAL, -- from Debezium format
    `partition_id` BIGINT METADATA FROM 'partition' VIRTUAL, -- from Kafka connector
    `offset` BIGINT METADATA VIRTUAL, -- from Kafka connector
    `user_id` BIGINT,
    `item_id` BIGINT,
    `behavior` STRING
  ) WITH (
    'connector' = 'kafka',
    'topic' = 'user_behavior',
    'properties.bootstrap.servers' = 'localhost:9092',
    'properties.group.id' = 'testGroup',
    'scan.startup.mode' = 'earliest-offset',
    'value.format' = 'debezium-json'
  );

Connector Options

  • connector — required, no default, String. Specify which connector to use; for Kafka use 'kafka'.
  • topic — required for sinks, optional for sources (use 'topic-pattern' instead if not set), no default, String. Topic name(s) to read data from when the table is used as a source. A semicolon-separated topic list such as 'topic-1;topic-2' is also supported for sources. Note that only one of 'topic-pattern' and 'topic' can be specified for sources. When the table is used as a sink, the topic name is the topic to write data to; topic lists are not supported for sinks.
  • topic-pattern — optional, no default, String. The regular expression for a pattern of topic names to read from. All topics with names that match the specified regular expression will be subscribed by the consumer when the job starts running. Note that only one of 'topic-pattern' and 'topic' can be specified for sources.
  • properties.bootstrap.servers — required, no default, String. Comma-separated list of Kafka brokers.
  • properties.group.id — required by sources, optional for sinks, no default, String. The ID of the consumer group for the Kafka source.
  • properties.* — optional, no default, String. Sets and passes arbitrary Kafka configuration. The suffix name must match a configuration key defined in the Kafka configuration documentation. Flink removes the 'properties.' prefix and passes the transformed key and value to the underlying Kafka client. For example, you can disable automatic topic creation via 'properties.allow.auto.create.topics' = 'false'. Some configurations cannot be set this way because Flink overrides them, e.g. 'key.deserializer' and 'value.deserializer'.
  • format — required, no default, String. The format used to deserialize and serialize the value part of Kafka messages. Please refer to the formats page for more details and format options. Note: either this option or 'value.format' is required.
  • key.format — optional, no default, String. The format used to deserialize and serialize the key part of Kafka messages. Please refer to the formats page for more details and format options. Note: if a key format is defined, the 'key.fields' option is required as well; otherwise the Kafka records will have an empty key.
  • key.fields — optional, default [], List<String>. An explicit list of physical columns from the table schema that configures the data type for the key format. By default this list is empty, so the key is undefined. The list should look like 'field1;field2'.
  • key.fields-prefix — optional, no default, String. A custom prefix for all fields of the key format, to avoid name clashes with fields of the value format. By default the prefix is empty. If a custom prefix is defined, both the table schema and 'key.fields' work with prefixed names. When the data type of the key format is constructed, the prefix is removed and the non-prefixed names are used within the key format. Note that this option requires 'value.fields-include' to be set to 'EXCEPT_KEY'.
  • value.format — required, no default, String. The format used to deserialize and serialize the value part of Kafka messages. Please refer to the formats page for more details and more format options. Note: either this option or the 'format' option is required.
  • value.fields-include — optional, default ALL, Enum (possible values: [ALL, EXCEPT_KEY]). Defines a strategy for dealing with key columns in the data type of the value format. With 'ALL', all physical columns of the table schema are included in the value format, which means that key columns appear in the data types of both the key and value formats.
  • scan.startup.mode — optional, default group-offsets, String. Startup mode for the Kafka consumer. Valid values are 'earliest-offset', 'latest-offset', 'group-offsets', 'timestamp' and 'specific-offsets'. See Start Reading Position below for more details.
  • scan.startup.specific-offsets — optional, no default, String. Specifies the offset for each partition in the 'specific-offsets' startup mode, e.g. 'partition:0,offset:42;partition:1,offset:300'.
  • scan.startup.timestamp-millis — optional, no default, Long. The epoch timestamp (in milliseconds) to start from in the 'timestamp' startup mode.
  • scan.topic-partition-discovery.interval — optional, no default, Duration. Interval at which the consumer periodically discovers dynamically created Kafka topics and partitions.
  • sink.partitioner — optional, default 'default', String. Output partitioning from Flink's partitions into Kafka's partitions. Valid values are:
      ◦ default: use the Kafka default partitioner to partition records.
      ◦ fixed: each Flink partition ends up in at most one Kafka partition.
      ◦ round-robin: a Flink partition is distributed to Kafka partitions sticky round-robin. It only works when the records' keys are not specified.
      ◦ a custom FlinkKafkaPartitioner subclass, e.g. 'org.mycompany.MyPartitioner'.
    See Sink Partitioning below for more details.
  • sink.semantic — optional, default at-least-once, String. Defines the delivery semantics for the Kafka sink. Valid enumerations are 'at-least-once', 'exactly-once' and 'none'. See Consistency Guarantees below for more details.
  • sink.parallelism — optional, no default, Integer. Defines the parallelism of the Kafka sink operator. By default the parallelism is determined by the framework, using the same parallelism as the upstream chained operator.

Features

Key and Value Formats

Both the key and value part of a Kafka record can be serialized to and deserialized from raw bytes using one of the given formats.

Value Format

Since a key is optional in Kafka records, the following statement reads and writes records with a configured value format but without a key format. The 'format' option is a synonym for 'value.format'. All format options are prefixed with the format identifier.

  CREATE TABLE KafkaTable (
    `ts` TIMESTAMP(3) METADATA FROM 'timestamp',
    `user_id` BIGINT,
    `item_id` BIGINT,
    `behavior` STRING
  ) WITH (
    'connector' = 'kafka',
    ...
    'format' = 'json',
    'json.ignore-parse-errors' = 'true'
  )

The value format will be configured with the following data type:

  ROW<`user_id` BIGINT, `item_id` BIGINT, `behavior` STRING>

Key and Value Format

The following example shows how to specify and configure key and value formats. The format options are prefixed with either 'key.' or 'value.' plus the format identifier.

  CREATE TABLE KafkaTable (
    `ts` TIMESTAMP(3) METADATA FROM 'timestamp',
    `user_id` BIGINT,
    `item_id` BIGINT,
    `behavior` STRING
  ) WITH (
    'connector' = 'kafka',
    ...
    'key.format' = 'json',
    'key.json.ignore-parse-errors' = 'true',
    'key.fields' = 'user_id;item_id',
    'value.format' = 'json',
    'value.json.fail-on-missing-field' = 'false',
    'value.fields-include' = 'ALL'
  )

The key format includes the fields listed in 'key.fields' (using ';' as the delimiter) in the same order. Thus, it will be configured with the following data type:

  ROW<`user_id` BIGINT, `item_id` BIGINT>

Since the value format is configured with 'value.fields-include' = 'ALL', key fields will also end up in the value format’s data type:

  ROW<`user_id` BIGINT, `item_id` BIGINT, `behavior` STRING>

Overlapping Format Fields

The connector cannot split the table's columns into key and value fields based on schema information if both the key and value formats contain fields of the same name. The 'key.fields-prefix' option allows you to give key columns a unique name in the table schema while keeping the original names when configuring the key format.

The following example shows a key and value format that both contain a version field:

  CREATE TABLE KafkaTable (
    `k_version` INT,
    `k_user_id` BIGINT,
    `k_item_id` BIGINT,
    `version` INT,
    `behavior` STRING
  ) WITH (
    'connector' = 'kafka',
    ...
    'key.format' = 'json',
    'key.fields-prefix' = 'k_',
    'key.fields' = 'k_version;k_user_id;k_item_id',
    'value.format' = 'json',
    'value.fields-include' = 'EXCEPT_KEY'
  )

The value format must be configured in 'EXCEPT_KEY' mode. The formats will be configured with the following data types:

  key format:
  ROW<`version` INT, `user_id` BIGINT, `item_id` BIGINT>

  value format:
  ROW<`version` INT, `behavior` STRING>

Topic and Partition Discovery

The config options topic and topic-pattern specify the topics or topic pattern to consume for the source. The config option topic accepts a topic list using a semicolon separator, like ‘topic-1;topic-2’. The config option topic-pattern uses a regular expression to discover the matching topics. For example, if topic-pattern is test-topic-[0-9], then all topics with names that match the specified regular expression (starting with test-topic- and ending with a single digit) will be subscribed by the consumer when the job starts running.

To allow the consumer to discover dynamically created topics after the job has started running, set a non-negative value for scan.topic-partition-discovery.interval. This allows the consumer to discover partitions of new topics with names that also match the specified pattern.
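As a sketch, the two discovery options might be combined as follows (the table name, schema, and 10-minute interval are illustrative, not prescribed):

```sql
-- Illustrative source table: subscribes to all topics matching the pattern
-- and re-checks for newly created matching topics/partitions every 10 minutes.
CREATE TABLE DiscoveredTopics (
  `user_id` BIGINT,
  `behavior` STRING
) WITH (
  'connector' = 'kafka',
  'topic-pattern' = 'test-topic-[0-9]',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'testGroup',
  'scan.topic-partition-discovery.interval' = '10min',
  'format' = 'csv'
);
```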

Please refer to Kafka DataStream Connector documentation for more about topic and partition discovery.

Note that topic list and topic pattern only work in sources. In sinks, Flink currently only supports a single topic.

Start Reading Position

The config option scan.startup.mode specifies the startup mode for Kafka consumer. The valid enumerations are:

  • group-offsets: start from committed offsets in ZK / Kafka brokers of a specific consumer group.
  • earliest-offset: start from the earliest offset possible.
  • latest-offset: start from the latest offset.
  • timestamp: start from user-supplied timestamp for each partition.
  • specific-offsets: start from user-supplied specific offsets for each partition.

The default option value is group-offsets, which means consuming from the last committed offsets in ZK / Kafka brokers.

If timestamp is specified, another config option scan.startup.timestamp-millis is required to specify a specific startup timestamp in milliseconds since January 1, 1970 00:00:00.000 GMT.

If specific-offsets is specified, another config option scan.startup.specific-offsets is required to specify specific startup offsets for each partition, e.g. an option value partition:0,offset:42;partition:1,offset:300 indicates offset 42 for partition 0 and offset 300 for partition 1.
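Putting this together, a source resuming from fixed offsets might be declared as follows (the table name and schema are illustrative):

```sql
-- Illustrative source table: resumes at offset 42 in partition 0
-- and offset 300 in partition 1.
CREATE TABLE KafkaFromOffsets (
  `user_id` BIGINT,
  `behavior` STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_behavior',
  'properties.bootstrap.servers' = 'localhost:9092',
  'scan.startup.mode' = 'specific-offsets',
  'scan.startup.specific-offsets' = 'partition:0,offset:42;partition:1,offset:300',
  'format' = 'csv'
);
```

For the timestamp mode, the last two scan options would instead be 'scan.startup.mode' = 'timestamp' together with 'scan.startup.timestamp-millis' set to the desired epoch milliseconds.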

Changelog Source

Flink natively supports Kafka as a changelog source. If the messages in a Kafka topic are change events captured from other databases using CDC tools, you can use a CDC format to interpret the messages as INSERT/UPDATE/DELETE rows in the Flink SQL system. Flink provides two CDC formats, debezium-json and canal-json, to interpret change events captured by Debezium and Canal. A changelog source is useful in many cases, such as synchronizing incremental data from databases to other systems, auditing logs, maintaining materialized views on databases, temporal joins against the changing history of a database table, and so on. See the debezium-json and canal-json pages for more about how to use the CDC formats.
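A minimal sketch of a changelog source follows; the topic, table name, and schema are hypothetical, and the essential choice is the 'format' = 'debezium-json' option:

```sql
-- Illustrative changelog table: Debezium envelopes in the topic are
-- interpreted as INSERT/UPDATE/DELETE rows.
CREATE TABLE products_changelog (
  `id` BIGINT,
  `name` STRING,
  `price` DECIMAL(10, 2)
) WITH (
  'connector' = 'kafka',
  'topic' = 'mydb.products',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'testGroup',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'debezium-json'
);
```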

Sink Partitioning

The config option sink.partitioner specifies output partitioning from Flink’s partitions into Kafka’s partitions. By default, Flink uses the Kafka default partitioner to partition records. It uses the sticky partition strategy for records with null keys and a murmur2 hash to compute the partition for records with a key defined.

In order to control the routing of rows into partitions, a custom sink partitioner can be provided. The ‘fixed’ partitioner will write the records in the same Flink partition into the same partition, which could reduce the cost of the network connections.
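For example, a sink using the ‘fixed’ partitioner might be declared like this (the table and topic names are illustrative):

```sql
-- Illustrative sink table: each Flink partition writes to at most one
-- Kafka partition, keeping the number of network connections low.
CREATE TABLE KafkaSink (
  `user_id` BIGINT,
  `behavior` STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_behavior_out',
  'properties.bootstrap.servers' = 'localhost:9092',
  'sink.partitioner' = 'fixed',
  'format' = 'csv'
);
```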

Consistency guarantees

By default, a Kafka sink ingests data with at-least-once guarantees into a Kafka topic if the query is executed with checkpointing enabled.

With Flink’s checkpointing enabled, the Kafka connector can provide exactly-once delivery guarantees.

Besides enabling Flink’s checkpointing, you can choose among three different modes of operation by passing the appropriate sink.semantic option:

  • NONE: Flink will not guarantee anything. Produced records can be lost or they can be duplicated.
  • AT_LEAST_ONCE (default setting): This guarantees that no records will be lost (although they can be duplicated).
  • EXACTLY_ONCE: Kafka transactions will be used to provide exactly-once semantics. Whenever you write to Kafka using transactions, remember to set the desired isolation.level (read_committed or read_uncommitted, the latter being the default) for any application consuming records from Kafka.
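A sketch of an exactly-once sink definition (the table and topic names are illustrative; checkpointing must be enabled for this to take effect):

```sql
-- Illustrative sink table using Kafka transactions.
CREATE TABLE TransactionalSink (
  `user_id` BIGINT,
  `behavior` STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_behavior_out',
  'properties.bootstrap.servers' = 'localhost:9092',
  'sink.semantic' = 'exactly-once',
  'format' = 'csv'
);
```

A Flink Kafka source reading this topic that should only see committed records would additionally set 'properties.isolation.level' = 'read_committed' in its WITH clause.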

Please refer to Kafka documentation for more caveats about delivery guarantees.

Source Per-Partition Watermarks

Flink supports emitting per-partition watermarks for Kafka. Watermarks are generated inside the Kafka consumer. Per-partition watermarks are merged in the same way as watermarks are merged during streaming shuffles. The output watermark of the source is determined by the minimum watermark among the partitions it reads. If some partitions in the topics are idle, the watermark generator will not advance. You can alleviate this problem by setting the 'table.exec.source.idle-timeout' option in the table configuration.
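As an illustration, the idle timeout can be set in the table configuration, e.g. via the SQL Client (the exact SET syntax varies between Flink versions; the quoted form below is one variant):

```sql
-- Treat a partition as idle after 30 s without records, so the
-- overall watermark can still advance.
SET 'table.exec.source.idle-timeout' = '30s';
```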

Please refer to Kafka watermark strategies for more details.

Data Type Mapping

Kafka stores message keys and values as raw bytes, so Kafka itself has no schema or data types. Kafka messages are deserialized and serialized by formats, e.g. csv, json, avro. Thus, the data type mapping is determined by the specific format in use. Please refer to the Formats pages for more details.