Confluent Avro Format

Format: Serialization Schema
Format: Deserialization Schema

The Avro Schema Registry (avro-confluent) format allows you to read records that were serialized by the io.confluent.kafka.serializers.KafkaAvroSerializer and to write records that can in turn be read by the io.confluent.kafka.serializers.KafkaAvroDeserializer.

When reading (deserializing) a record with this format, the Avro writer schema is fetched from the configured Confluent Schema Registry based on the schema version id encoded in the record, while the reader schema is inferred from the table schema.

When writing (serializing) a record with this format, the Avro schema is inferred from the table schema and used to retrieve a schema id to be encoded with the data. The lookup is performed in the configured Confluent Schema Registry under the subject given in avro-confluent.schema-registry.subject.

The Avro Schema Registry format can only be used in conjunction with the Apache Kafka SQL connector.

Dependencies

In order to use the Avro Schema Registry format, the following dependencies are required, both for projects using a build automation tool (such as Maven or SBT) and for the SQL Client with SQL JAR bundles.

Maven dependency: flink-avro-confluent-registry
SQL Client JAR: Download
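
For Maven projects, the declaration looks like the following sketch; the ${flink.version} property is a placeholder (an assumption here) and should resolve to the version of your Flink distribution:

  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-avro-confluent-registry</artifactId>
    <!-- placeholder: use the version of your Flink distribution -->
    <version>${flink.version}</version>
  </dependency>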

How to create a table with Avro-Confluent format

Here is an example of creating a table using the Kafka connector and the Confluent Avro format.

  CREATE TABLE user_behavior (
    user_id BIGINT,
    item_id BIGINT,
    category_id BIGINT,
    behavior STRING,
    ts TIMESTAMP(3)
  ) WITH (
    'connector' = 'kafka',
    'properties.bootstrap.servers' = 'localhost:9092',
    'topic' = 'user_behavior',
    'format' = 'avro-confluent',
    'avro-confluent.schema-registry.url' = 'http://localhost:8081',
    'avro-confluent.schema-registry.subject' = 'user_behavior'
  )
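
Once created, the table can be used both as a sink and as a source. The following is a minimal sketch; the upstream page_views table is hypothetical:

  -- Writing: the Avro schema is derived from the table schema and the schema id
  -- is looked up under the 'user_behavior' subject.
  INSERT INTO user_behavior
  SELECT user_id, item_id, category_id, behavior, ts
  FROM page_views;

  -- Reading: the writer schema is fetched from the registry via the schema id
  -- encoded in each record; the reader schema comes from the table schema.
  SELECT user_id, behavior, ts FROM user_behavior;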

Format Options

format
  Required: required
  Default: (none)
  Type: String
  Description: Specify what format to use; here it should be 'avro-confluent'.

avro-confluent.schema-registry.url
  Required: required
  Default: (none)
  Type: String
  Description: The URL of the Confluent Schema Registry to fetch/register schemas.

avro-confluent.schema-registry.subject
  Required: required by sink
  Default: (none)
  Type: String
  Description: The Confluent Schema Registry subject under which to register the schema used by this format during serialization.
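
Since avro-confluent.schema-registry.subject is only required on the sink side, a table used purely as a source can omit it. A minimal sketch reusing the example topic above (the table and column names are hypothetical):

  CREATE TABLE user_behavior_source (
    user_id BIGINT,
    behavior STRING
  ) WITH (
    'connector' = 'kafka',
    'properties.bootstrap.servers' = 'localhost:9092',
    'topic' = 'user_behavior',
    'format' = 'avro-confluent',
    -- no subject option: when reading, the writer schema is resolved from the
    -- schema id encoded in each record
    'avro-confluent.schema-registry.url' = 'http://localhost:8081'
  )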

Data Type Mapping

Currently, Apache Flink always uses the table schema to derive the Avro reader schema during deserialization and the Avro writer schema during serialization. Explicitly defining an Avro schema is not supported yet. See the Apache Avro Format for the mapping between Avro and Flink DataTypes.

In addition to the types listed there, Flink supports reading/writing nullable types. Flink maps a nullable type to Avro union(something, null), where something is the Avro type converted from the Flink type.
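
For illustration, here is a sketch of a table whose columns exercise this rule (the topic and subject names are hypothetical); the Avro types in the comments follow the mapping described above:

  CREATE TABLE nullable_example (
    id   BIGINT NOT NULL,  -- non-nullable: maps to Avro long
    name STRING            -- nullable: maps to Avro union(string, null)
  ) WITH (
    'connector' = 'kafka',
    'properties.bootstrap.servers' = 'localhost:9092',
    'topic' = 'nullable_example',
    'format' = 'avro-confluent',
    'avro-confluent.schema-registry.url' = 'http://localhost:8081',
    'avro-confluent.schema-registry.subject' = 'nullable_example'
  )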

You can refer to the Avro Specification for more information about Avro types.