Connectors

This page describes how to use connectors in PyFlink and highlights the details to be aware of when using Flink connectors in Python programs.

Note: For general connector information and common configuration, please refer to the corresponding Java/Scala documentation.

Download connector and format jars

Since Flink is a Java/Scala-based project, connector and format implementations are available as jars that need to be specified as job dependencies.

    table_env.get_config().set("pipeline.jars", "file:///my/jar/path/connector.jar;file:///my/jar/path/json.jar")
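
If you are using the DataStream API instead, the jars can be attached to the StreamExecutionEnvironment directly. A minimal sketch, assuming the same placeholder jar paths as above:

    from pyflink.datastream import StreamExecutionEnvironment

    env = StreamExecutionEnvironment.get_execution_environment()
    # add_jars expects file:// URLs, like pipeline.jars above
    env.add_jars("file:///my/jar/path/connector.jar", "file:///my/jar/path/json.jar")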

How to use connectors

In PyFlink’s Table API, DDL is the recommended way to define sources and sinks, executed via the execute_sql() method on the TableEnvironment. This makes the table available for use by the application.

    source_ddl = """
        CREATE TABLE source_table(
            a VARCHAR,
            b INT
        ) WITH (
            'connector' = 'kafka',
            'topic' = 'source_topic',
            'properties.bootstrap.servers' = 'kafka:9092',
            'properties.group.id' = 'test_3',
            'scan.startup.mode' = 'latest-offset',
            'format' = 'json'
        )
        """

    sink_ddl = """
        CREATE TABLE sink_table(
            a VARCHAR
        ) WITH (
            'connector' = 'kafka',
            'topic' = 'sink_topic',
            'properties.bootstrap.servers' = 'kafka:9092',
            'format' = 'json'
        )
        """

    t_env.execute_sql(source_ddl)
    t_env.execute_sql(sink_ddl)

    t_env.sql_query("SELECT a FROM source_table") \
        .execute_insert("sink_table").wait()

Below is a complete example of how to use a Kafka source/sink and the JSON format in PyFlink.

    from pyflink.table import TableEnvironment, EnvironmentSettings


    def log_processing():
        env_settings = EnvironmentSettings.in_streaming_mode()
        t_env = TableEnvironment.create(env_settings)

        # specify connector and format jars
        t_env.get_config().set("pipeline.jars", "file:///my/jar/path/connector.jar;file:///my/jar/path/json.jar")

        source_ddl = """
            CREATE TABLE source_table(
                a VARCHAR,
                b INT
            ) WITH (
                'connector' = 'kafka',
                'topic' = 'source_topic',
                'properties.bootstrap.servers' = 'kafka:9092',
                'properties.group.id' = 'test_3',
                'scan.startup.mode' = 'latest-offset',
                'format' = 'json'
            )
            """

        sink_ddl = """
            CREATE TABLE sink_table(
                a VARCHAR
            ) WITH (
                'connector' = 'kafka',
                'topic' = 'sink_topic',
                'properties.bootstrap.servers' = 'kafka:9092',
                'format' = 'json'
            )
            """

        t_env.execute_sql(source_ddl)
        t_env.execute_sql(sink_ddl)

        t_env.sql_query("SELECT a FROM source_table") \
            .execute_insert("sink_table").wait()


    if __name__ == '__main__':
        log_processing()
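
If a job writes to more than one sink, the individual INSERT statements can be grouped into a statement set so that they are submitted as a single job. A minimal sketch building on the tables above, placed in the same scope as t_env (another_sink_table is a hypothetical second sink with the same schema as sink_table):

    # group several INSERT statements into one job
    stmt_set = t_env.create_statement_set()
    stmt_set.add_insert_sql("INSERT INTO sink_table SELECT a FROM source_table")
    stmt_set.add_insert_sql("INSERT INTO another_sink_table SELECT a FROM source_table")

    # a single call submits all statements together
    stmt_set.execute().wait()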

Predefined Sources and Sinks

Some data sources and sinks are built into Flink and are available out-of-the-box. These predefined data sources include reading from a Pandas DataFrame and ingesting data from collections. The predefined data sinks support writing to a Pandas DataFrame.

from/to Pandas

PyFlink Tables support conversion to and from a Pandas DataFrame.

    import pandas as pd
    import numpy as np
    from pyflink.table import TableEnvironment, EnvironmentSettings
    from pyflink.table.expressions import col

    t_env = TableEnvironment.create(EnvironmentSettings.in_batch_mode())

    # Create a PyFlink Table from a Pandas DataFrame
    pdf = pd.DataFrame(np.random.rand(1000, 2))
    table = t_env.from_pandas(pdf, ["a", "b"]).filter(col('a') > 0.5)

    # Convert the PyFlink Table to a Pandas DataFrame
    pdf = table.to_pandas()
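
Note that to_pandas triggers materialization of the table and collects the result to the client, so it should be used with care on large tables. One way to bound the result (a sketch using Table.limit) is to cap the number of rows before converting:

    # fetch at most 100 rows before converting to a DataFrame
    pdf = table.limit(100).to_pandas()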

from_elements()

from_elements() creates a table from a collection of elements. The element types must be acceptable atomic or composite types.

    from pyflink.table import DataTypes

    table_env.from_elements([(1, 'Hi'), (2, 'Hello')])

    # use the second parameter to specify custom field names
    table_env.from_elements([(1, 'Hi'), (2, 'Hello')], ['a', 'b'])

    # use the second parameter to specify a custom table schema
    table_env.from_elements([(1, 'Hi'), (2, 'Hello')],
                            DataTypes.ROW([DataTypes.FIELD("a", DataTypes.INT()),
                                           DataTypes.FIELD("b", DataTypes.STRING())]))

Each of the above calls returns a Table like:

    +----+-------+
    | a  | b     |
    +====+=======+
    | 1  | Hi    |
    +----+-------+
    | 2  | Hello |
    +----+-------+
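
To display such a result from a running script, one option is to execute the table and print it (a sketch; execute() submits the job and print() writes the rows to stdout):

    # trigger execution and print the rows
    table_env.from_elements([(1, 'Hi'), (2, 'Hello')], ['a', 'b']).execute().print()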

User-defined sources & sinks

In some cases, you may want to define custom sources and sinks. Currently, sources and sinks must be implemented in Java/Scala, but you can define a TableFactory to support their use via DDL. More details can be found in the Java/Scala documentation.
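
Once such a connector has been packaged into a jar, using it from PyFlink follows the same pattern as the built-in connectors. A hypothetical sketch, where my-custom-connector and the jar path are placeholders for your own factory identifier and artifact:

    # make the custom connector jar visible to the job
    t_env.get_config().set("pipeline.jars", "file:///my/jar/path/my-custom-connector.jar")

    # 'connector' must match the factory identifier declared by
    # your Java/Scala table factory
    t_env.execute_sql("""
        CREATE TABLE custom_table(
            a VARCHAR
        ) WITH (
            'connector' = 'my-custom-connector'
        )
    """)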