Python API Tutorial

In this guide we will start from scratch and go from setting up a Flink Python project to running a Python Table API program.

Setting up a Python Project

First, fire up your favorite IDE and create a Python project, then install the PyFlink package. Please see Build PyFlink for more details about this.
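If a prebuilt PyFlink package is available for your Flink version, installing it into the project's environment with pip typically looks like the following (a minimal sketch; it assumes the PyPI package name apache-flink, otherwise build PyFlink from source as described above):

  $ python -m pip install apache-flink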

The first step in a Flink Python Table API program is to create a BatchTableEnvironment (or a StreamTableEnvironment if you are writing a streaming job). It is the main entry point for Python Table API jobs.

  exec_env = ExecutionEnvironment.get_execution_environment()
  exec_env.set_parallelism(1)
  t_config = TableConfig()
  t_env = BatchTableEnvironment.create(exec_env, t_config)

The ExecutionEnvironment (or StreamExecutionEnvironment if you are writing a streaming job) can be used to set execution parameters, such as the restart strategy, default parallelism, etc.

The TableConfig can be used to set configuration parameters such as the built-in catalog name, the thresholds that control code generation, etc.

Next we will create a source table and a sink table.

  t_env.connect(FileSystem().path('/tmp/input')) \
      .with_format(OldCsv()
                   .line_delimiter(' ')
                   .field('word', DataTypes.STRING())) \
      .with_schema(Schema()
                   .field('word', DataTypes.STRING())) \
      .register_table_source('mySource')

  t_env.connect(FileSystem().path('/tmp/output')) \
      .with_format(OldCsv()
                   .field_delimiter('\t')
                   .field('word', DataTypes.STRING())
                   .field('count', DataTypes.BIGINT())) \
      .with_schema(Schema()
                   .field('word', DataTypes.STRING())
                   .field('count', DataTypes.BIGINT())) \
      .register_table_sink('mySink')

This registers a table named mySource and a table named mySink in the table environment. The table mySource has only one column, word, and represents the words read from the file /tmp/input. The table mySink has two columns, word and count, and writes data to the file /tmp/output, with \t as the field delimiter.

Then we need to create a job which reads input from table mySource, performs some operations and writes the results to table mySink.

  t_env.scan('mySource') \
      .group_by('word') \
      .select('word, count(1)') \
      .insert_into('mySink')

The last thing is to start the actual Flink Python Table API job. All operations, such as creating sources, transformations and sinks, only build up a graph of internal operations. Only when t_env.execute(job_name) is called will this graph of operations be submitted to a cluster or executed on your local machine.

  t_env.execute("tutorial_job")

The complete code so far is as follows:

  from pyflink.dataset import ExecutionEnvironment
  from pyflink.table import TableConfig, DataTypes, BatchTableEnvironment
  from pyflink.table.descriptors import Schema, OldCsv, FileSystem

  # set up the batch execution environment and the table environment
  exec_env = ExecutionEnvironment.get_execution_environment()
  exec_env.set_parallelism(1)
  t_config = TableConfig()
  t_env = BatchTableEnvironment.create(exec_env, t_config)

  # register the source table, reading space-delimited words from /tmp/input
  t_env.connect(FileSystem().path('/tmp/input')) \
      .with_format(OldCsv()
                   .line_delimiter(' ')
                   .field('word', DataTypes.STRING())) \
      .with_schema(Schema()
                   .field('word', DataTypes.STRING())) \
      .register_table_source('mySource')

  # register the sink table, writing tab-delimited results to /tmp/output
  t_env.connect(FileSystem().path('/tmp/output')) \
      .with_format(OldCsv()
                   .field_delimiter('\t')
                   .field('word', DataTypes.STRING())
                   .field('count', DataTypes.BIGINT())) \
      .with_schema(Schema()
                   .field('word', DataTypes.STRING())
                   .field('count', DataTypes.BIGINT())) \
      .register_table_sink('mySink')

  # count the occurrences of each word and write the result to the sink
  t_env.scan('mySource') \
      .group_by('word') \
      .select('word, count(1)') \
      .insert_into('mySink')

  # submit the job for execution
  t_env.execute("tutorial_job")

You can run this example in your IDE or on the command line (suppose the job script file is WordCount.py).
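Since the job reads its input from /tmp/input and the source uses a space as the line delimiter, you may first want to create a small sample input file, for example (a sketch for a Unix-like shell; the words themselves are arbitrary):

  $ printf "flink pyflink flink " > /tmp/input

Then run the job: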

  $ python WordCount.py

The command builds and runs the Python Table API program in a local mini-cluster. You can also submit the Python Table API program to a remote cluster; refer to the Job Submission Examples for more details.
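For reference, a typical submission with the Flink command line client looks roughly like the following, run from the Flink distribution directory (a sketch; the exact options depend on your Flink version and cluster setup, see the job submission docs):

  $ ./bin/flink run -py WordCount.py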

This should get you started with writing your own Flink Python Table API programs. To learn more about the Python Table API, refer to the Flink Python Table API Docs for more details.