User-defined Functions

User-defined functions are an important feature because they significantly extend the expressiveness of Python Table API programs.

NOTE: Python UDF execution requires Python 3.5, 3.6 or 3.7 with PyFlink installed. PyFlink is required on both the client side and the cluster side.

Scalar Functions

Python scalar functions can be used in Python Table API programs. To define a Python scalar function, extend the base class ScalarFunction in pyflink.table.udf and implement an evaluation method named eval; this method defines the behavior of the function. The evaluation method can support variable arguments, such as eval(*args).
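For instance, a minimal sketch of a variable-argument evaluation method might look like this (the SumAll name and the BIGINT types are illustrative, not part of the example below):

    class SumAll(ScalarFunction):
        def eval(self, *args):
            # sums however many arguments the function is called with
            return sum(args)

    # registered here with two BIGINT inputs; the same eval would also work
    # for a different arity if registered with different input types
    sum_all = udf(SumAll(), [DataTypes.BIGINT(), DataTypes.BIGINT()], DataTypes.BIGINT())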

The following example shows how to define your own Python hash code function, register it in the TableEnvironment, and call it in a query. Note that you can configure your scalar function via a constructor before it is registered:

    from pyflink.dataset import ExecutionEnvironment
    from pyflink.table import BatchTableEnvironment, DataTypes
    from pyflink.table.udf import udf, ScalarFunction

    class HashCode(ScalarFunction):

        def __init__(self):
            self.factor = 12

        def eval(self, s):
            return hash(s) * self.factor

    env = ExecutionEnvironment.get_execution_environment()
    table_env = BatchTableEnvironment.create(env)
    my_table = ...  # type: Table, table schema: [string: String, bigint: BigInt]

    # configure the off-heap memory of the current taskmanager so that
    # the Python worker can use off-heap memory
    table_env.get_config().get_configuration().set_string("taskmanager.memory.task.off-heap.size", '80m')

    # register the Python function
    table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), DataTypes.BIGINT()))

    # use the Python function in Python Table API
    my_table.select("string, bigint, bigint.hash_code(), hash_code(bigint)")

    # use the Python function in SQL API
    table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")

Note If RocksDB is not used as the state backend, you can also configure the Python worker to use the managed memory of the taskmanager by setting python.fn-execution.memory.managed to true. In that case there is no need to set taskmanager.memory.task.off-heap.size.
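In that case, a single configuration line along these lines (reusing the table_env from the example above) replaces the off-heap setting:

    # let the Python worker use the managed memory of the taskmanager
    # (only applicable when RocksDB is not used as the state backend)
    table_env.get_config().get_configuration().set_string("python.fn-execution.memory.managed", 'true')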

Java/Scala scalar functions can also be used in Python Table API programs.

    '''
    Java code:

    // The Java class must have a public no-argument constructor and be
    // loadable via the current Java classloader.
    public class HashCode extends ScalarFunction {
        private int factor = 12;

        public int eval(String s) {
            return s.hashCode() * factor;
        }
    }
    '''

    env = ExecutionEnvironment.get_execution_environment()
    table_env = BatchTableEnvironment.create(env)

    # configure the off-heap memory of the current taskmanager so that
    # the Python worker can use off-heap memory
    table_env.get_config().get_configuration().set_string("taskmanager.memory.task.off-heap.size", '80m')

    # register the Java function
    table_env.register_java_function("hash_code", "my.java.function.HashCode")

    # use the Java function in Python Table API
    my_table.select("string.hash_code(), hash_code(string)")

    # use the Java function in SQL API
    table_env.sql_query("SELECT string, bigint, hash_code(string) FROM MyTable")

Note If RocksDB is not used as the state backend, you can also configure the Python worker to use the managed memory of the taskmanager by setting python.fn-execution.memory.managed to true. In that case there is no need to set taskmanager.memory.task.off-heap.size.

There are many ways to define a Python scalar function besides extending the base class ScalarFunction. The following examples show the different ways to define a Python scalar function that takes two columns of BIGINT as input parameters and returns their sum as the result.

    import functools

    from pyflink.table import DataTypes
    from pyflink.table.udf import udf, ScalarFunction

    # option 1: extending the base class `ScalarFunction`
    class Add(ScalarFunction):
        def eval(self, i, j):
            return i + j

    add = udf(Add(), [DataTypes.BIGINT(), DataTypes.BIGINT()], DataTypes.BIGINT())

    # option 2: Python function
    @udf(input_types=[DataTypes.BIGINT(), DataTypes.BIGINT()], result_type=DataTypes.BIGINT())
    def add(i, j):
        return i + j

    # option 3: lambda function
    add = udf(lambda i, j: i + j, [DataTypes.BIGINT(), DataTypes.BIGINT()], DataTypes.BIGINT())

    # option 4: callable function
    class CallableAdd(object):
        def __call__(self, i, j):
            return i + j

    add = udf(CallableAdd(), [DataTypes.BIGINT(), DataTypes.BIGINT()], DataTypes.BIGINT())

    # option 5: partial function
    def partial_add(i, j, k):
        return i + j + k

    add = udf(functools.partial(partial_add, k=1),
              [DataTypes.BIGINT(), DataTypes.BIGINT()],
              DataTypes.BIGINT())

    # register the Python function
    table_env.register_function("add", add)

    # use the function in Python Table API
    my_table.select("add(a, b)")

Table Functions

Similar to a Python user-defined scalar function, a user-defined table function takes zero, one, or multiple scalar values as input parameters. However, in contrast to a scalar function, it can return an arbitrary number of rows as output instead of a single value. The return type of a Python UDTF can be an Iterable, an Iterator or a generator.

The following example shows how to define your own Python multi-emit function, register it in the TableEnvironment, and call it in a query.

    from pyflink.datastream import StreamExecutionEnvironment
    from pyflink.table import StreamTableEnvironment, DataTypes
    from pyflink.table.udf import udtf, TableFunction

    class Split(TableFunction):
        def eval(self, string):
            for s in string.split(" "):
                yield s, len(s)

    env = StreamExecutionEnvironment.get_execution_environment()
    table_env = StreamTableEnvironment.create(env)
    my_table = ...  # type: Table, table schema: [a: String]

    # configure the off-heap memory of the current taskmanager so that
    # the Python worker can use off-heap memory
    table_env.get_config().get_configuration().set_string("taskmanager.memory.task.off-heap.size", '80m')

    # register the Python Table Function
    table_env.register_function("split", udtf(Split(), DataTypes.STRING(), [DataTypes.STRING(), DataTypes.INT()]))

    # use the Python Table Function in Python Table API
    my_table.join_lateral("split(a) as (word, length)")
    my_table.left_outer_join_lateral("split(a) as (word, length)")

    # use the Python Table Function in SQL API
    table_env.sql_query("SELECT a, word, length FROM MyTable, LATERAL TABLE(split(a)) as T(word, length)")
    table_env.sql_query("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL TABLE(split(a)) as T(word, length) ON TRUE")

Note If RocksDB is not used as the state backend, you can also configure the Python worker to use the managed memory of the taskmanager by setting python.fn-execution.memory.managed to true. In that case there is no need to set taskmanager.memory.task.off-heap.size.

Java/Scala table functions can also be used in Python Table API programs.

    '''
    Java code:

    // The generic type "Tuple2<String, Integer>" determines the schema of the
    // returned table as (String, Integer).
    // The Java class must have a public no-argument constructor and be
    // loadable via the current Java classloader.
    public class Split extends TableFunction<Tuple2<String, Integer>> {
        private String separator = " ";

        public void eval(String str) {
            for (String s : str.split(separator)) {
                // use collect(...) to emit a row
                collect(new Tuple2<String, Integer>(s, s.length()));
            }
        }
    }
    '''

    env = StreamExecutionEnvironment.get_execution_environment()
    table_env = StreamTableEnvironment.create(env)
    my_table = ...  # type: Table, table schema: [a: String]

    # configure the off-heap memory of the current taskmanager so that
    # the Python worker can use off-heap memory
    table_env.get_config().get_configuration().set_string("taskmanager.memory.task.off-heap.size", '80m')

    # Register the Java function.
    table_env.register_java_function("split", "my.java.function.Split")

    # Use the table function in the Python Table API. "as" specifies the field names of the table.
    my_table.join_lateral("split(a) as (word, length)").select("a, word, length")
    my_table.left_outer_join_lateral("split(a) as (word, length)").select("a, word, length")

    # Use the table function in SQL with the LATERAL and TABLE keywords.
    # CROSS JOIN a table function (equivalent to "join_lateral" in Table API).
    table_env.sql_query("SELECT a, word, length FROM MyTable, LATERAL TABLE(split(a)) as T(word, length)")
    # LEFT JOIN a table function (equivalent to "left_outer_join_lateral" in Table API).
    table_env.sql_query("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL TABLE(split(a)) as T(word, length) ON TRUE")

Note If RocksDB is not used as the state backend, you can also configure the Python worker to use the managed memory of the taskmanager by setting python.fn-execution.memory.managed to true. In that case there is no need to set taskmanager.memory.task.off-heap.size.

Python table functions can be defined in the same five ways shown above for Python scalar functions.

Note The only difference is that the return type of a Python table function must be an iterable, an iterator or a generator.

    from pyflink.table import DataTypes
    from pyflink.table.udf import udtf

    # option 1: generator function
    @udtf(input_types=DataTypes.BIGINT(), result_types=DataTypes.BIGINT())
    def generator_func(x):
        yield 1
        yield 2

    # option 2: return iterator
    @udtf(input_types=DataTypes.BIGINT(), result_types=DataTypes.BIGINT())
    def iterator_func(x):
        return range(5)

    # option 3: return iterable
    @udtf(input_types=DataTypes.BIGINT(), result_types=DataTypes.BIGINT())
    def iterable_func(x):
        result = [1, 2, 3]
        return result

    table_env.register_function("iterable_func", iterable_func)
    table_env.register_function("iterator_func", iterator_func)
    table_env.register_function("generator_func", generator_func)
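The registered functions can then be applied just like the split function in the earlier examples; for instance, a usage sketch assuming a table MyTable with a BIGINT column named x:

    # illustrative usage; MyTable and its column x are assumptions
    table_env.sql_query("SELECT x, a FROM MyTable, LATERAL TABLE(generator_func(x)) as T(a)")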