General User-defined Functions (UDFs)

User-defined functions are an important feature, because they significantly extend the expressiveness of Python Table API programs.

Note: To execute Python user-defined functions, Python (3.5, 3.6, 3.7 or 3.8) with PyFlink installed is required on both the client side and the cluster side.

Scalar Functions (ScalarFunction)

PyFlink supports Python scalar functions in Python Table API programs. To define a Python scalar function, you can extend the base class ScalarFunction in pyflink.table.udf and implement an evaluation method. The behavior of a Python scalar function is defined by the method named eval, which supports variable arguments, e.g. eval(*args).

The following example shows how to define your own Python hash function, register it in the TableEnvironment, and use it in a job.

from pyflink.table import BatchTableEnvironment, DataTypes
from pyflink.table.expressions import call
from pyflink.table.udf import ScalarFunction, udf

class HashCode(ScalarFunction):

    def __init__(self):
        self.factor = 12

    def eval(self, s):
        return hash(s) * self.factor

table_env = BatchTableEnvironment.create(env)

hash_code = udf(HashCode(), result_type=DataTypes.BIGINT())

# use the Python function in Python Table API
my_table.select(my_table.string, my_table.bigint, hash_code(my_table.bigint), call(hash_code, my_table.bigint))

# use the Python function in SQL API
table_env.create_temporary_function("hash_code", udf(HashCode(), result_type=DataTypes.BIGINT()))
table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")

In addition, Java/Scala scalar functions are also supported in Python Table API programs.

'''
Java code:

// The Java class must have a public no-argument constructor and be loadable by the current Java classloader.
public class HashCode extends ScalarFunction {
    private int factor = 12;

    public int eval(String s) {
        return s.hashCode() * factor;
    }
}
'''

from pyflink.table import BatchTableEnvironment
from pyflink.table.expressions import call

table_env = BatchTableEnvironment.create(env)

# register the Java function
table_env.create_java_temporary_function("hash_code", "my.java.function.HashCode")

# use the Java function in Python Table API
my_table.select(call('hash_code', my_table.string))

# use the Java function in SQL API
table_env.sql_query("SELECT string, bigint, hash_code(string) FROM MyTable")

Besides extending the base class ScalarFunction, there are several other ways to define a Python scalar function. The following examples show the different ways of defining a Python scalar function that takes two parameters of type bigint as input and returns their sum as the result.

import functools

from pyflink.table import DataTypes
from pyflink.table.udf import ScalarFunction, udf

# option 1: extending the base class `ScalarFunction`
class Add(ScalarFunction):
    def eval(self, i, j):
        return i + j

add = udf(Add(), result_type=DataTypes.BIGINT())

# option 2: Python function
@udf(result_type=DataTypes.BIGINT())
def add(i, j):
    return i + j

# option 3: lambda function
add = udf(lambda i, j: i + j, result_type=DataTypes.BIGINT())

# option 4: callable function
class CallableAdd(object):
    def __call__(self, i, j):
        return i + j

add = udf(CallableAdd(), result_type=DataTypes.BIGINT())

# option 5: partial function
def partial_add(i, j, k):
    return i + j + k

add = udf(functools.partial(partial_add, k=1), result_type=DataTypes.BIGINT())

# register the Python function
table_env.create_temporary_function("add", add)

# use the function in Python Table API
my_table.select("add(a, b)")

# You can also use the Python function in Python Table API directly
my_table.select(add(my_table.a, my_table.b))

Table Functions (TableFunction)

Similar to a Python user-defined scalar function, a user-defined table function takes zero, one, or multiple columns as input parameters. However, in contrast to a scalar function, it can return an arbitrary number of rows as output instead of a single value. The return type of a Python user-defined table function can be an Iterable, Iterator or generator.

The following example shows how to define your own Python table function, register it in the TableEnvironment, and use it in a job.

from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import DataTypes, StreamTableEnvironment
from pyflink.table.udf import TableFunction, udtf

class Split(TableFunction):
    def eval(self, string):
        for s in string.split(" "):
            yield s, len(s)

env = StreamExecutionEnvironment.get_execution_environment()
table_env = StreamTableEnvironment.create(env)
my_table = ...  # type: Table, table schema: [a: String]

# register the Python Table Function
split = udtf(Split(), result_types=[DataTypes.STRING(), DataTypes.INT()])

# use the Python Table Function in Python Table API
my_table.join_lateral(split(my_table.a).alias("word, length"))
my_table.left_outer_join_lateral(split(my_table.a).alias("word, length"))

# use the Python Table Function in SQL API
table_env.create_temporary_function("split", udtf(Split(), result_types=[DataTypes.STRING(), DataTypes.INT()]))
table_env.sql_query("SELECT a, word, length FROM MyTable, LATERAL TABLE(split(a)) as T(word, length)")
table_env.sql_query("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL TABLE(split(a)) as T(word, length) ON TRUE")

In addition, Java/Scala table functions are also supported in Python Table API programs.

'''
Java code:

// The generic type "Tuple2<String, Integer>" determines the schema of the returned table to be (String, Integer).
// The Java class must have a public no-argument constructor and be loadable by the current Java classloader.
public class Split extends TableFunction<Tuple2<String, Integer>> {
    private String separator = " ";

    public void eval(String str) {
        for (String s : str.split(separator)) {
            // use collect(...) to emit a row
            collect(new Tuple2<String, Integer>(s, s.length()));
        }
    }
}
'''

from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment
from pyflink.table.expressions import call, col

env = StreamExecutionEnvironment.get_execution_environment()
table_env = StreamTableEnvironment.create(env)
my_table = ...  # type: Table, table schema: [a: String]

# register the Java function
table_env.create_java_temporary_function("split", "my.java.function.Split")

# use the table function in Python Table API. "alias" specifies the field names of the table.
my_table.join_lateral(call('split', my_table.a).alias("word, length")).select(my_table.a, col('word'), col('length'))
my_table.left_outer_join_lateral(call('split', my_table.a).alias("word, length")).select(my_table.a, col('word'), col('length'))

# use the table function in SQL with the LATERAL and TABLE keywords
# CROSS JOIN a table function (equivalent to "join" in Table API)
table_env.sql_query("SELECT a, word, length FROM MyTable, LATERAL TABLE(split(a)) as T(word, length)")
# LEFT JOIN a table function (equivalent to "left_outer_join" in Table API)
table_env.sql_query("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL TABLE(split(a)) as T(word, length) ON TRUE")

Like Python scalar functions, you can use the five ways above to define Python table functions.

Note: The only difference is that the return type of a Python table function must be an iterable, iterator or generator.

from pyflink.table import DataTypes
from pyflink.table.udf import udtf

# option 1: generator function
@udtf(result_types=DataTypes.BIGINT())
def generator_func(x):
    yield 1
    yield 2

# option 2: return iterator
@udtf(result_types=DataTypes.BIGINT())
def iterator_func(x):
    return range(5)

# option 3: return iterable
@udtf(result_types=DataTypes.BIGINT())
def iterable_func(x):
    result = [1, 2, 3]
    return result

Aggregate Functions (AggregateFunction)

A user-defined aggregate function (UDAGG) maps scalar values of multiple rows to a new scalar value.

NOTE: Currently the general user-defined aggregate function is only supported in the GroupBy aggregation of the blink planner in streaming mode. For batch mode or windowed aggregation, it’s currently not supported and it is recommended to use the Vectorized Aggregate Functions.

The behavior of an aggregate function is centered around the concept of an accumulator. The accumulator is an intermediate data structure that stores the aggregated values until a final aggregation result is computed.

For each set of rows that needs to be aggregated, the runtime will create an empty accumulator by calling create_accumulator(). Subsequently, the accumulate(...) method of the aggregate function will be called for each input row to update the accumulator. Once all rows have been processed, the get_value(...) method of the aggregate function will be called to compute the aggregated result.

The following example illustrates the aggregation process:

[Figure: UDAGG mechanism]

In the above example, we assume a table that contains data about beverages. The table consists of three columns (id, name, and price) and 5 rows. We would like to find the highest price of all beverages in the table, i.e., perform a max() aggregation.
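The same process can be sketched in plain Python, without the Flink runtime. The beverage rows below are invented for illustration (the source only describes the three columns and the row count), and a single variable stands in for the accumulator:

```python
# Plain-Python sketch of the max() aggregation described above.
# The concrete (id, name, price) values are hypothetical.
rows = [(1, "Milk", 3), (2, "Coffee", 5), (3, "Tea", 4),
        (4, "Cola", 2), (5, "Juice", 6)]

acc = None  # accumulator: the highest price seen so far
for _id, _name, price in rows:
    if acc is None or price > acc:
        acc = price  # accumulate(...) step

print(acc)  # get_value(...) step: the highest price
```

This mirrors the create_accumulator / accumulate / get_value sequence: the accumulator starts empty, is updated once per row, and is read out at the end.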

In order to define an aggregate function, one has to extend the base class AggregateFunction in pyflink.table and implement the evaluation method named accumulate(...). The result type and accumulator type of the aggregate function can be specified by one of the following two approaches:

  • Implement the method named get_result_type() and get_accumulator_type().
  • Wrap the function instance with the decorator udaf in pyflink.table.udf and specify the parameters result_type and accumulator_type.

The following example shows how to define your own aggregate function and call it in a query.

from pyflink.common import Row
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import AggregateFunction, DataTypes, StreamTableEnvironment
from pyflink.table.expressions import call
from pyflink.table.udf import udaf

class WeightedAvg(AggregateFunction):

    def create_accumulator(self):
        # Row(sum, count)
        return Row(0, 0)

    def get_value(self, accumulator):
        if accumulator[1] == 0:
            return None
        else:
            return accumulator[0] / accumulator[1]

    def accumulate(self, accumulator, value, weight):
        accumulator[0] += value * weight
        accumulator[1] += weight

    def retract(self, accumulator, value, weight):
        accumulator[0] -= value * weight
        accumulator[1] -= weight

    def get_result_type(self):
        return DataTypes.BIGINT()

    def get_accumulator_type(self):
        return DataTypes.ROW([
            DataTypes.FIELD("f0", DataTypes.BIGINT()),
            DataTypes.FIELD("f1", DataTypes.BIGINT())])

env = StreamExecutionEnvironment.get_execution_environment()
table_env = StreamTableEnvironment.create(env)

# the result type and accumulator type can also be specified in the udaf decorator:
# weighted_avg = udaf(WeightedAvg(), result_type=DataTypes.BIGINT(), accumulator_type=...)
weighted_avg = udaf(WeightedAvg())
t = table_env.from_elements([(1, 2, "Lee"),
                             (3, 4, "Jay"),
                             (5, 6, "Jay"),
                             (7, 8, "Lee")]).alias("value", "count", "name")

# call function "inline" without registration in Table API
result = t.group_by(t.name).select(weighted_avg(t.value, t.count).alias("avg")).to_pandas()
print(result)

# register function
table_env.create_temporary_function("weighted_avg", WeightedAvg())

# call registered function in Table API
result = t.group_by(t.name).select(call("weighted_avg", t.value, t.count).alias("avg")).to_pandas()
print(result)

# register table
table_env.create_temporary_view("source", t)

# call registered function in SQL
result = table_env.sql_query(
    "SELECT weighted_avg(`value`, `count`) AS avg FROM source GROUP BY name").to_pandas()
print(result)

The accumulate(...) method of our WeightedAvg class takes three input arguments. The first one is the accumulator and the other two are user-defined inputs. In order to calculate a weighted average value, the accumulator needs to store the weighted sum and count of all the data that have already been accumulated. In our example, we use a Row object as the accumulator. Accumulators will be managed by Flink’s checkpointing mechanism and are restored in case of failover to ensure exactly-once semantics.
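The lifecycle the runtime drives for WeightedAvg can be sketched in plain Python, with a plain list standing in for the Row accumulator (no Flink runtime required):

```python
# Plain-Python sketch of the accumulator lifecycle for WeightedAvg.
def create_accumulator():
    return [0, 0]  # [weighted sum, sum of weights]

def accumulate(acc, value, weight):
    acc[0] += value * weight
    acc[1] += weight

def get_value(acc):
    return acc[0] / acc[1] if acc[1] != 0 else None

acc = create_accumulator()
# the two "Lee" rows from the example above: (value, count)
for value, weight in [(1, 2), (7, 8)]:
    accumulate(acc, value, weight)

print(get_value(acc))  # (1*2 + 7*8) / (2 + 8) = 5.8
```

This is exactly the per-group computation behind the "Lee" row in the query results above.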

Mandatory and Optional Methods

The following methods are mandatory for each AggregateFunction:

  • create_accumulator()
  • accumulate(...)
  • get_value(...)

The following methods of AggregateFunction are required depending on the use case:

  • retract(...) is required when there are operations that could generate retraction messages before the current aggregation operation, e.g. group aggregate, outer join.
    This method is optional, but it is strongly recommended to be implemented to ensure the UDAF can be used in any use case.
  • get_result_type() and get_accumulator_type() are required if the result type and accumulator type are not specified in the udaf decorator.
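To illustrate why retract(...) matters, the sketch below (plain Python, reusing the weighted-sum accumulator shape from WeightedAvg) shows how a retraction message undoes a previously accumulated row, leaving the accumulator as if that row had never arrived:

```python
# Plain-Python sketch: retract(...) is the inverse of accumulate(...).
def accumulate(acc, value, weight):
    acc[0] += value * weight
    acc[1] += weight

def retract(acc, value, weight):
    acc[0] -= value * weight
    acc[1] -= weight

acc = [0, 0]
accumulate(acc, 3, 4)   # acc == [12, 4]
accumulate(acc, 5, 6)   # acc == [42, 10]
retract(acc, 3, 4)      # an upstream operator retracted the first row
print(acc)              # [30, 6] -- as if only (5, 6) had been accumulated
```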

ListView and MapView

If an accumulator needs to store large amounts of data, pyflink.table.ListView and pyflink.table.MapView could be used instead of list and dict. These two data structures provide similar functionality to list and dict, but usually have better performance by leveraging Flink’s state backend to eliminate unnecessary state access. You can use them by declaring DataTypes.LIST_VIEW(...) and DataTypes.MAP_VIEW(...) in the accumulator type, e.g.:

from pyflink.common import Row
from pyflink.table import AggregateFunction, DataTypes, ListView

class ListViewConcatAggregateFunction(AggregateFunction):

    def get_value(self, accumulator):
        # the ListView is iterable
        return accumulator[1].join(accumulator[0])

    def create_accumulator(self):
        return Row(ListView(), '')

    def accumulate(self, accumulator, *args):
        accumulator[1] = args[1]
        # the ListView supports add, clear and iterate operations
        accumulator[0].add(args[0])

    def get_accumulator_type(self):
        return DataTypes.ROW([
            # declare the first column of the accumulator as a string ListView
            DataTypes.FIELD("f0", DataTypes.LIST_VIEW(DataTypes.STRING())),
            # the second column holds the separator string used in get_value
            DataTypes.FIELD("f1", DataTypes.STRING())])

    def get_result_type(self):
        return DataTypes.STRING()

Currently there are two limitations when using ListView and MapView:

  1. The accumulator must be a Row.
  2. The ListView and MapView must be the first level children of the Row accumulator.

Please refer to the documentation of the corresponding classes for more information about this advanced feature.

NOTE: To reduce the cost of data transmission between the Python UDF worker and the Java process caused by accessing data in Flink state (e.g. accumulators and data views), there is a cache layer between the raw state handler and the Python state backend. You can adjust the values of these configuration options to tune the behavior of the cache layer for best performance: python.state.cache-size, python.map-state.read-cache-size, python.map-state.write-cache-size, python.map-state.iterate-response-batch-size. For more details please refer to the Python Configuration documentation.
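As a sketch, these options can be set through the TableEnvironment's configuration; the value "1000" below is an arbitrary illustration, not a recommendation:

```python
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
table_env = StreamTableEnvironment.create(env)

# enlarge the state cache so that more accumulator / data-view entries are
# served from the Python-side cache instead of round-tripping to the Java process
config = table_env.get_config().get_configuration()
config.set_string("python.state.cache-size", "1000")
config.set_string("python.map-state.read-cache-size", "1000")
```

The appropriate sizes depend on the accumulator size and available memory, so they should be tuned against the workload rather than copied verbatim.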