General User-defined Functions (UDF)

User-defined functions are important features, because they significantly extend the expressiveness of Python Table API programs.

NOTE: To execute Python user-defined functions, Python 3.6 or above (3.6, 3.7 or 3.8) is required on both the client side and the cluster side, and PyFlink must be installed.

Scalar Functions (ScalarFunction)

PyFlink supports Python scalar functions in Python Table API programs. To define a Python scalar function, you can extend the base class ScalarFunction in pyflink.table.udf and implement an evaluation method named eval. The behavior of a Python scalar function is defined by this eval method, which supports variable arguments, e.g. eval(*args).
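For instance, a minimal sketch of a scalar function whose eval method accepts a variable number of arguments (the class name SumAll is hypothetical):

    from pyflink.table import DataTypes
    from pyflink.table.udf import ScalarFunction, udf

    # hypothetical example: sum an arbitrary number of BIGINT arguments
    class SumAll(ScalarFunction):
        def eval(self, *args):
            return sum(args)

    sum_all = udf(SumAll(), result_type=DataTypes.BIGINT())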

The following example shows how to define your own Python hash function, register it in the TableEnvironment, and use it in a job.

    from pyflink.table.expressions import call, col
    from pyflink.table import DataTypes, TableEnvironment, EnvironmentSettings
    from pyflink.table.udf import ScalarFunction, udf

    class HashCode(ScalarFunction):

        def __init__(self):
            self.factor = 12

        def eval(self, s):
            return hash(s) * self.factor

    settings = EnvironmentSettings.in_batch_mode()
    table_env = TableEnvironment.create(settings)

    hash_code = udf(HashCode(), result_type=DataTypes.BIGINT())

    my_table = ...  # type: Table, table schema: [string: String, bigint: BigInt]

    # use the Python function in the Python Table API
    my_table.select(col("string"), col("bigint"), hash_code(col("bigint")), call(hash_code, col("bigint")))

    # use the Python function in the SQL API
    table_env.create_temporary_function("hash_code", udf(HashCode(), result_type=DataTypes.BIGINT()))
    table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")

In addition, Java/Scala scalar functions can also be used in Python Table API programs.

    '''
    Java code:

    // The Java class must have a public no-argument constructor
    // and must be loadable via the current Java classloader.
    public class HashCode extends ScalarFunction {
        private int factor = 12;

        public int eval(String s) {
            return s.hashCode() * factor;
        }
    }
    '''

    from pyflink.table.expressions import call, col
    from pyflink.table import TableEnvironment, EnvironmentSettings

    settings = EnvironmentSettings.in_batch_mode()
    table_env = TableEnvironment.create(settings)

    my_table = ...  # type: Table, table schema: [string: String, bigint: BigInt]

    # register the Java function
    table_env.create_java_temporary_function("hash_code", "my.java.function.HashCode")

    # use the Java function in the Python Table API
    my_table.select(call('hash_code', col("string")))

    # use the Java function in the SQL API
    table_env.sql_query("SELECT string, bigint, hash_code(string) FROM MyTable")

Besides extending the base class ScalarFunction, there are several other ways to define a Python scalar function. The following examples show the different ways to define a Python scalar function that takes two columns of type bigint as input parameters and returns their sum as the result.

    import functools

    from pyflink.table import DataTypes, TableEnvironment, EnvironmentSettings
    from pyflink.table.expressions import call, col
    from pyflink.table.udf import ScalarFunction, udf

    settings = EnvironmentSettings.in_batch_mode()
    table_env = TableEnvironment.create(settings)
    my_table = ...  # type: Table, table schema: [a: BigInt, b: BigInt]

    # option 1: extending the base class ScalarFunction
    class Add(ScalarFunction):
        def eval(self, i, j):
            return i + j

    add = udf(Add(), result_type=DataTypes.BIGINT())

    # option 2: a plain Python function
    @udf(result_type=DataTypes.BIGINT())
    def add(i, j):
        return i + j

    # option 3: a lambda function
    add = udf(lambda i, j: i + j, result_type=DataTypes.BIGINT())

    # option 4: a callable object
    class CallableAdd(object):
        def __call__(self, i, j):
            return i + j

    add = udf(CallableAdd(), result_type=DataTypes.BIGINT())

    # option 5: a partial function
    def partial_add(i, j, k):
        return i + j + k

    add = udf(functools.partial(partial_add, k=1), result_type=DataTypes.BIGINT())

    # register the Python function
    table_env.create_temporary_function("add", add)

    # use the function in the Python Table API
    my_table.select(call('add', col('a'), col('b')))

    # the Python function can also be used directly in the Python Table API
    my_table.select(add(col('a'), col('b')))

Table Functions (TableFunction)

Similar to a Python user-defined scalar function, a user-defined table function takes zero, one, or multiple columns as input parameters. However, in contrast to a scalar function, it can return an arbitrary number of rows as output instead of a single value. The return type of a Python user-defined table function can be an Iterable, an Iterator, or a generator.

The following example shows how to define your own Python table function, register it in the TableEnvironment, and use it in a job.

    from pyflink.table.expressions import col
    from pyflink.table import DataTypes, TableEnvironment, EnvironmentSettings
    from pyflink.table.udf import TableFunction, udtf

    class Split(TableFunction):
        def eval(self, string):
            for s in string.split(" "):
                yield s, len(s)

    env_settings = EnvironmentSettings.in_streaming_mode()
    table_env = TableEnvironment.create(env_settings)
    my_table = ...  # type: Table, table schema: [a: String]

    # create the Python table function
    split = udtf(Split(), result_types=[DataTypes.STRING(), DataTypes.INT()])

    # use the Python table function in the Python Table API
    my_table.join_lateral(split(col("a")).alias("word", "length"))
    my_table.left_outer_join_lateral(split(col("a")).alias("word", "length"))

    # use the Python table function in the SQL API
    table_env.create_temporary_function("split", udtf(Split(), result_types=[DataTypes.STRING(), DataTypes.INT()]))
    table_env.sql_query("SELECT a, word, length FROM MyTable, LATERAL TABLE(split(a)) as T(word, length)")
    table_env.sql_query("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL TABLE(split(a)) as T(word, length) ON TRUE")

In addition, Java/Scala table functions can also be used in Python Table API programs.

    '''
    Java code:

    // The generic type "Tuple2<String, Integer>" determines that the output type
    // of the table function is (String, Integer).
    // The Java class must have a public no-argument constructor
    // and must be loadable via the current Java classloader.
    public class Split extends TableFunction<Tuple2<String, Integer>> {
        private String separator = " ";

        public void eval(String str) {
            for (String s : str.split(separator)) {
                // use collect(...) to emit a row
                collect(new Tuple2<String, Integer>(s, s.length()));
            }
        }
    }
    '''

    from pyflink.table.expressions import call, col
    from pyflink.table import TableEnvironment, EnvironmentSettings

    env_settings = EnvironmentSettings.in_streaming_mode()
    table_env = TableEnvironment.create(env_settings)
    my_table = ...  # type: Table, table schema: [a: String]

    # register the Java function
    table_env.create_java_temporary_function("split", "my.java.function.Split")

    # use the table function in the Python Table API; "alias" specifies the field names of the table
    my_table.join_lateral(call('split', col('a')).alias("word", "length")).select(col('a'), col('word'), col('length'))
    my_table.left_outer_join_lateral(call('split', col('a')).alias("word", "length")).select(col('a'), col('word'), col('length'))

    # use the table function in SQL with the LATERAL and TABLE keywords
    # CROSS JOIN a table function (equivalent to "join" in the Table API)
    table_env.sql_query("SELECT a, word, length FROM MyTable, LATERAL TABLE(split(a)) as T(word, length)")
    # LEFT JOIN a table function (equivalent to "left_outer_join" in the Table API)
    table_env.sql_query("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL TABLE(split(a)) as T(word, length) ON TRUE")

Like Python scalar functions, you can use any of the above five ways to define Python table functions.

NOTE: The only difference is that the return type of a Python table function must be an iterable, an iterator, or a generator.

    from pyflink.table import DataTypes
    from pyflink.table.udf import udtf

    # option 1: a generator function
    @udtf(result_types=DataTypes.BIGINT())
    def generator_func(x):
        yield 1
        yield 2

    # option 2: returning an iterator
    @udtf(result_types=DataTypes.BIGINT())
    def iterator_func(x):
        return range(5)

    # option 3: returning an iterable
    @udtf(result_types=DataTypes.BIGINT())
    def iterable_func(x):
        result = [1, 2, 3]
        return result

Aggregate Functions (AggregateFunction)

A user-defined aggregate function (UDAGG) maps scalar values of multiple rows to a new scalar value.

NOTE: Currently the general user-defined aggregate function is only supported in the GroupBy aggregation and Group Window Aggregation in streaming mode. For batch mode, it’s currently not supported and it is recommended to use the Vectorized Aggregate Functions.
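For reference, a minimal sketch of such a vectorized aggregate function, declared with func_type="pandas" so that its input column arrives as a pandas.Series:

    from pyflink.table import DataTypes
    from pyflink.table.udf import udaf

    # a vectorized (Pandas) aggregate; works in batch mode as well
    @udaf(result_type=DataTypes.FLOAT(), func_type="pandas")
    def mean_udaf(v):
        return v.mean()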

The behavior of an aggregate function is centered around the concept of an accumulator. The accumulator is an intermediate data structure that stores the aggregated values until a final aggregation result is computed.

For each set of rows that needs to be aggregated, the runtime will create an empty accumulator by calling create_accumulator(). Subsequently, the accumulate(...) method of the aggregate function will be called for each input row to update the accumulator. Currently, after each row has been processed, the get_value(...) method of the aggregate function will be called to compute the aggregated result.

The following example illustrates the aggregation process:

[Figure: UDAGG mechanism]

In the above example, we assume a table that contains data about beverages. The table consists of three columns (id, name, and price) and 5 rows. We would like to find the highest price of all beverages in the table, i.e., perform a max() aggregation.
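A minimal sketch of such a max() aggregation, assuming the price column is of type BIGINT (the class and variable names are illustrative; the mechanics of the API are explained below):

    from pyflink.table import AggregateFunction, DataTypes
    from pyflink.table.udf import udaf

    class Max(AggregateFunction):

        def create_accumulator(self):
            # holds the highest price seen so far
            return [None]

        def accumulate(self, accumulator, price):
            if price is not None and (accumulator[0] is None or price > accumulator[0]):
                accumulator[0] = price

        def get_value(self, accumulator):
            return accumulator[0]

    max_price = udaf(Max(), result_type=DataTypes.BIGINT(),
                     accumulator_type=DataTypes.ARRAY(DataTypes.BIGINT()))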

In order to define an aggregate function, one has to extend the base class AggregateFunction in pyflink.table and implement the evaluation method named accumulate(...). The result type and accumulator type of the aggregate function can be specified by one of the following two approaches:

  • Implement the methods named get_result_type() and get_accumulator_type().
  • Wrap the function instance with the decorator udaf in pyflink.table.udf and specify the parameters result_type and accumulator_type.

The following example shows how to define your own aggregate function and call it in a query.

    from pyflink.common import Row
    from pyflink.table import AggregateFunction, DataTypes, TableEnvironment, EnvironmentSettings
    from pyflink.table.expressions import call, col, lit
    from pyflink.table.udf import udaf
    from pyflink.table.window import Tumble

    class WeightedAvg(AggregateFunction):

        def create_accumulator(self):
            # Row(sum, count)
            return Row(0, 0)

        def get_value(self, accumulator):
            if accumulator[1] == 0:
                return None
            else:
                return accumulator[0] / accumulator[1]

        def accumulate(self, accumulator, value, weight):
            accumulator[0] += value * weight
            accumulator[1] += weight

        def retract(self, accumulator, value, weight):
            accumulator[0] -= value * weight
            accumulator[1] -= weight

        def get_result_type(self):
            return DataTypes.BIGINT()

        def get_accumulator_type(self):
            return DataTypes.ROW([
                DataTypes.FIELD("f0", DataTypes.BIGINT()),
                DataTypes.FIELD("f1", DataTypes.BIGINT())])

    env_settings = EnvironmentSettings.in_streaming_mode()
    table_env = TableEnvironment.create(env_settings)

    # the result type and accumulator type can also be specified in the udaf decorator:
    # weighted_avg = udaf(WeightedAvg(), result_type=DataTypes.BIGINT(), accumulator_type=...)
    weighted_avg = udaf(WeightedAvg())

    t = table_env.from_elements([(1, 2, "Lee"),
                                 (3, 4, "Jay"),
                                 (5, 6, "Jay"),
                                 (7, 8, "Lee")]).alias("value", "count", "name")

    # call function "inline" without registration in Table API
    result = t.group_by(col("name")).select(weighted_avg(col("value"), col("count")).alias("avg")).execute()
    result.print()

    # register function
    table_env.create_temporary_function("weighted_avg", WeightedAvg())

    # call registered function in Table API
    result = t.group_by(col("name")).select(call("weighted_avg", col("value"), col("count")).alias("avg")).execute()
    result.print()

    # register table
    table_env.create_temporary_view("source", t)

    # call registered function in SQL
    result = table_env.sql_query(
        "SELECT weighted_avg(`value`, `count`) AS avg FROM source GROUP BY name").execute()
    result.print()

    # use the general Python aggregate function in GroupBy Window Aggregation
    # (note: this requires the table to declare a time attribute named "rowtime")
    tumble_window = Tumble.over(lit(1).hours) \
        .on(col("rowtime")) \
        .alias("w")

    result = t.window(tumble_window) \
        .group_by(col('w'), col('name')) \
        .select(col('w').start, col('w').end, weighted_avg(col('value'), col('count'))) \
        .execute()
    result.print()

The accumulate(...) method of our WeightedAvg class takes three input arguments. The first one is the accumulator and the other two are user-defined inputs. In order to calculate a weighted average value, the accumulator needs to store the weighted sum and count of all the data that have already been accumulated. In our example, we use a Row object as the accumulator. Accumulators will be managed by Flink’s checkpointing mechanism and are restored in case of failover to ensure exactly-once semantics.

Mandatory and Optional Methods

The following methods are mandatory for each AggregateFunction:

  • create_accumulator()
  • accumulate(...)
  • get_value(...)

The following methods of AggregateFunction are required depending on the use case:

  • retract(...) is required when there are operations that could generate retraction messages before the current aggregation operation, e.g. group aggregate, outer join.
    This method is optional, but it is strongly recommended to implement it so that the UDAF can be used in any use case.
  • merge(...) is required for session window and hop window aggregations (see the sketch after this list).
  • get_result_type() and get_accumulator_type() are required if the result type and accumulator type are not specified in the udaf decorator.
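For example, merge(...) for the WeightedAvg example above could look as follows; this is a sketch, assuming the Row(sum, count) accumulator layout and the merge(self, accumulator, accumulators) signature:

    # fold the partial accumulators of merged (e.g. session) windows into the current one
    def merge(self, accumulator, accumulators):
        for other in accumulators:
            accumulator[0] += other[0]
            accumulator[1] += other[1]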

ListView and MapView

If an accumulator needs to store large amounts of data, pyflink.table.ListView and pyflink.table.MapView can be used instead of list and dict. These two data structures provide functionality similar to list and dict, but usually achieve better performance by leveraging Flink’s state backend to eliminate unnecessary state access. You can use them by declaring DataTypes.LIST_VIEW(...) and DataTypes.MAP_VIEW(...) in the accumulator type, e.g.:

    from pyflink.common import Row
    from pyflink.table import AggregateFunction, DataTypes, ListView

    class ListViewConcatAggregateFunction(AggregateFunction):

        def get_value(self, accumulator):
            # the ListView is iterable
            return accumulator[1].join(accumulator[0])

        def create_accumulator(self):
            # Row(list of strings, separator)
            return Row(ListView(), '')

        def accumulate(self, accumulator, *args):
            accumulator[1] = args[1]
            # the ListView supports add, clear and iterate operations
            accumulator[0].add(args[0])

        def get_accumulator_type(self):
            return DataTypes.ROW([
                # declare the first column of the accumulator as a string ListView
                DataTypes.FIELD("f0", DataTypes.LIST_VIEW(DataTypes.STRING())),
                # the second column holds the separator string
                DataTypes.FIELD("f1", DataTypes.STRING())])

        def get_result_type(self):
            return DataTypes.STRING()
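MapView can be declared in the same way. Below is a hedged count-distinct sketch (the class and field names are illustrative), which records the values seen so far as keys of a string-to-boolean MapView:

    from pyflink.common import Row
    from pyflink.table import AggregateFunction, DataTypes, MapView

    class CountDistinctAggregateFunction(AggregateFunction):

        def get_value(self, accumulator):
            return accumulator[1]

        def create_accumulator(self):
            # Row(seen values, distinct count)
            return Row(MapView(), 0)

        def accumulate(self, accumulator, value):
            # the MapView supports get, put, contains and similar operations
            if not accumulator[0].contains(value):
                accumulator[0].put(value, True)
                accumulator[1] += 1

        def get_accumulator_type(self):
            return DataTypes.ROW([
                # declare the first column of the accumulator as a MapView
                DataTypes.FIELD("f0", DataTypes.MAP_VIEW(DataTypes.STRING(), DataTypes.BOOLEAN())),
                DataTypes.FIELD("f1", DataTypes.BIGINT())])

        def get_result_type(self):
            return DataTypes.BIGINT()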

Currently, there are two limitations when using ListView and MapView:

  1. The accumulator must be a Row.
  2. The ListView and MapView must be first-level children of the Row accumulator.

Please refer to the documentation of the corresponding classes for more information about this advanced feature.

NOTE: To reduce the cost of transferring data between the Python UDF worker and the Java process when accessing data in Flink state (e.g. accumulators and data views), there is a cache layer between the raw state handler and the Python state backend. You can adjust the values of the following configuration options to tune the cache layer for best performance: python.state.cache-size, python.map-state.read-cache-size, python.map-state.write-cache-size, python.map-state.iterate-response-batch-size. For more details please refer to the Python Configuration Documentation.
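For instance, a sketch of adjusting these options through the TableConfig (the values are illustrative, not recommendations):

    # tune the Python state cache layer; pick values based on your workload
    config = table_env.get_config().get_configuration()
    config.set_string("python.state.cache-size", "1000")
    config.set_string("python.map-state.read-cache-size", "1000")
    config.set_string("python.map-state.write-cache-size", "1000")
    config.set_string("python.map-state.iterate-response-batch-size", "1000")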

Table Aggregate Functions

A user-defined table aggregate function (UDTAGG) maps scalar values of multiple rows to zero, one, or multiple rows (or structured types). The returned record may consist of one or more fields. If an output record consists of only a single field, the structured record can be omitted, and a scalar value can be emitted that will be implicitly wrapped into a row by the runtime.

NOTE: Currently the general user-defined table aggregate function is only supported in the GroupBy aggregation in streaming mode.

Similar to an aggregate function, the behavior of a table aggregate is centered around the concept of an accumulator. The accumulator is an intermediate data structure that stores the aggregated values until a final aggregation result is computed.

For each set of rows that needs to be aggregated, the runtime will create an empty accumulator by calling create_accumulator(). Subsequently, the accumulate(...) method of the function is called for each input row to update the accumulator. Once all rows have been processed, the emit_value(...) method of the function is called to compute and return the final result.

The following example illustrates the aggregation process:

[Figure: UDTAGG mechanism]

In the example, we assume a table that contains data about beverages. The table consists of three columns (id, name, and price) and 5 rows. We would like to find the 2 highest prices of all beverages in the table, i.e., perform a TOP2() table aggregation. We need to consider each of the 5 rows. The result is a table with the top 2 values.

In order to define a table aggregate function, one has to extend the base class TableAggregateFunction in pyflink.table and implement one or more evaluation methods named accumulate(...).

The result type and accumulator type of the aggregate function can be specified by one of the following two approaches:

  • Implement the methods named get_result_type() and get_accumulator_type().
  • Wrap the function instance with the decorator udtaf in pyflink.table.udf and specify the parameters result_type and accumulator_type.

The following example shows how to define your own table aggregate function and call it in a query.

    from pyflink.common import Row
    from pyflink.table import DataTypes, TableEnvironment, EnvironmentSettings
    from pyflink.table.expressions import col
    from pyflink.table.udf import udtaf, TableAggregateFunction

    class Top2(TableAggregateFunction):

        def emit_value(self, accumulator):
            yield Row(accumulator[0])
            yield Row(accumulator[1])

        def create_accumulator(self):
            return [None, None]

        def accumulate(self, accumulator, row):
            if row[0] is not None:
                if accumulator[0] is None or row[0] > accumulator[0]:
                    accumulator[1] = accumulator[0]
                    accumulator[0] = row[0]
                elif accumulator[1] is None or row[0] > accumulator[1]:
                    accumulator[1] = row[0]

        def get_accumulator_type(self):
            return DataTypes.ARRAY(DataTypes.BIGINT())

        def get_result_type(self):
            return DataTypes.ROW(
                [DataTypes.FIELD("a", DataTypes.BIGINT())])

    env_settings = EnvironmentSettings.in_streaming_mode()
    table_env = TableEnvironment.create(env_settings)

    # the result type and accumulator type can also be specified in the udtaf decorator:
    # top2 = udtaf(Top2(), result_type=DataTypes.ROW([DataTypes.FIELD("a", DataTypes.BIGINT())]), accumulator_type=DataTypes.ARRAY(DataTypes.BIGINT()))
    top2 = udtaf(Top2())

    t = table_env.from_elements([(1, 'Hi', 'Hello'),
                                 (3, 'Hi', 'hi'),
                                 (5, 'Hi2', 'hi'),
                                 (7, 'Hi', 'Hello'),
                                 (2, 'Hi', 'Hello')],
                                ['a', 'b', 'c'])

    # call function "inline" without registration in Table API
    t.group_by(col('b')).flat_aggregate(top2).select(col('*')).execute().print()

    # the result is:
    # +----+--------------------------------+----------------------+
    # | op |                              b |                    a |
    # +----+--------------------------------+----------------------+
    # | +I |                             Hi |                    1 |
    # | +I |                             Hi |               <NULL> |
    # | -D |                             Hi |                    1 |
    # | -D |                             Hi |               <NULL> |
    # | +I |                             Hi |                    7 |
    # | +I |                             Hi |                    3 |
    # | +I |                            Hi2 |                    5 |
    # | +I |                            Hi2 |               <NULL> |
    # +----+--------------------------------+----------------------+

The accumulate(...) method of our Top2 class takes two inputs. The first one is the accumulator and the second one is the user-defined input. In order to calculate a result, the accumulator needs to store the 2 highest values of all the data that has been accumulated. Accumulators are automatically managed by Flink’s checkpointing mechanism and are restored in case of a failure to ensure exactly-once semantics. The two highest values are emitted as separate rows by emit_value(...).

Mandatory and Optional Methods

The following methods are mandatory for each TableAggregateFunction:

  • create_accumulator()
  • accumulate(...)
  • emit_value(...)

The following methods of TableAggregateFunction are required depending on the use case:

  • retract(...) is required when there are operations that could generate retraction messages before the current aggregation operation, e.g. group aggregate, outer join.
    This method is optional, but it is strongly recommended to implement it so that the UDTAF can be used in any use case.
  • get_result_type() and get_accumulator_type() are required if the result type and accumulator type are not specified in the udtaf decorator.

ListView and MapView

Similar to aggregate functions, ListView and MapView can also be used in a table aggregate function.

    from pyflink.common import Row
    from pyflink.table import ListView
    from pyflink.table.types import DataTypes
    from pyflink.table.udf import TableAggregateFunction

    class ListViewConcatTableAggregateFunction(TableAggregateFunction):

        def emit_value(self, accumulator):
            result = accumulator[1].join(accumulator[0])
            yield Row(result)
            yield Row(result)

        def create_accumulator(self):
            return Row(ListView(), '')

        def accumulate(self, accumulator, *args):
            accumulator[1] = args[1]
            accumulator[0].add(args[0])

        def get_accumulator_type(self):
            return DataTypes.ROW([
                DataTypes.FIELD("f0", DataTypes.LIST_VIEW(DataTypes.STRING())),
                # the second column holds the separator string
                DataTypes.FIELD("f1", DataTypes.STRING())])

        def get_result_type(self):
            return DataTypes.ROW([DataTypes.FIELD("a", DataTypes.STRING())])
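For illustration, a hedged usage sketch of the function above (the table, column names, and separator are hypothetical; the second argument supplies the separator stored in the accumulator):

    from pyflink.table import EnvironmentSettings, TableEnvironment
    from pyflink.table.expressions import col, lit
    from pyflink.table.udf import udtaf

    env_settings = EnvironmentSettings.in_streaming_mode()
    table_env = TableEnvironment.create(env_settings)

    t = table_env.from_elements([(1, 'Hi'), (1, 'Hello'), (2, 'Hello')], ['key', 'word'])
    concat = udtaf(ListViewConcatTableAggregateFunction())

    # emits the concatenated string twice per group, as defined in emit_value
    t.group_by(col('key')) \
        .flat_aggregate(concat(col('word'), lit(','))) \
        .select(col('*')) \
        .execute().print()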