Vectorized User-defined Functions

Vectorized Python user-defined functions are functions which are executed by transferring batches of elements between the JVM and the Python VM in Arrow columnar format. Their performance is usually much better than that of non-vectorized Python user-defined functions, as the serialization/deserialization and invocation overhead is greatly reduced. Besides, users can leverage popular Python libraries such as Pandas and NumPy, which are highly optimized and provide high-performance data structures and functions, to implement vectorized Python user-defined functions. Vectorized user-defined functions are defined in the same way as non-vectorized user-defined functions: users only need to add the extra parameter func_type="pandas" in the decorator udf or udaf to mark a function as vectorized.
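
For example, a vectorized scalar function differs from its non-vectorized counterpart only in that parameter. The following is a minimal sketch; the function names are illustrative:

from pyflink.table import DataTypes
from pyflink.table.udf import udf

# non-vectorized: called once per row, i and j are scalar values
@udf(result_type=DataTypes.BIGINT())
def add_rows(i, j):
    return i + j

# vectorized: called once per batch, i and j are pandas.Series;
# the definition differs only in func_type="pandas"
@udf(result_type=DataTypes.BIGINT(), func_type="pandas")
def add_batches(i, j):
    return i + j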

NOTE: Python UDF execution requires Python 3.5, 3.6, 3.7 or 3.8 with PyFlink installed. It’s required on both the client side and the cluster side.

Vectorized Scalar Functions

Vectorized Python scalar functions take pandas.Series as the inputs and return a pandas.Series of the same length as the output. Internally, Flink splits the input elements into batches, converts each batch of input elements into pandas.Series and then calls the user-defined vectorized Python scalar function for each batch. Please refer to the config option python.fn-execution.arrow.batch.size for more details on how to configure the batch size.
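
The batch size can be tuned on the table config, e.g. as in the following sketch, which assumes an existing TableEnvironment named table_env:

# cap the number of elements per Arrow batch at 10000
table_env.get_config().get_configuration().set_string(
    "python.fn-execution.arrow.batch.size", "10000")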

Vectorized Python scalar functions can be used anywhere non-vectorized Python scalar functions can be used.

The following example shows how to define your own vectorized Python scalar function which computes the sum of two columns, and how to use it in a query:

from pyflink.dataset import ExecutionEnvironment
from pyflink.table import BatchTableEnvironment, DataTypes
from pyflink.table.udf import udf

@udf(result_type=DataTypes.BIGINT(), func_type="pandas")
def add(i, j):
    return i + j

env = ExecutionEnvironment.get_execution_environment()
table_env = BatchTableEnvironment.create(env)

# use the vectorized Python scalar function in Python Table API
my_table.select(add(my_table.bigint, my_table.bigint))

# use the vectorized Python scalar function in SQL API
table_env.create_temporary_function("add", add)
table_env.sql_query("SELECT add(bigint, bigint) FROM MyTable")

Vectorized Aggregate Functions

Vectorized Python aggregate functions take one or more pandas.Series as the inputs and return one scalar value as the output.

Note RowType and MapType are not supported as the return type for the time being.

Vectorized Python aggregate functions can be used in GroupBy Aggregation (Batch), GroupBy Window Aggregation (Batch and Stream) and Over Window Aggregation (Batch and Stream bounded over window). For more details on the usage of aggregations, you can refer to the relevant documentation.

Note Pandas UDAF does not support partial aggregation. Besides, all the data for a group or window is loaded into memory at the same time during execution, so you must make sure that the data of a group or window fits into memory.

Note Pandas UDAF is only supported in Blink Planner.

The following example shows how to define your own vectorized Python aggregate function which computes the mean, and how to use it in GroupBy Aggregation, GroupBy Window Aggregation and Over Window Aggregation:

from pyflink.table import BatchTableEnvironment, DataTypes, EnvironmentSettings
from pyflink.table import expressions as expr
from pyflink.table.udf import udaf
from pyflink.table.window import Tumble

@udaf(result_type=DataTypes.FLOAT(), func_type="pandas")
def mean_udaf(v):
    return v.mean()

table_env = BatchTableEnvironment.create(
    environment_settings=EnvironmentSettings.new_instance()
        .in_batch_mode().use_blink_planner().build())

my_table = ...  # type: Table, table schema: [a: String, b: BigInt, c: BigInt]

# use the vectorized Python aggregate function in GroupBy Aggregation
my_table.group_by(my_table.a).select(my_table.a, mean_udaf(my_table.b))

# use the vectorized Python aggregate function in GroupBy Window Aggregation
tumble_window = Tumble.over(expr.lit(1).hours) \
    .on(expr.col("rowtime")) \
    .alias("w")

my_table.window(tumble_window) \
    .group_by("w") \
    .select("w.start, w.end, mean_udaf(b)")

# use the vectorized Python aggregate function in Over Window Aggregation
table_env.create_temporary_function("mean_udaf", mean_udaf)
table_env.sql_query("""
    SELECT a,
           mean_udaf(b)
           OVER (PARTITION BY a ORDER BY rowtime
                 ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
    FROM MyTable""")

There are many ways to define a vectorized Python aggregate function. The following examples show the different ways to define a vectorized Python aggregate function which takes two columns of BIGINT as the inputs and returns the sum of their maximum values as the result.

import functools

from pyflink.table import DataTypes
from pyflink.table.udf import AggregateFunction, udaf

# option 1: extending the base class `AggregateFunction`
class MaxAdd(AggregateFunction):

    def open(self, function_context):
        mg = function_context.get_metric_group()
        self.counter = mg.add_group("key", "value").counter("my_counter")
        self.counter_sum = 0

    def get_value(self, accumulator):
        # counter
        self.counter.inc(10)
        self.counter_sum += 10
        return accumulator[0]

    def create_accumulator(self):
        return []

    def accumulate(self, accumulator, *args):
        result = 0
        for arg in args:
            result += arg.max()
        accumulator.append(result)

max_add = udaf(MaxAdd(), result_type=DataTypes.BIGINT(), func_type="pandas")

# option 2: Python function
@udaf(result_type=DataTypes.BIGINT(), func_type="pandas")
def max_add(i, j):
    return i.max() + j.max()

# option 3: lambda function
max_add = udaf(lambda i, j: i.max() + j.max(), result_type=DataTypes.BIGINT(), func_type="pandas")

# option 4: callable function
class CallableMaxAdd(object):
    def __call__(self, i, j):
        return i.max() + j.max()

max_add = udaf(CallableMaxAdd(), result_type=DataTypes.BIGINT(), func_type="pandas")

# option 5: partial function
def partial_max_add(i, j, k):
    return i.max() + j.max() + k

max_add = udaf(functools.partial(partial_max_add, k=1), result_type=DataTypes.BIGINT(), func_type="pandas")
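
However it is defined, the resulting max_add can then be used like the aggregate in the previous example, e.g. as in the following sketch, which reuses the my_table schema [a: String, b: BigInt, c: BigInt] from above:

# per group of a: max(b) + max(c)
my_table.group_by(my_table.a).select(my_table.a, max_add(my_table.b, my_table.c))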