Dependency Management

It is common to use dependencies inside Python API programs. For example, users may need to use third-party Python libraries in Python user-defined functions. In addition, in scenarios such as machine learning prediction, users may want to load a machine learning model inside the Python user-defined functions.

When a PyFlink job is executed locally, users can install the third-party Python libraries into the local Python environment, download the machine learning model to the local machine, and so on. However, this approach doesn't work well when users want to submit PyFlink jobs to remote clusters. The following sections introduce the options PyFlink provides for these requirements.

Note Both the Python DataStream API and the Python Table API provide APIs for each kind of dependency. If you mix the Python DataStream API and the Python Table API in a single job, you should specify the dependencies via the Python DataStream API so that they work for both APIs.

JAR Dependencies

If third-party JARs are used, you can specify them in the Python Table API as follows:

    # Specify a list of jar URLs via "pipeline.jars". The jars are separated by ";"
    # and will be uploaded to the cluster.
    # NOTE: Only local file URLs (start with "file://") are supported.
    table_env.get_config().get_configuration().set_string("pipeline.jars", "file:///my/jar/path/connector.jar;file:///my/jar/path/udf.jar")

    # It looks like the following on Windows:
    table_env.get_config().get_configuration().set_string("pipeline.jars", "file:///E:/my/jar/path/connector.jar;file:///E:/my/jar/path/udf.jar")

    # Specify a list of URLs via "pipeline.classpaths". The URLs are separated by ";"
    # and will be added to the classpath during job execution.
    # NOTE: The paths must specify a protocol (e.g. file://) and users should ensure
    # that the URLs are accessible on both the client and the cluster.
    table_env.get_config().get_configuration().set_string("pipeline.classpaths", "file:///my/jar/path/connector.jar;file:///my/jar/path/udf.jar")

or in the Python DataStream API as follows:

    # Use add_jars() to add local jars; the jars will be uploaded to the cluster.
    # NOTE: Only local file URLs (start with "file://") are supported.
    stream_execution_environment.add_jars("file:///my/jar/path/connector1.jar", "file:///my/jar/path/connector2.jar")

    # It looks like the following on Windows:
    stream_execution_environment.add_jars("file:///E:/my/jar/path/connector1.jar", "file:///E:/my/jar/path/connector2.jar")

    # Use add_classpaths() to add the dependent jar URLs into the classpath.
    # The URLs will also be added to the classpath of both the client and the cluster.
    # NOTE: The paths must specify a protocol (e.g. file://) and users should ensure
    # that the URLs are accessible on both the client and the cluster.
    stream_execution_environment.add_classpaths("file:///my/jar/path/connector1.jar", "file:///my/jar/path/connector2.jar")

or through the command line argument --jarfile when submitting the job.

Note Only one jar file can be specified with the command line argument --jarfile, so you need to build a fat jar if there are multiple jar files.

Python Dependencies

Python libraries

You may want to use third-party Python libraries in Python user-defined functions. There are multiple ways to specify them.

You can specify them inside the code using the Python Table API as follows:

    table_env.add_python_file(file_path)

or using the Python DataStream API as follows:

    stream_execution_environment.add_python_file(file_path)

You can also specify the Python libraries using the configuration option python.files or via the command line arguments -pyfs or --pyFiles when submitting the job.

Note The Python libraries can be local files or local directories. They will be added to the PYTHONPATH of the Python UDF worker.
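
For illustration, here is a minimal sketch of how a file added this way becomes importable inside a user-defined function. The module name utils and its increment() helper are hypothetical, and table_env is assumed to be an existing TableEnvironment:

    from pyflink.table import DataTypes
    from pyflink.table.udf import udf

    # utils.py is a hypothetical local module that defines increment(x).
    # It will be added to the PYTHONPATH of the Python UDF worker.
    table_env.add_python_file("/path/to/utils.py")

    @udf(result_type=DataTypes.BIGINT())
    def plus_one(x):
        # Because utils.py is on the PYTHONPATH of the UDF worker,
        # it can be imported inside the function.
        from utils import increment
        return increment(x)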

requirements.txt

You can also specify a requirements.txt file that defines the third-party Python dependencies. These Python dependencies will be installed into the working directory and added to the PYTHONPATH of the Python UDF worker.

You can prepare the requirements.txt file manually as follows:

    echo numpy==1.16.5 >> requirements.txt
    echo pandas==1.0.0 >> requirements.txt

or using pip freeze, which lists all the packages installed in the current Python environment:

    pip freeze > requirements.txt

The content of the requirements.txt file may look like the following:

    numpy==1.16.5
    pandas==1.0.0

You can edit it manually, removing unnecessary entries, adding extra entries, and so on.

The requirements.txt file can then be specified inside the code using the Python Table API as follows:

    # requirements_cache_dir is optional
    table_env.set_python_requirements(
        requirements_file_path="/path/to/requirements.txt",
        requirements_cache_dir="cached_dir")

or using the Python DataStream API as follows:

    # requirements_cache_dir is optional
    stream_execution_environment.set_python_requirements(
        requirements_file_path="/path/to/requirements.txt",
        requirements_cache_dir="cached_dir")

Note For dependencies that cannot be accessed in the cluster, a directory containing the installation packages of these dependencies can be specified via the parameter requirements_cache_dir. It will be uploaded to the cluster to support offline installation. You can prepare the requirements_cache_dir as follows:

    pip download -d cached_dir -r requirements.txt --no-binary :all:

Note Please make sure that the prepared packages match the platform of the cluster and the Python version used.

You can also specify the requirements.txt file using the configuration option python.requirements or via the command line arguments -pyreq or --pyRequirements when submitting the job.

Note The packages specified in the requirements.txt file will be installed using pip, so please make sure that pip (version >= 7.1.0) and setuptools (version >= 37.0.0) are available.
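
As a minimal sketch, the configuration-based approach mentioned above might look like this in code (assuming table_env is an existing TableEnvironment and the path is a placeholder):

    # Point the "python.requirements" option at the requirements file instead of
    # calling set_python_requirements() directly.
    table_env.get_config().get_configuration().set_string(
        "python.requirements", "/path/to/requirements.txt")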

Archives

You may also want to specify archive files. Archive files can be used to distribute custom Python virtual environments, data files, and so on.

You can specify the archive files inside the code using the Python Table API as follows:

    table_env.add_python_archive(archive_path="/path/to/archive_file", target_dir=None)

or using the Python DataStream API as follows:

    stream_execution_environment.add_python_archive(archive_path="/path/to/archive_file", target_dir=None)

Note The parameter target_dir is optional. If specified, the archive file will be extracted into a directory with the name given by target_dir during execution. Otherwise, the archive file will be extracted into a directory with the same name as the archive file.

Suppose you have specified the archive file as follows:

  1. table_env.add_python_archive("/path/to/py_env.zip", "myenv")

Then, you can access the content of the archive file in Python user-defined functions as follows:

    def my_udf():
        with open("myenv/py_env/data/data.txt") as f:
            ...

If you have not specified the parameter target_dir:

  1. table_env.add_python_archive("/path/to/py_env.zip")

You can then access the content of the archive file in Python user-defined functions as follows:

    def my_udf():
        with open("py_env.zip/py_env/data/data.txt") as f:
            ...

Note The archive file will be extracted into the working directory of the Python UDF worker, so you can access the files inside the archive file using relative paths.

You can also specify the archive files using the configuration option python.archives or via the command line arguments -pyarch or --pyArchives when submitting the job.

Note If the archive file contains a Python virtual environment, please make sure that the Python virtual environment matches the platform that the cluster is running on.

Note Currently, only zip-format files (e.g., zip, jar, whl, egg) and tar files (e.g., tar, tar.gz, tgz) are supported.
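
Tying this back to the machine learning use case mentioned in the introduction, here is a hypothetical sketch of distributing a pickled model as an archive and loading it inside a user-defined function. The archive name model.zip, the file model.pkl, and the scikit-learn-style predict() call are all assumptions, and table_env is assumed to be an existing TableEnvironment:

    from pyflink.table import DataTypes
    from pyflink.table.udf import udf

    # model.zip is a hypothetical archive containing a pickled model at model.pkl.
    table_env.add_python_archive("/path/to/model.zip", "model_dir")

    _model = None

    @udf(result_type=DataTypes.DOUBLE())
    def predict(x):
        import pickle
        global _model
        if _model is None:
            # The archive is extracted into the working directory of the
            # Python UDF worker, so a relative path works here. The model is
            # loaded once per worker and cached for subsequent calls.
            with open("model_dir/model.pkl", "rb") as f:
                _model = pickle.load(f)
        # Assumes a scikit-learn-like model exposing predict().
        return float(_model.predict([[x]])[0])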

Python interpreter

You can specify the path of the Python interpreter used to execute the Python worker.

You can specify the Python interpreter inside the code using the Python Table API as follows:

    table_env.get_config().set_python_executable("/path/to/python")

or using the Python DataStream API as follows:

    stream_execution_environment.set_python_executable("/path/to/python")

Using a Python interpreter contained in an archive file is also supported:

    # Python Table API
    table_env.add_python_archive("/path/to/py_env.zip", "venv")
    table_env.get_config().set_python_executable("venv/py_env/bin/python")

    # Python DataStream API
    stream_execution_environment.add_python_archive("/path/to/py_env.zip", "venv")
    stream_execution_environment.set_python_executable("venv/py_env/bin/python")

You can also specify the Python interpreter using the configuration option python.executable or via the command line arguments -pyexec or --pyExecutable when submitting the job.

Note If the path of the Python interpreter refers to a Python archive file, a relative path should be used instead of an absolute path.
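
A minimal sketch of the configuration-based variant, assuming table_env is an existing TableEnvironment and the venv archive has been registered as shown above:

    # Equivalent to calling set_python_executable(); note the relative path into
    # the extracted archive.
    table_env.get_config().get_configuration().set_string(
        "python.executable", "venv/py_env/bin/python")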

Python interpreter of client

Python is needed at the client side to parse the Python user-defined functions when compiling the job.

You can specify a custom Python interpreter to be used at the client side by activating it in the current session:

    source my_env/bin/activate

or specify it using the configuration option python.client.executable, the command line arguments -pyclientexec or --pyClientExecutable, or the environment variable PYFLINK_CLIENT_EXECUTABLE.
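
For example, a minimal sketch of the configuration-based approach, assuming table_env is an existing TableEnvironment and the path is a placeholder:

    # Point the client-side compilation to a custom Python interpreter.
    table_env.get_config().get_configuration().set_string(
        "python.client.executable", "/path/to/python")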

How to specify Python Dependencies in Java/Scala Programs

Python user-defined functions can also be used in Java Table API programs or pure SQL programs. The following code shows a simple example of how to use Python user-defined functions in a Java Table API program:

    import org.apache.flink.configuration.CoreOptions;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    TableEnvironment tEnv = TableEnvironment.create(
        EnvironmentSettings.inBatchMode());
    tEnv.getConfig().getConfiguration().set(CoreOptions.DEFAULT_PARALLELISM, 1);

    // register the Python UDF
    tEnv.executeSql("create temporary system function add_one as 'add_one.add_one' language python");

    tEnv.createTemporaryView("source", tEnv.fromValues(1L, 2L, 3L).as("a"));

    // use Python UDF in the Java Table API program
    tEnv.executeSql("select add_one(a) as a from source").collect();

You can refer to the SQL documentation on CREATE FUNCTION for more details on how to create Python user-defined functions using SQL statements.

The Python dependencies can then be specified via the Python config options, such as python.archives, python.files, python.requirements, python.client.executable, python.executable, etc., or through command line arguments when submitting the job.
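
For reference, a minimal sketch of what the add_one.py module referenced by the SQL statement above might look like; the module and function names come from the statement, while the function body and result type are assumptions:

    # add_one.py
    from pyflink.table import DataTypes
    from pyflink.table.udf import udf

    @udf(result_type=DataTypes.BIGINT())
    def add_one(i):
        return i + 1

Such a module could, for example, be shipped to the cluster via the python.files config option mentioned above.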