Building Python function-based components

Building your own lightweight pipeline components using Python


A Kubeflow Pipelines component is a self-contained set of code that performs one step in your ML workflow. A pipeline component is composed of:

  • The component code, which implements the logic needed to perform a step in your ML workflow.

  • A component specification, which defines the following:

    • The component’s metadata, such as its name and description.
    • The component’s interface, which defines the component’s inputs and outputs.
    • The component’s implementation: the Docker container image to run, how to pass inputs to your component code, and how to get the component’s outputs.

Python function-based components make it easier to iterate quickly by letting you build your component code as a Python function and generating the component specification for you. This document describes how to build Python function-based components and use them in your pipeline.

Before you begin

  1. Run the following command to install the Kubeflow Pipelines SDK. If you run this command in a Jupyter notebook, restart the kernel after installing the SDK.

```shell
$ pip3 install kfp --upgrade
```

  2. Import the kfp and kfp.components packages.

```python
import kfp
import kfp.components as comp
```

  3. Create an instance of the kfp.Client class. To find your Kubeflow Pipelines cluster’s hostname, open the Kubeflow Pipelines user interface in your browser. The URL of the Kubeflow Pipelines user interface is something like https://my-cluster.my-organization.com/pipelines. In this case, the hostname is my-cluster.my-organization.com.

```python
# If you run this command on a Jupyter notebook running on Kubeflow, you can
# exclude the host parameter.
# client = kfp.Client()
client = kfp.Client(host='<your-kubeflow-pipelines-host-name>')
```

For more information about the Kubeflow Pipelines SDK, see the SDK reference guide.

Getting started with Python function-based components

This section demonstrates how to get started building Python function-based components by walking through the process of creating a simple component.

  1. Define your component’s code as a standalone Python function. In this example, the function adds two floats and returns the sum of the two arguments.

```python
def add(a: float, b: float) -> float:
    '''Calculates sum of two arguments'''
    return a + b
```

  2. Use kfp.components.create_component_from_func to generate the component specification YAML and return a factory function that you can use to create kfp.dsl.ContainerOp class instances for your pipeline. The component specification YAML is a reusable and shareable definition of your component.

```python
add_op = comp.create_component_from_func(
    add, output_component_file='add_component.yaml')
```
  3. Create and run your pipeline. Learn more about creating and running pipelines.

```python
import kfp.dsl as dsl

@dsl.pipeline(
    name='Addition pipeline',
    description='An example pipeline that performs addition calculations.'
)
def add_pipeline(
    a='1',
    b='7',
):
    # Passes a pipeline parameter and a constant value to the `add_op` factory
    # function.
    first_add_task = add_op(a, 4)
    # Passes an output reference from `first_add_task` and a pipeline parameter
    # to the `add_op` factory function. For operations with a single return
    # value, the output reference can be accessed as `task.output` or
    # `task.outputs['output_name']`.
    second_add_task = add_op(first_add_task.output, b)

# Specify argument values for your pipeline run.
arguments = {'a': '7', 'b': '8'}

# Create a pipeline run, using the client you initialized in a prior step.
client.create_run_from_pipeline_func(add_pipeline, arguments=arguments)

Building Python function-based components

Use the following instructions to build a Python function-based component:

  1. Define a standalone Python function. This function must meet the following requirements:

    • It should not use any code declared outside of the function definition.
    • Import statements must be added inside the function.
    • Helper functions must be defined inside the function.

  2. Kubeflow Pipelines uses your function’s inputs and outputs to define your component’s interface. Learn more about passing data between components. Your function’s inputs and outputs must meet the following requirements:

    • If the function accepts numeric values as parameters, the parameters must have type annotations. Otherwise, these inputs are passed to your function as strings.
    • If the function accepts or returns large amounts of data or complex data types, you must pass that data as a file.

  3. (Optional.) If your function has complex dependencies, choose or build a container image for your Python function to run in. Learn more about selecting or building your component’s container image.

  4. Call kfp.components.create_component_from_func(func) to convert your function into a pipeline component. This method accepts the following parameters:

    • func: The Python function to convert.
    • base_image: (Optional.) Specify the Docker container image to run this function in. Learn more about selecting or building a container image.
    • output_component_file: (Optional.) Writes your component definition to a file. You can use this file to share the component with colleagues or reuse it in different pipelines.
    • packages_to_install: (Optional.) A list of versioned Python packages to install before running your function.

Using and installing Python packages

When Kubeflow Pipelines runs your pipeline, each component runs within a Docker container image on a Kubernetes Pod. To load the packages that your Python function depends on, one of the following must be true:

  • The package must be installed on the container image.
  • The package must be defined using the packages_to_install parameter of the kfp.components.create_component_from_func(func) function.
  • Your function must install the package. For example, your function can use the subprocess module to run a command like pip install that installs a package.
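The third option can be sketched as follows. The component function and the package pin below are hypothetical, and the helper that builds the pip command is only for illustration:

```python
import subprocess
import sys

def pip_install_command(packages):
    # Build the pip command using the interpreter that runs the component, so
    # the packages land in the environment the function imports from.
    return [sys.executable, '-m', 'pip', 'install', *packages]

def count_tokens(text: str) -> int:
    # Hypothetical component that installs its dependency at runtime before
    # importing it (the package pin is illustrative).
    subprocess.check_call(pip_install_command(['nltk==3.5']))
    import nltk  # imported only after the install succeeds
    return len(nltk.word_tokenize(text))
```

Installing packages this way adds time to every run, so prefer the container image or packages_to_install options for heavy dependencies.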

Selecting or building a container image

Currently, if you do not specify a container image, your Python function-based component uses the python:3.7 container image. If your function has complex dependencies, you may benefit from using a container image that has your dependencies preinstalled, or from building a custom container image. Preinstalling your dependencies reduces your component’s runtime, since your component does not need to download and install packages each time it runs.

Many frameworks, such as TensorFlow and PyTorch, and cloud service providers offer prebuilt container images that have common dependencies installed.

If a prebuilt container is not available, you can build a custom container image with your Python function’s dependencies. For more information about building a custom container, read the Dockerfile reference guide in the Docker documentation.

If you build or select a container image instead of using the default, the container image must use Python 3.5 or later.

Understanding how data is passed between components

When Kubeflow Pipelines runs your component, a container image is started in a Kubernetes Pod and your component’s inputs are passed in as command-line arguments. When your component has finished, the component’s outputs are returned as files.

Python function-based components make it easier to build pipeline components by building the component specification for you. Python function-based components also handle the complexity of passing inputs into your component and passing your function’s outputs back to your pipeline.

The following sections describe how to pass parameters by value and by file.

  • Parameters that are passed by value include numbers, booleans, and short strings. Kubeflow Pipelines passes parameters to your component by value, by passing the values as command-line arguments.
  • Parameters that are passed by file include CSV, images, and complex types. These files are stored in a location that is accessible to your component running on Kubernetes, such as a persistent volume claim or a cloud storage service. Kubeflow Pipelines passes parameters to your component by file, by passing their paths as a command-line argument.

Input and output parameter names

When you use the Kubeflow Pipelines SDK to convert your Python function to a pipeline component, the Kubeflow Pipelines SDK uses the function’s interface to define the interface of your component in the following ways.

  • Some arguments define input parameters.
  • Some arguments, such as those annotated with kfp.components.OutputPath, define output parameters.
  • The function’s return value is used as an output parameter. If the return value is a collections.namedtuple, the named tuple is used to return several small values.

Since you can pass parameters between components as a value or as a path, the Kubeflow Pipelines SDK removes common parameter suffixes that leak the component’s expected implementation. For example, a Python function-based component that ingests data and outputs CSV data may have an output argument that is defined as csv_path: comp.OutputPath(str). In this case, the output is the CSV data, not the path. So, the Kubeflow Pipelines SDK simplifies the output name to csv.

The Kubeflow Pipelines SDK uses the following rules to define the input and output parameter names in your component’s interface:

  • If the argument name ends with _path and the argument is annotated as a kfp.components.InputPath or kfp.components.OutputPath, the parameter name is the argument name with the trailing _path removed.
  • If the argument name ends with _file, the parameter name is the argument name with the trailing _file removed.
  • If you return a single small value from your component using the return statement, the output parameter is named output.
  • If you return several small values from your component by returning a collections.namedtuple, the Kubeflow Pipelines SDK uses the tuple’s field names as the output parameter names.

Otherwise, the Kubeflow Pipelines SDK uses the argument name as the parameter name.
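As an illustration only, the suffix-stripping rules can be sketched as a small helper. This function is not part of the Kubeflow Pipelines SDK; it simply restates the rules above in code:

```python
def parameter_name(argument_name: str, is_path_annotated: bool = False) -> str:
    # Illustrative helper, not part of the KFP SDK: applies the naming rules
    # described above to a function argument name.
    if is_path_annotated and argument_name.endswith('_path'):
        # _path is stripped only for InputPath/OutputPath annotated arguments.
        return argument_name[:-len('_path')]
    if argument_name.endswith('_file'):
        return argument_name[:-len('_file')]
    return argument_name

parameter_name('csv_path', is_path_annotated=True)  # 'csv'
parameter_name('report_file')                       # 'report'
parameter_name('learning_rate')                     # unchanged
```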

Passing parameters by value

Python function-based components make it easier to pass parameters between components by value (such as numbers, booleans, and short strings), by letting you define your component’s interface by annotating your Python function. The supported types are int, float, bool, and str. You can also pass list or dict instances by value, if they contain small values, such as int, float, bool, or str values. If you do not annotate your function, these input parameters are passed as strings.
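For example, a function whose inputs are all small values can annotate them with the built-in types. The function below is a hypothetical component used only for illustration:

```python
def format_greeting(name: str, times: int, shout: bool) -> str:
    # Each annotation tells the SDK how to deserialize the command-line
    # argument before calling the function; without annotations, every
    # input would arrive as a string.
    greeting = ('Hello, ' + name + '! ') * times
    return (greeting.upper() if shout else greeting).strip()
```

Because the function is plain Python, you can call it directly (for example, format_greeting('Ada', 2, False)) to test it before converting it into a component.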

If your component returns multiple outputs by value, annotate your function with the typing.NamedTuple type hint and use the collections.namedtuple function to return your function’s outputs as a new subclass of tuple.

You can also return metadata and metrics from your function.

The following example demonstrates how to return multiple outputs by value, including component metadata and metrics.

```python
from typing import NamedTuple

def multiple_return_values_example(a: float, b: float) -> NamedTuple(
    'ExampleOutputs',
    [
        ('sum', float),
        ('product', float),
        ('mlpipeline_ui_metadata', 'UI_metadata'),
        ('mlpipeline_metrics', 'Metrics')
    ]):
    """Example function that demonstrates how to return multiple values."""
    sum_value = a + b
    product_value = a * b

    # Export a sample tensorboard
    metadata = {
        'outputs': [{
            'type': 'tensorboard',
            'source': 'gs://ml-pipeline-dataset/tensorboard-train',
        }]
    }

    # Export two metrics
    metrics = {
        'metrics': [
            {
                'name': 'sum',
                'numberValue': float(sum_value),
            }, {
                'name': 'product',
                'numberValue': float(product_value),
            }
        ]
    }

    from collections import namedtuple
    example_output = namedtuple(
        'ExampleOutputs',
        ['sum', 'product', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
    return example_output(sum_value, product_value, metadata, metrics)
```

Passing parameters by file

Python function-based components make it easier to pass files to your component, or to return files from your component, by letting you annotate your Python function’s parameters to specify which parameters refer to a file. Your Python function’s parameters can refer to either input or output files. If your parameter is an output file, Kubeflow Pipelines passes your function a path or stream that you can use to store your output file.

The following example accepts a file as an input and returns two files as outputs.

```python
def split_text_lines(
        source_path: comp.InputPath(str),
        odd_lines_path: comp.OutputPath(str),
        even_lines_path: comp.OutputPath(str)):
    """Splits a text file into two files, with even lines going to one file
    and odd lines to the other."""
    with open(source_path, 'r') as reader:
        with open(odd_lines_path, 'w') as odd_writer:
            with open(even_lines_path, 'w') as even_writer:
                while True:
                    line = reader.readline()
                    if line == "":
                        break
                    odd_writer.write(line)
                    line = reader.readline()
                    if line == "":
                        break
                    even_writer.write(line)
```

In this example, the inputs and outputs are defined as parameters of the split_text_lines function. This lets Kubeflow Pipelines pass the path to the source data file and the paths to the output data files into the function.

To accept a file as an input parameter, use one of the following type annotations:

  • kfp.components.InputPath: The function receives the path to the input file, and can read the file’s contents from that path.
  • kfp.components.InputTextFile: The function receives an open text file object that it can read from.
  • kfp.components.InputBinaryFile: The function receives an open binary file object that it can read from.

To return a file as an output, use one of the following type annotations:

  • kfp.components.OutputPath: The function receives the path where it should write the output file.
  • kfp.components.OutputTextFile: The function receives an open text file object that it can write to.
  • kfp.components.OutputBinaryFile: The function receives an open binary file object that it can write to.
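Because these annotations only describe how Kubeflow Pipelines wires up the component, the function body works with ordinary file paths, so you can unit test the logic locally. The sketch below repeats the split_text_lines logic with plain str annotations standing in for comp.InputPath and comp.OutputPath, so it runs without the SDK:

```python
import os
import tempfile

def split_text_lines(source_path: str, odd_lines_path: str, even_lines_path: str):
    # Same logic as the component above, with plain path annotations so it
    # can run outside a pipeline.
    with open(source_path, 'r') as reader, \
         open(odd_lines_path, 'w') as odd_writer, \
         open(even_lines_path, 'w') as even_writer:
        while True:
            line = reader.readline()
            if line == "":
                break
            odd_writer.write(line)
            line = reader.readline()
            if line == "":
                break
            even_writer.write(line)

# Exercise the function with temporary files.
with tempfile.TemporaryDirectory() as workdir:
    source = os.path.join(workdir, 'source.txt')
    odd = os.path.join(workdir, 'odd.txt')
    even = os.path.join(workdir, 'even.txt')
    with open(source, 'w') as f:
        f.write('one\ntwo\nthree\n')
    split_text_lines(source, odd, even)
    odd_text = open(odd).read()    # 'one\nthree\n'
    even_text = open(even).read()  # 'two\n'
```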

Example Python function-based component

This section demonstrates how to build a Python function-based component that uses imports and helper functions, and produces multiple outputs.

  1. Define your function. This example function uses the numpy package to calculate the quotient and remainder for a given dividend and divisor in a helper function. In addition to the quotient and remainder, the function also returns metadata for visualization and two metrics.

```python
from typing import NamedTuple

def my_divmod(
    dividend: float,
    divisor: float) -> NamedTuple(
        'MyDivmodOutput',
        [
            ('quotient', float),
            ('remainder', float),
            ('mlpipeline_ui_metadata', 'UI_metadata'),
            ('mlpipeline_metrics', 'Metrics')
        ]):
    '''Divides two numbers and calculates the quotient and remainder'''

    # Import the numpy package inside the component function
    import numpy as np

    # Define a helper function
    def divmod_helper(dividend, divisor):
        return np.divmod(dividend, divisor)

    (quotient, remainder) = divmod_helper(dividend, divisor)

    import json

    # Export a sample tensorboard
    metadata = {
        'outputs': [{
            'type': 'tensorboard',
            'source': 'gs://ml-pipeline-dataset/tensorboard-train',
        }]
    }

    # Export two metrics
    metrics = {
        'metrics': [{
            'name': 'quotient',
            'numberValue': float(quotient),
        }, {
            'name': 'remainder',
            'numberValue': float(remainder),
        }]
    }

    from collections import namedtuple
    divmod_output = namedtuple(
        'MyDivmodOutput',
        ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
    return divmod_output(quotient, remainder, json.dumps(metadata),
                         json.dumps(metrics))
```
  2. Test your function by running it directly, or with unit tests.

```python
my_divmod(100, 7)
```

   This should return a result like the following:

```
MyDivmodOutput(quotient=14, remainder=2, mlpipeline_ui_metadata='{"outputs": [{"type": "tensorboard", "source": "gs://ml-pipeline-dataset/tensorboard-train"}]}', mlpipeline_metrics='{"metrics": [{"name": "quotient", "numberValue": 14.0}, {"name": "remainder", "numberValue": 2.0}]}')
```
  3. Use kfp.components.create_component_from_func to return a factory function that you can use to create kfp.dsl.ContainerOp class instances for your pipeline. This example also specifies the base container image to run this function in.

```python
divmod_op = comp.create_component_from_func(
    my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
```
  4. Define your pipeline. This example uses the divmod_op factory function and the add_op factory function from an earlier example.

```python
import kfp.dsl as dsl

@dsl.pipeline(
    name='Calculation pipeline',
    description='An example pipeline that performs arithmetic calculations.'
)
def calc_pipeline(
    a='1',
    b='7',
    c='17',
):
    # Passes a pipeline parameter and a constant value as operation arguments.
    # The add_op factory function returns a dsl.ContainerOp class instance.
    add_task = add_op(a, 4)

    # Passes the output of the add_task and a pipeline parameter as operation
    # arguments. For an operation with a single return value, the output
    # reference is accessed using `task.output` or
    # `task.outputs['output_name']`.
    divmod_task = divmod_op(add_task.output, b)

    # For an operation with multiple return values, output references are
    # accessed as `task.outputs['output_name']`.
    result_task = add_op(divmod_task.outputs['quotient'], c)
```
  5. Create and run your pipeline. Learn more about creating and running pipelines.

```python
# Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}

# Submit a pipeline run
client.create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
```


Last modified 20.04.2021: Apply Docs Restructure to `v1.2-branch` = update `v1.2-branch` to current `master` v2 (#2612) (4e2602bd)