Build Components and Pipelines

Building your own component and adding it to a pipeline

This page describes how to create a component for Kubeflow Pipelines and how to combine components into a pipeline. For an easier start, experiment with the Kubeflow Pipelines samples.

Overview of pipelines and components

A pipeline is a description of a machine learning (ML) workflow, including all of the components of the workflow and how they work together. The pipeline includes the definition of the inputs (parameters) required to run the pipeline and the inputs and outputs of each component.

A pipeline component is an implementation of a pipeline task. A component represents a step in the workflow. Each component takes one or more inputs and may produce one or more outputs. A component consists of an interface (inputs/outputs), the implementation (a Docker container image and command-line arguments), and metadata (name, description).

For more information, see the conceptual guides to pipelines and components.

Before you start

Set up your environment:

The examples on this page come from the XGBoost Spark pipeline sample in the Kubeflow Pipelines sample repository.

Create a container image for each component

This section assumes that you have already created a program to perform the task required in a particular step of your ML workflow. For example, if the task is to train an ML model, then you must have a program that does the training, such as the program that trains an XGBoost model.

Create a Docker container image that packages your program. See the Docker file for the example XGBoost model training program mentioned above. You can also examine the generic build_image.sh script in the Kubeflow Pipelines repository of reusable components.

Your component can create outputs that the downstream components can use as inputs. Each output must be a string, and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as /output.txt. In the Python function that defines your pipeline (see below) you can specify how to map the content of local files to component outputs.
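For instance, a minimal sketch of a training program's final step might look like this; the model path is a placeholder value that your real training code would produce:

    # Minimal sketch: write the component's single string output to a local
    # text file so the pipeline can map it to a named output via file_outputs.
    model_path = 'gs://your-bucket/models/xgboost-run-001'  # placeholder value
    with open('/output.txt', 'w') as f:
        f.write(model_path)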

Create a Python function to wrap your component

Define a Python function to describe the interactions with the Docker container image that contains your pipeline component. For example, the following Python function describes a component that trains an XGBoost model:

    import kfp.dsl as dsl  # Kubeflow Pipelines DSL


    def dataproc_train_op(
        project,
        region,
        cluster_name,
        train_data,
        eval_data,
        target,
        analysis,
        workers,
        rounds,
        output,
        is_classification=True
    ):
        # Pick the training configuration that matches the problem type.
        if is_classification:
            config = 'gs://ml-pipeline-playground/trainconfcla.json'
        else:
            config = 'gs://ml-pipeline-playground/trainconfreg.json'

        return dsl.ContainerOp(
            name='Dataproc - Train XGBoost model',
            image='gcr.io/ml-pipeline/ml-pipeline-dataproc-train:ac833a084b32324b56ca56e9109e05cde02816a4',
            arguments=[
                '--project', project,
                '--region', region,
                '--cluster', cluster_name,
                '--train', train_data,
                '--eval', eval_data,
                '--analysis', analysis,
                '--target', target,
                '--package', 'gs://ml-pipeline-playground/xgboost4j-example-0.8-SNAPSHOT-jar-with-dependencies.jar',
                '--workers', workers,
                '--rounds', rounds,
                '--conf', config,
                '--output', output,
            ],
            file_outputs={
                'output': '/output.txt',
            }
        )

The function above is from the XGBoost Spark pipeline sample. The function must return a dsl.ContainerOp.

Note:

  • Each component must inherit from dsl.ContainerOp.
  • Values in the arguments list that’s used by the dsl.ContainerOp constructor above must be either Python scalar types (such as str and int) or dsl.PipelineParam types. Each dsl.PipelineParam represents a parameter whose value is usually only known at run time. The value is either provided by the user at pipeline run time or received as an output from an upstream component.
  • Although the value of each dsl.PipelineParam is only available at run time, you can still use the parameters inline in the arguments by using %s variable substitution. At run time the argument contains the value of the parameter (see the sketch after this list).
  • file_outputs is a mapping between labels and local file paths. In the above example, the content of /output.txt contains the string output of the component. To reference the output in code:
    op = dataproc_train_op(...)
    op.outputs['label']

If there is only one output then you can also use op.output.
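As an illustration of both points, the sketch below defines a hypothetical downstream component that consumes an upstream output and uses %s substitution on a dsl.PipelineParam. The image, flags, and file paths are assumptions, not part of the sample.

    # Illustrative only: the evaluator image, flags, and paths are hypothetical.
    def evaluate_op(model, output):
        return dsl.ContainerOp(
            name='Evaluate model',
            image='gcr.io/your-project/evaluator:latest',  # hypothetical image
            arguments=[
                '--model', model,                       # e.g. train_op.outputs['output']
                '--report', '%s/evaluation' % output,   # %s substitution of a PipelineParam
            ],
            file_outputs={'metrics': '/metrics.txt'},
        )

Inside a pipeline function you would call it as evaluate_op(op.outputs['output'], output), so the evaluator runs after the training step that produces that output.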

Define your pipeline as a Python function

You must describe each pipeline as a Python function. For example:

    @dsl.pipeline(
        name='XGBoost Trainer',
        description='A trainer that does end-to-end distributed training for XGBoost models.'
    )
    def xgb_train_pipeline(
        output,
        project,
        region='us-central1',
        train_data='gs://ml-pipeline-playground/sfpd/train.csv',
        eval_data='gs://ml-pipeline-playground/sfpd/eval.csv',
        schema='gs://ml-pipeline-playground/sfpd/schema.json',
        target='resolution',
        rounds=200,
        workers=2,
        true_label='ACTION',
    ):
        # The pipeline body creates the component ops and wires their inputs
        # and outputs together; see the full sample for the complete steps.
        pass

Note:

  • @dsl.pipeline is a required decorator, and it must include the name and description properties.
  • Input arguments show up as pipeline parameters on the Kubeflow Pipelines UI. As a Python rule, positional arguments appear first, followed by keyword arguments.
  • Each function argument is of type dsl.PipelineParam. The default values should all be of that type. The default values show up in the Kubeflow Pipelines UI but the user can override them.

See the full code in the XGBoost Spark pipeline sample.
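To give a sense of how component ops are wired inside a pipeline function, here is a condensed, hand-written sketch rather than the sample itself; the cluster name and analysis location are assumed values that the real sample obtains from earlier pipeline steps:

    # Condensed, illustrative sketch of a pipeline body; the real sample runs
    # several more steps (cluster creation, analysis, prediction, and so on).
    @dsl.pipeline(
        name='XGBoost Trainer (sketch)',
        description='Illustrative wiring of component ops inside a pipeline.'
    )
    def xgb_train_sketch(
        output,
        project,
        region='us-central1',
        train_data='gs://ml-pipeline-playground/sfpd/train.csv',
        eval_data='gs://ml-pipeline-playground/sfpd/eval.csv',
        target='resolution',
        rounds=200,
        workers=2,
    ):
        cluster_name = 'xgb-sketch-cluster'  # assumed; the real sample creates a cluster
        analysis = '%s/analysis' % output    # assumed analysis output location

        # The component function defined earlier; its file_outputs become
        # train_op.outputs, available to any downstream op.
        train_op = dataproc_train_op(project, region, cluster_name, train_data,
                                     eval_data, target, analysis, workers,
                                     rounds, '%s/model' % output)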

Compile the pipeline

After defining the pipeline in Python as described above, you must compile the pipeline to an intermediate representation before you can submit it to the Kubeflow Pipelines service. The intermediate representation is a workflow specification in the form of a YAML file compressed into a .tar.gz file.

Use the dsl-compile command to compile your pipeline:

    dsl-compile --py [path/to/python/file] --output [path/to/output/tar.gz]
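If you prefer to compile from Python rather than the command line, the SDK's compiler produces the same kind of package. This is a sketch that assumes the kfp package is installed and xgb_train_pipeline is importable:

    # Sketch: compile the pipeline function into a .tar.gz package from Python.
    import kfp.compiler as compiler

    compiler.Compiler().compile(xgb_train_pipeline, 'xgb_train_pipeline.tar.gz')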

Deploy the pipeline

Upload the generated .tar.gz file through the Kubeflow Pipelines UI. See the guide to getting started with the UI.
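As an alternative to the UI, the compiled package can also be uploaded with the SDK client. This sketch assumes a reachable Kubeflow Pipelines API endpoint; the host value is a placeholder:

    # Sketch: upload the compiled package with the Kubeflow Pipelines SDK client.
    import kfp

    client = kfp.Client(host='http://localhost:8080')  # placeholder endpoint
    client.upload_pipeline('xgb_train_pipeline.tar.gz',
                           pipeline_name='XGBoost Trainer')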

Next steps
