Deploying Kubeflow Pipelines on a local cluster

How to deploy Kubeflow Pipelines locally with kind, K3s, and K3ai for testing purposes

This guide shows how to deploy Kubeflow Pipelines standalone on a local Kubernetes cluster using:

  • kind
  • K3s
  • K3s on Windows Subsystem for Linux (WSL)
  • K3ai [alpha]

Such deployment methods can be part of your local environment, using the supplied kustomize manifests for test purposes. This guide is an alternative to Deploying Kubeflow Pipelines (KFP).

Before you get started

kind

1. Installing kind

kind is a tool for running local Kubernetes clusters using Docker container nodes. kind was primarily designed for testing Kubernetes itself. It can also be used for local development or CI.

You can install and configure kind by following the official quick start.

To get started with kind:

On Linux:

Download and move the kind executable to a directory in your PATH by running the following commands:

  1. curl -Lo ./kind https://kind.sigs.k8s.io/dl/{KIND_VERSION}/kind-linux-amd64 && \
  2. chmod +x ./kind && \
  3. mv ./kind /{YOUR_KIND_DIRECTORY}/kind

Replace the following:

  • {KIND_VERSION}: the kind version; for example, v0.8.1 as of the date this guide was written
  • {YOUR_KIND_DIRECTORY}: your directory in PATH
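
For example, a minimal sketch assuming kind v0.8.1 (the version mentioned above) and ~/.local/bin as a directory already on your PATH; the final command only verifies that the binary is reachable:

  1. curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.8.1/kind-linux-amd64 && \
  2. chmod +x ./kind && \
  3. mv ./kind ~/.local/bin/kind
  4. kind version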

On macOS:

You can use Homebrew to install kind:

  1. brew install kind

On Windows:

  • PowerShell: Run these commands in an administrative PowerShell console to download and move the kind executable to a directory in your PATH:

    1. curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/{KIND_VERSION}/kind-windows-amd64
    2. Move-Item .\kind-windows-amd64.exe c:\{YOUR_KIND_DIRECTORY}\kind.exe

    Replace the following:

    • {KIND_VERSION}: the kind version; for example, v0.8.1 as of the date this guide was written (check the kind releases page for the latest stable binaries)
    • {YOUR_KIND_DIRECTORY}: your directory for kind in PATH

  • Alternatively, you can use Chocolatey (https://chocolatey.org/packages/kind):

    1. choco install kind

Note: kind uses containerd as its default container runtime; hence, you cannot use the standard Kubeflow Pipelines manifests, which assume the Docker runtime. The deployment steps later in this guide therefore use the platform-agnostic PNS manifests.

2. Creating a cluster on kind

Having installed kind, you can create a Kubernetes cluster on kind with this command:

  1. kind create cluster

This will bootstrap a Kubernetes cluster using a pre-built node image. You can find that image on Docker Hub under kindest/node. If you wish to build the node image yourself, you can use the kind build node-image command; see the official building image section for more details. To specify another image, use the --image flag.

By default, the cluster will be given the name kind. Use the --name flag to assign the cluster a different context name.
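
For example, a minimal sketch that creates a named cluster and checks it; the cluster name kfp and the node image tag below are illustrative assumptions (pick an image that matches your kind version). kind registers a kubectl context named kind-{cluster name}:

  1. kind create cluster --name kfp --image kindest/node:v1.19.1
  2. kubectl cluster-info --context kind-kfp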

K3s

1. Setting up a cluster on K3s

K3s is a fully compliant Kubernetes distribution with the following enhancements:

  • Packaged as a single binary.

  • Lightweight storage backend based on sqlite3 as the default storage mechanism; etcd3, MySQL, and Postgres are also available.

  • Wrapped in a simple launcher that handles a lot of the complexity of TLS and options.

  • Secure by default with reasonable defaults for lightweight environments.

  • Simple but powerful “batteries-included” features have been added, such as: a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller.

  • Operation of all Kubernetes control plane components is encapsulated in a single binary and process. This allows K3s to automate and manage complex cluster operations like distributing certificates.

  • External dependencies have been minimized (just a modern kernel and cgroup mounts needed). K3s packages required dependencies, including:

    • containerd
    • Flannel
    • CoreDNS
    • CNI
    • Host utilities (iptables, socat, etc.)
    • Ingress controller (Traefik)
    • Embedded service load balancer
    • Embedded network policy controller

You can find the official K3s installation script, which installs K3s as a service on systemd- or openrc-based systems, on the official K3s website.

To install K3s using that method, run the following command:

  1. curl -sfL https://get.k3s.io | sh -
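
After the script finishes, you can optionally sanity-check the installation; the commands below assume a systemd-based system, where the script registers a k3s service:

  1. k3s --version
  2. sudo systemctl status k3s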

2. Creating a cluster on K3s

  1. To create a Kubernetes cluster on K3s, use the following command:

    1. sudo k3s server &

    This will bootstrap a Kubernetes cluster; the kubeconfig is written to /etc/rancher/k3s/k3s.yaml (see the sketch after this list for using it with your own kubectl).
  2. (Optional) Check your cluster:

    1. sudo k3s kubectl get node

    K3s embeds the popular kubectl command directly in the binary, so you may immediately interact with the cluster through it.

  3. (Optional) To add a worker, run the following command on a different node. {YOUR_NODE_TOKEN} comes from /var/lib/rancher/k3s/server/node-token on your server:

    1. sudo k3s agent --server https://myserver:6443 --token {YOUR_NODE_TOKEN}
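
If you prefer to use your own kubectl rather than the embedded k3s kubectl, a minimal sketch is to point it at the kubeconfig written by K3s; copying the file to a user-owned location, as below, is a convenience assumption rather than a K3s requirement:

  1. mkdir -p ~/.kube
  2. sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/k3s-config
  3. sudo chown $(id -u):$(id -g) ~/.kube/k3s-config
  4. export KUBECONFIG=~/.kube/k3s-config
  5. kubectl get nodes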

K3s on Windows Subsystem for Linux (WSL)

1. Setting up a cluster on K3s on Windows Subsystem for Linux (WSL)

The Windows Subsystem for Linux (WSL) lets developers run a GNU/Linux environment—including most command-line tools, utilities, and applications—directly on Windows, unmodified, without the overhead of a traditional virtual machine or dual-boot setup.

The full instructions for installing WSL can be found on the official Windows site.

The following steps summarize what you’ll need to set up WSL and then K3s on WSL.

  1. Install WSL by following the official docs.

  2. As per the official instructions, update WSL and download your preferred distribution, as shown in the sketch below.
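
A minimal sketch of these two steps from an administrative PowerShell prompt, assuming a recent Windows build that supports the simplified WSL commands and Ubuntu as the example distribution:

  1. wsl --update
  2. wsl --install -d Ubuntu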

2. Creating a cluster on K3s on WSL

Below are the steps to create a cluster on K3s in WSL:

  1. Download the K3s binary from https://github.com/rancher/k3s/releases/latest. Then, inside the directory where you downloaded the K3s binary, run this command to add execute permission to the K3s binary:

    1. chmod +x k3s

    Note: You can’t install K3s using the curl script because there is no supervisor (systemd or openrc) in WSL.

  2. To create a Kubernetes cluster on K3s on WSL, start the K3s server:

    1. sudo ./k3s server

    This will bootstrap a Kubernetes cluster, but you cannot yet access the cluster itself from your Windows machine.

3. Setting up access to WSL instance

To set up access to your WSL instance:

  1. Copy /etc/rancher/k3s/k3s.yaml from WSL to $HOME/.kube/config.

  2. Edit the copied file, changing the server URL from https://localhost:6443 to the IP of your WSL instance (shown by ip addr show dev eth0); for example, https://192.168.170.170:6443.

  3. Run kubectl in a Windows terminal. If you don’t have kubectl installed, follow the official Kubernetes on Windows instructions.
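
A sketch of those steps, run from inside the WSL instance; the {YOUR_WINDOWS_USER} and {YOUR_WSL_IP} values below are placeholders, and the final edit can also be done by hand in any text editor:

  1. # Show the WSL instance IP (the "inet" address on eth0)
  2. ip addr show dev eth0
  3. # Copy the kubeconfig to the Windows user profile
  4. mkdir -p /mnt/c/Users/{YOUR_WINDOWS_USER}/.kube
  5. sudo cp /etc/rancher/k3s/k3s.yaml /mnt/c/Users/{YOUR_WINDOWS_USER}/.kube/config
  6. # Point the server URL at the WSL IP found above
  7. sudo sed -i 's/localhost/{YOUR_WSL_IP}/' /mnt/c/Users/{YOUR_WINDOWS_USER}/.kube/config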

K3ai [alpha]

K3ai is a lightweight “infrastructure in a box” designed specifically to install and configure AI tools and platforms on portable hardware, such as laptops and edge devices. This enables users to run quick experiments with Kubeflow on a local cluster.

K3ai’s main goal is to provide a quick way to install Kubernetes (K3s-based) and Kubeflow Pipelines with NVIDIA GPU support and TensorFlow Serving with just one line. (For Kubeflow and other component support, check K3ai’s website for updates.) To install Kubeflow Pipelines using K3ai, run the following commands:

  • With CPU-only support:

    1. curl -sfL https://get.k3ai.in | bash -s -- --cpu --plugin_kfpipelines

  • With GPU support:

    1. curl -sfL https://get.k3ai.in | bash -s -- --gpu --plugin_kfpipelines

For more information about K3ai, refer to the official documentation.

Deploying Kubeflow Pipelines

The installation process for Kubeflow Pipelines is the same for all three environments covered in this guide: kind, K3s, and K3ai.

Note: Process Namespace Sharing (PNS) is not yet mature in Argo. For more information, see the Argo Executors documentation, and reference “pns executors” in any issue you may come across when using PNS.

  1. To deploy the Kubeflow Pipelines, run the following commands:

    1. # Install the Kubeflow Pipelines standalone (platform-agnostic PNS) manifests at the chosen release
    2. export PIPELINE_VERSION=1.4.1
    3. kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"
    4. kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io
    5. kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/platform-agnostic-pns?ref=$PIPELINE_VERSION"

    The Kubeflow Pipelines deployment may take several minutes to complete; see the sketch after this list for a way to monitor its progress.

  2. Verify that the Kubeflow Pipelines UI is accessible by port-forwarding:

    1. kubectl port-forward -n kubeflow svc/ml-pipeline-ui 8080:80

    Then, open the Kubeflow Pipelines UI at http://localhost:8080/ or, if you are using kind or K3s within a virtual machine, at http://{YOUR_VM_IP_ADDRESS}:8080/.

    Note that K3ai will automatically print the URL for the web UI at the end of the installation process.

    Note: kubectl apply -k accepts local paths and paths that are formatted as hashicorp/go-getter URLs. While the paths in the preceding commands look like URLs, they are not valid URLs.
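
While you wait for the deployment from step 1 to finish, a minimal sketch for monitoring its progress (all Kubeflow Pipelines components run in the kubeflow namespace):

  1. kubectl get pods -n kubeflow
  2. kubectl wait --for=condition=Ready --timeout=600s -n kubeflow pod --all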

Uninstalling Kubeflow Pipelines

Below are the steps to remove Kubeflow Pipelines on kind, K3s, or K3ai:

  • To uninstall Kubeflow Pipelines using your manifest file, run the following command, replacing {YOUR_MANIFEST_FILE} with the name of your manifest file:

    1. kubectl delete -k {YOUR_MANIFEST_FILE}
  • To uninstall Kubeflow Pipelines using manifests from Kubeflow Pipelines’s GitHub repository, run these commands:

    1. export PIPELINE_VERSION=1.4.1
    2. kubectl delete -k "github.com/kubeflow/pipelines/manifests/kustomize/env/platform-agnostic-pns?ref=$PIPELINE_VERSION"
    3. kubectl delete -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"
  • To uninstall Kubeflow Pipelines using manifests from your local repository or file system, run the following commands:

    1. kubectl delete -k manifests/kustomize/env/platform-agnostic-pns
    2. kubectl delete -k manifests/kustomize/cluster-scoped-resources
