Customizing Kubeflow on GKE

Tailoring a GKE deployment of Kubeflow

Out of date

This guide contains outdated information pertaining to Kubeflow 1.0. This guide needs to be updated for Kubeflow 1.1.

This guide describes how to customize your deployment of Kubeflow on Google Kubernetes Engine (GKE) on Google Cloud.

Customizing Kubeflow before deployment

The Kubeflow deployment process is divided into two steps, build and apply, so that you can modify your configuration before deploying your Kubeflow cluster.

Follow the guide to deploying Kubeflow on Google Cloud. When you reach the setup and deploy step, skip the kfctl apply command and run the kfctl build command instead, as described in that step. Now you can edit the configuration files before deploying Kubeflow.
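
For reference, assuming the ${KF_DIR} and ${CONFIG_FILE} variables described in the next section are already set, the build step looks roughly like this:

  cd ${KF_DIR}
  kfctl build -V -f ${CONFIG_FILE}

kfctl build generates the configuration under ${KF_DIR} (including the gcp_config and kustomize directories referenced below) without creating any resources, so you can edit those files and then run kfctl apply when you are ready.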

Customizing an existing deployment

You can also customize an existing Kubeflow deployment. In that case, this guide assumes that you have already followed the guide to deploying Kubeflow on Google Cloud and have deployed Kubeflow to a GKE cluster.

Before you start

This guide assumes the following settings:

  • The ${KF_DIR} environment variable contains the path to your Kubeflow application directory, which holds your Kubeflow configuration files. For example, /opt/my-kubeflow/.

    export KF_DIR=<path to your Kubeflow application directory>
  • The ${CONFIG_FILE} environment variable contains the path to your Kubeflow configuration file.

    export CONFIG_FILE=${KF_DIR}/kfctl_gcp_iap.v1.0.2.yaml
  • The ${KF_NAME} environment variable contains the name of your Kubeflow deployment. You can find the name in your ${CONFIG_FILE} configuration file, as the value for the metadata.name key.

    export KF_NAME=<the name of your Kubeflow deployment>
  • The ${PROJECT} environment variable contains the ID of your Google Cloud project. You can find the project ID in your ${CONFIG_FILE} configuration file, as the value for the project key.

    export PROJECT=<your Google Cloud project ID>
  • For further background about the above settings, see the guide to deploying Kubeflow with the CLI.

Customizing Google Cloud resources

To customize Google Cloud resources, such as your Kubernetes Engine cluster, you can modify the Deployment Manager configuration settings in ${KF_DIR}/gcp_config.
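
For orientation, a quick way to see what is there (the exact file list may vary by Kubeflow version):

  ls ${KF_DIR}/gcp_config

This directory holds the Deployment Manager configs, including cluster-kubeflow.yaml and cluster.jinja, which are referenced in the rest of this guide.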

After modifying your existing configuration, run the following command to apply the changes:

  cd ${KF_DIR}
  kfctl apply -V -f ${CONFIG_FILE}

Alternatively, you can use Deployment Manager directly:

  cd ${KF_DIR}/gcp_config
  gcloud deployment-manager --project=${PROJECT} deployments update ${KF_NAME} --config=cluster-kubeflow.yaml

Some changes (such as the VM service account for Kubernetes Engine) can only be set at creation time; in this case you need to tear down your deployment before recreating it:

  cd ${KF_DIR}
  kfctl delete -f ${CONFIG_FILE}
  kfctl apply -V -f ${CONFIG_FILE}

Customizing Kubernetes resources

You can use kustomize to customize Kubeflow. Make sure that you have the minimum required version of kustomize: 2.0.3 or later. For more information about kustomize in Kubeflow, see how Kubeflow uses kustomize.
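
If you have the standalone kustomize CLI installed, you can check which version you are running with:

  kustomize version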

To customize the Kubernetes resources running within the cluster, you can modify the kustomize manifests in ${KF_DIR}/kustomize.

For example, to modify settings for the Jupyter web app:

  1. Open ${KF_DIR}/kustomize/jupyter-web-app.yaml in a text editor.

  2. Find and replace the parameter values:

    apiVersion: v1
    data:
      ROK_SECRET_NAME: secret-rok-{username}
      UI: default
      clusterDomain: cluster.local
      policy: Always
      prefix: jupyter
    kind: ConfigMap
    metadata:
      labels:
        app: jupyter-web-app
        kustomize.component: jupyter-web-app
      name: jupyter-web-app-parameters
      namespace: kubeflow
  3. Redeploy Kubeflow using kfctl:

    cd ${KF_DIR}
    kfctl apply -V -f ${CONFIG_FILE}

    Or use kubectl directly:

    cd ${KF_DIR}/kustomize
    kubectl apply -f jupyter-web-app.yaml

Common customizations

Add users to Kubeflow

You must grant each user the minimal permission scope that allows them to connect to the Kubernetes cluster.

For Google Cloud, you should grant the following Cloud Identity and Access Management (IAM) roles.

In the following commands, replace [PROJECT] with your Google Cloud project and replace [EMAIL] with the user’s email address:

  • To access the Kubernetes cluster, the user needs the Kubernetes Engine Cluster Viewer role:

    gcloud projects add-iam-policy-binding [PROJECT] --member=user:[EMAIL] --role=roles/container.clusterViewer
  • To access the Kubeflow UI through IAP, the user needs the IAP-secured Web App User role:

    gcloud projects add-iam-policy-binding [PROJECT] --member=user:[EMAIL] --role=roles/iap.httpsResourceAccessor

    Note: you must grant the IAP-secured Web App User role even if the user is already an owner or editor of the project. This role is not implied by the Project Owner or Project Editor roles.

  • To be able to run gcloud container clusters get-credentials and see logs in Cloud Logging (formerly Stackdriver), the user needs viewer access on the project:

    gcloud projects add-iam-policy-binding [PROJECT] --member=user:[EMAIL] --role=roles/viewer

Alternatively, you can grant these roles on the IAM page in the Cloud Console. Make sure that you are in the same project as your Kubeflow deployment.

Add GPU nodes to your cluster

To add GPU accelerators to your Kubeflow cluster, you have the following options:

  • Pick a Google Cloud zone that provides NVIDIA Tesla K80 accelerators (nvidia-tesla-k80).
  • Disable node-autoprovisioning in your Kubeflow cluster.
  • Change your node-autoprovisioning configuration.

To see which accelerators are available in each zone, run the following command:

  gcloud compute accelerator-types list

To disable node-autoprovisioning, run kfctl build as described above. Then edit ${KF_DIR}/gcp_config/cluster-kubeflow.yaml and set enabled to false:

  ...
  gpu-type: nvidia-tesla-k80
  autoprovisioning-config:
    enabled: false
  ...

You must also set gpu-pool-initialNodeCount to the initial number of nodes that you want in the GPU pool.
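
As an illustrative sketch (the exact placement within cluster-kubeflow.yaml depends on your file; the value shown here is only an example):

  ...
  gpu-pool-initialNodeCount: 1
  ...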

Add a GPU node pool to an existing Kubeflow cluster

You can add a GPU node pool to your Kubeflow cluster using the following command:

  export GPU_POOL_NAME=<name of the new gpu pool>
  gcloud container node-pools create ${GPU_POOL_NAME} \
    --accelerator type=nvidia-tesla-k80,count=1 \
    --zone us-central1-a --cluster ${KF_NAME} \
    --num-nodes=1 --machine-type=n1-standard-4 --min-nodes=0 --max-nodes=5 --enable-autoscaling

After adding GPU nodes to your cluster, you need to install NVIDIA’s device drivers on the nodes. Google provides a DaemonSet that automatically installs the drivers for you.

To deploy the installation DaemonSet, run the following command:

  kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
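
Once the DaemonSet has finished installing the drivers, one way to confirm that the GPUs are visible to Kubernetes is to check the nodes’ allocatable resources, for example:

  kubectl describe nodes | grep -i "nvidia.com/gpu"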

Add Cloud TPUs to your cluster

Set enable_tpu: true in ${KF_DIR}/gcp_config/cluster-kubeflow.yaml.
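
As a minimal sketch, the property might appear in cluster-kubeflow.yaml like this (surrounding structure omitted):

  ...
  enable_tpu: true
  ...

As with the other Deployment Manager changes described above, re-run kfctl apply (or the gcloud deployment-manager update command) afterwards to pick up the change.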

Specify a minimum CPU

Certain instruction sets or hardware features are only available on specific CPUs, so to make sure that your cluster uses the appropriate hardware you need to set a minimum CPU platform.

In brief, inside gcp_config/cluster.jinja, change the minCpuPlatform property for the CPU node pool; for example, change Intel Broadwell to Intel Skylake. Setting a minimum CPU platform must happen at cluster or node pool creation time; it cannot be applied to an existing cluster or node pool.
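
A hypothetical sketch of the relevant part of the node pool definition in cluster.jinja after the change (the surrounding structure and values in your file will differ):

  config:
    machineType: n1-standard-8
    minCpuPlatform: "Intel Skylake"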

For full instructions, see the GKE documentation on specifying a minimum CPU platform.

Add VMs with more CPUs or RAM

  • Change the machineType for the relevant node pool (see the sketch after this list).
  • There are two node pools defined in the Deployment Manager configuration: one for CPU-only machines and one for GPU machines.
  • When making changes to the node pools, you also need to bump the pool-version in cluster-kubeflow.yaml before you update the deployment.
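
As a rough, hypothetical sketch of the two edits (property names and values here are illustrative; check your own cluster.jinja and cluster-kubeflow.yaml for the exact structure):

  # In gcp_config/cluster.jinja: choose a larger machine type for the node pool.
  config:
    machineType: n1-standard-16

  # In gcp_config/cluster-kubeflow.yaml: bump the pool version so that the update creates new node pools.
  pool-version: v2

After editing, apply the change with kfctl apply or the gcloud deployment-manager update command shown earlier.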

More customizations

Refer to the navigation panel on the left of these docs for more customizations, including using your own domain, setting up Cloud Filestore, and more.
