Getting Started with Katib

How to set up Katib and perform hyperparameter tuning

This guide shows how to get started with Katib and run a few examples using the command line and the Katib user interface (UI) to perform hyperparameter tuning.

For an overview of the concepts around Katib and hyperparameter tuning, check the introduction to Katib.

Katib setup

Let’s set up and configure Katib on your Kubernetes cluster with Kubeflow.

Installing Katib

You can skip this step if you have already installed Kubeflow. Your Kubeflow deployment includes Katib.

To install Katib as part of Kubeflow, follow the Kubeflow installation guide.

If you want to install Katib separately from Kubeflow, or to get a later version of Katib, run the following commands to install Katib directly from its repository on GitHub and deploy Katib to your cluster:

  git clone https://github.com/kubeflow/katib
  cd katib
  make deploy

Note: You should have kustomize version >= 3.2 to install Katib.

Setting up persistent volumes

If you deployed Katib using the commands above, you can skip this step. That deployment creates the PersistentVolumeClaim (PVC) and PersistentVolume (PV) on your cluster.

You can skip this step if you’re using Kubeflow on Google Kubernetes Engine (GKE) or if your Kubernetes cluster includes a StorageClass for dynamic volume provisioning. For more information, check the Kubernetes documentation on dynamic provisioning and PV.

If you’re using Katib outside GKE and your cluster doesn’t include a StorageClass for dynamic volume provisioning, you must create a PV to bind to the PVC required by Katib.

After deploying Katib to your cluster, run the following command to create the PV:

  kubectl apply -f https://raw.githubusercontent.com/kubeflow/katib/master/manifests/v1beta1/components/mysql/pv.yaml

The kubectl apply command above uses a YAML file, pv.yaml, that defines the properties of the PV.
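If you need to create a PV by hand instead (for example, with a different path or capacity), a minimal manifest could look like the sketch below. The name, capacity, access mode, and host path here are illustrative assumptions, not the values from the repository's pv.yaml; check that file for the values Katib actually expects.

```yaml
# Hypothetical PV sketch for the Katib MySQL PVC; all values are illustrative.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: katib-mysql
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/katib/mysql
```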

Accessing the Katib UI

You can use the Katib user interface (UI) to submit experiments and to monitor your results. The Katib home page within Kubeflow looks like this:

The Katib home page within the Kubeflow UI

If you installed Katib as part of Kubeflow, you can access the Katib UI from the Kubeflow UI:

  1. Open the Kubeflow UI. Check the guide to accessing the central dashboard.
  2. Click Katib in the left-hand menu.

Alternatively, you can set port-forwarding for the Katib UI service:

  kubectl port-forward svc/katib-ui -n kubeflow 8080:80

Then you can access the Katib UI at this URL:

  http://localhost:8080/katib/

Check this guide if you want to contribute to the Katib UI.

Examples

This section introduces some examples that you can run to try Katib.

Example using random algorithm

You can create an experiment for Katib by defining the experiment in a YAML configuration file. The YAML file defines the configurations for the experiment, including the hyperparameter feasible space, optimization parameter, optimization goal, suggestion algorithm, and so on.
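Sketched minimally, such a configuration has this shape. The field names follow the v1beta1 Experiment API; the values here are illustrative, not the full random example:

```yaml
# Illustrative skeleton of a Katib v1beta1 Experiment; values are examples only.
apiVersion: kubeflow.org/v1beta1
kind: Experiment
metadata:
  name: my-experiment
spec:
  objective:
    type: maximize
    goal: 0.99
    objectiveMetricName: Validation-accuracy
  algorithm:
    algorithmName: random
  maxTrialCount: 12
  parallelTrialCount: 3
  parameters:
  - name: lr
    parameterType: double
    feasibleSpace:
      min: "0.01"
      max: "0.03"
  trialTemplate:
    # ... trial spec and trial parameters go here ...
```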

This example uses the YAML file for the random algorithm example.

The random algorithm example uses an MXNet neural network to train an image-classification model on the MNIST dataset. You can check the training container source code here. The experiment runs twelve training jobs with various hyperparameters and saves the results.

If you installed Katib as part of Kubeflow, you can’t run experiments in the kubeflow namespace. Run the following commands to change the namespace and launch an experiment using the random algorithm example:

  1. Download the example:

    curl https://raw.githubusercontent.com/kubeflow/katib/master/examples/v1beta1/random-example.yaml --output random-example.yaml
  2. Edit random-example.yaml and change the following line to use your Kubeflow user profile namespace:

    namespace: kubeflow
  3. (Optional) Note: Katib’s experiments don’t work with Istio sidecar injection. If you installed Kubeflow with the Istio config, you have to disable sidecar injection. To do that, specify the annotation sidecar.istio.io/inject: "false" in your experiment’s trial template.

    For the provided random example with the Kubernetes Job trial template, the annotation should be under .trialSpec.spec.template.metadata.annotations. For Kubeflow TFJob or other training operators, check here for how to set the annotation.

  4. Deploy the example:

    kubectl apply -f random-example.yaml
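If you prefer to script the namespace edit in step 2 instead of editing the file by hand, a sed one-liner can rewrite it. This assumes GNU sed and that the file contains a single namespace: kubeflow line; the demo below runs on a stand-in file so you can try it safely:

```shell
# Stand-in demo: with the real download, point sed at random-example.yaml instead.
# Assumes GNU sed and a single "namespace: kubeflow" line in the file.
printf 'metadata:\n  namespace: kubeflow\n' > /tmp/random-example.yaml
sed -i 's/namespace: kubeflow/namespace: my-profile/' /tmp/random-example.yaml
grep 'namespace:' /tmp/random-example.yaml  # prints "  namespace: my-profile"
```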

This example embeds the hyperparameters as arguments. You can embed hyperparameters in other ways (for example, as environment variables) by using the template defined in the trialTemplate.trialSpec section of the YAML file. The template uses the unstructured format and substitutes parameters from trialTemplate.trialParameters. Follow the trial template guide to learn more.
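The substitution itself is simple templating: each ${trialParameters.<name>} placeholder in the trial spec is replaced by the value the suggestion algorithm assigned. The toy sed sketch below illustrates the idea; it is not Katib's actual implementation:

```shell
# Toy illustration of how ${trialParameters.*} placeholders get filled in;
# Katib performs this substitution internally, this is not its real code.
template='--lr=${trialParameters.learningRate} --optimizer=${trialParameters.optimizer}'
rendered=$(printf '%s' "$template" \
  | sed -e 's/\${trialParameters\.learningRate}/0.02/' \
        -e 's/\${trialParameters\.optimizer}/sgd/')
echo "$rendered"  # prints "--lr=0.02 --optimizer=sgd"
```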

This example randomly generates the following hyperparameters:

  • --lr: Learning rate. Type: double.
  • --num-layers: Number of layers in the neural network. Type: integer.
  • --optimizer: Optimization method to change the neural network attributes. Type: categorical.

Check the experiment status:

  kubectl -n <YOUR_USER_PROFILE_NAMESPACE> get experiment random-example -o yaml

The output of the above command should look similar to this:

  apiVersion: kubeflow.org/v1beta1
  kind: Experiment
  metadata:
    creationTimestamp: "2020-10-23T21:27:53Z"
    finalizers:
    - update-prometheus-metrics
    generation: 1
    name: random-example
    namespace: "<YOUR_USER_PROFILE_NAMESPACE>"
    resourceVersion: "147081981"
    selfLink: /apis/kubeflow.org/v1beta1/namespaces/<YOUR_USER_PROFILE_NAMESPACE>/experiments/random-example
    uid: fb3776e8-0f83-4783-88b6-80d06867ca0b
  spec:
    algorithm:
      algorithmName: random
    maxFailedTrialCount: 3
    maxTrialCount: 12
    metricsCollectorSpec:
      collector:
        kind: StdOut
    objective:
      additionalMetricNames:
      - Train-accuracy
      goal: 0.99
      metricStrategies:
      - name: Validation-accuracy
        value: max
      - name: Train-accuracy
        value: max
      objectiveMetricName: Validation-accuracy
      type: maximize
    parallelTrialCount: 3
    parameters:
    - feasibleSpace:
        max: "0.03"
        min: "0.01"
      name: lr
      parameterType: double
    - feasibleSpace:
        max: "5"
        min: "2"
      name: num-layers
      parameterType: int
    - feasibleSpace:
        list:
        - sgd
        - adam
        - ftrl
      name: optimizer
      parameterType: categorical
    resumePolicy: LongRunning
    trialTemplate:
      failureCondition: status.conditions.#(type=="Failed")#|#(status=="True")#
      primaryContainerName: training-container
      successCondition: status.conditions.#(type=="Complete")#|#(status=="True")#
      trialParameters:
      - description: Learning rate for the training model
        name: learningRate
        reference: lr
      - description: Number of training model layers
        name: numberLayers
        reference: num-layers
      - description: Training model optimizer (sdg, adam or ftrl)
        name: optimizer
        reference: optimizer
      trialSpec:
        apiVersion: batch/v1
        kind: Job
        spec:
          template:
            metadata:
              annotations:
                sidecar.istio.io/inject: "false"
            spec:
              containers:
              - command:
                - python3
                - /opt/mxnet-mnist/mnist.py
                - --batch-size=64
                - --lr=${trialParameters.learningRate}
                - --num-layers=${trialParameters.numberLayers}
                - --optimizer=${trialParameters.optimizer}
                image: docker.io/kubeflowkatib/mxnet-mnist:v1beta1-e294a90
                name: training-container
              restartPolicy: Never
  status:
    conditions:
    - lastTransitionTime: "2020-10-23T21:27:53Z"
      lastUpdateTime: "2020-10-23T21:27:53Z"
      message: Experiment is created
      reason: ExperimentCreated
      status: "True"
      type: Created
    - lastTransitionTime: "2020-10-23T21:28:13Z"
      lastUpdateTime: "2020-10-23T21:28:13Z"
      message: Experiment is running
      reason: ExperimentRunning
      status: "True"
      type: Running
    currentOptimalTrial:
      bestTrialName: random-example-smpc6ws2
      observation:
        metrics:
        - latest: "0.993170"
          max: "0.993170"
          min: "0.920293"
          name: Train-accuracy
        - latest: "0.978006"
          max: "0.978603"
          min: "0.959295"
          name: Validation-accuracy
      parameterAssignments:
      - name: lr
        value: "0.02889324678979306"
      - name: num-layers
        value: "5"
      - name: optimizer
        value: sgd
    runningTrialList:
    - random-example-26d5wzn2
    - random-example-98fpd29m
    - random-example-x2vjlzzv
    startTime: "2020-10-23T21:27:53Z"
    succeededTrialList:
    - random-example-n9c4j4cv
    - random-example-qfb68jpb
    - random-example-s96tq48v
    - random-example-smpc6ws2
    trials: 7
    trialsRunning: 3
    trialsSucceeded: 4

When the last value in status.conditions.type is Succeeded, the experiment is complete. You can check information about the best trial in status.currentOptimalTrial.

  • .currentOptimalTrial.bestTrialName is the trial name.

  • .currentOptimalTrial.observation.metrics contains the max, min, and latest recorded values for the objective and additional metrics.

  • .currentOptimalTrial.parameterAssignments is the corresponding hyperparameter set.

In addition, status shows the experiment’s trials with their current status.
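If you only need the best-trial fields rather than the full YAML dump, kubectl's jsonpath output can extract them directly. A sketch, assuming a live cluster and the status layout shown above:

```shell
# Print the best trial's name (replace the namespace placeholder with yours).
kubectl -n <YOUR_USER_PROFILE_NAMESPACE> get experiment random-example \
  -o jsonpath='{.status.currentOptimalTrial.bestTrialName}{"\n"}'

# Print the best trial's hyperparameter assignments, one "name=value" per line.
kubectl -n <YOUR_USER_PROFILE_NAMESPACE> get experiment random-example \
  -o jsonpath='{range .status.currentOptimalTrial.parameterAssignments[*]}{.name}={.value}{"\n"}{end}'
```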

View the results of the experiment in the Katib UI:

  1. Open the Katib UI as described above.

  2. Click Hyperparameter Tuning on the Katib home page.

  3. Open the Katib menu panel on the left, then open the HP section and click Monitor:

    The Katib menu panel

  4. You should be able to view the list of experiments:

    The random example in the list of Katib experiments

  5. Click the name of the experiment, random-example.

  6. There should be a graph showing validation and training accuracy for various combinations of the hyperparameter values (learning rate, number of layers, and optimizer):

    Graph produced by the random example

  7. Below the graph is a list of trials that ran within the experiment:

    Trials that ran during the experiment

  8. You can click a trial name to get metrics for that particular trial:

    Trials that ran during the experiment

TensorFlow example

If you installed Katib as part of Kubeflow, you can’t run experiments in the kubeflow namespace. Run the following commands to launch an experiment using Kubeflow’s TensorFlow training job operator, TFJob:

  1. Download tfjob-example.yaml:

    curl https://raw.githubusercontent.com/kubeflow/katib/master/examples/v1beta1/tfjob-example.yaml --output tfjob-example.yaml
  2. Edit tfjob-example.yaml and change the following line to use your Kubeflow user profile namespace:

    namespace: kubeflow
  3. (Optional) Note: Katib’s experiments don’t work with Istio sidecar injection. If you installed Kubeflow with the Istio config, you have to disable sidecar injection. To do that, specify the annotation sidecar.istio.io/inject: "false" in your experiment’s trial template. For the provided TFJob example, check here for how to set the annotation.

  4. Deploy the example:

    kubectl apply -f tfjob-example.yaml
  5. You can check the status of the experiment:

    kubectl -n <YOUR_USER_PROFILE_NAMESPACE> get experiment tfjob-example -o yaml

Follow the steps as described for the random algorithm example above to obtain the results of the experiment in the Katib UI.

PyTorch example

If you installed Katib as part of Kubeflow, you can’t run experiments in the Kubeflow namespace. Run the following commands to launch an experiment using Kubeflow’s PyTorch training job operator, PyTorchJob:

  1. Download pytorchjob-example.yaml:

    curl https://raw.githubusercontent.com/kubeflow/katib/master/examples/v1beta1/pytorchjob-example.yaml --output pytorchjob-example.yaml
  2. Edit pytorchjob-example.yaml and change the following line to use your Kubeflow user profile namespace:

    namespace: kubeflow
  3. (Optional) Note: Katib’s experiments don’t work with Istio sidecar injection. If you installed Kubeflow with the Istio config, you have to disable sidecar injection. To do that, specify the annotation sidecar.istio.io/inject: "false" in your experiment’s trial template. For the provided PyTorchJob example, setting the annotation should be similar to the TFJob example.

  4. Deploy the example:

    kubectl apply -f pytorchjob-example.yaml
  5. You can check the status of the experiment:

    kubectl -n <YOUR_USER_PROFILE_NAMESPACE> describe experiment pytorchjob-example

Follow the steps as described for the random algorithm example above to get the results of the experiment in the Katib UI.

Cleaning up

To remove Katib from your Kubernetes cluster, run the following command from the katib repository directory:

  make undeploy
