Getting started with Katib

How to set up Katib and run some hyperparameter tuning examples

This page gets you started with Katib. Follow this guide to perform any additional setup you may need, depending on your environment, and to run a few examples using the command line and the Katib user interface (UI).

For an overview of the concepts around Katib and hyperparameter tuning, read the introduction to Katib.

Katib setup

This section describes some configurations that you may need to add to your Kubernetes cluster, depending on the way you’re using Kubeflow and Katib.

Installing Katib

You can skip this step if you have already installed Kubeflow. Your Kubeflow deployment includes Katib.

To install Katib as part of Kubeflow, follow the Kubeflow installation guide.

If you want to install Katib separately from Kubeflow, or to get a later version of Katib, run the following commands to install Katib directly from its repository on GitHub and deploy Katib to your cluster:

    git clone https://github.com/kubeflow/katib
    bash ./katib/scripts/v1alpha3/deploy.sh
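
After the script finishes, a quick sanity check is to confirm that the Katib components are running. This assumes the script deployed Katib into the kubeflow namespace, which is its default:

    # All Katib pods should eventually reach the Running state
    kubectl get pods -n kubeflow | grep katib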

Setting up persistent volumes

If you used the deployment script above to install Katib, you can skip this step. That script deploys a persistent volume (PV) and persistent volume claim (PVC) on your cluster.

You can skip this step if you’re using Kubeflow on Google Kubernetes Engine (GKE) or if your Kubernetes cluster includes a StorageClass for dynamic volume provisioning. For more information, see the Kubernetes documentation on dynamic provisioning and persistent volumes.

If you’re using Katib outside GKE and your cluster doesn’t include a StorageClass for dynamic volume provisioning, you must create a persistent volume (PV) to bind to the persistent volume claim (PVC) required by Katib.

After deploying Katib to your cluster, run the following command to create the PV:

    kubectl apply -f https://raw.githubusercontent.com/kubeflow/katib/master/manifests/v1alpha3/pv/pv.yaml

The above kubectl apply command uses a YAML file (pv.yaml) that defines the properties of the PV.
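
You can then confirm that the claim was bound. The claim name below (katib-mysql) matches the default v1alpha3 manifests; adjust it if your installation uses a different name:

    # The PVC should report a STATUS of "Bound" once a matching PV exists
    kubectl -n kubeflow get pvc katib-mysql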

Accessing the Katib UI

You can use the Katib user interface (UI) to submit experiments and to monitor your results. The Katib home page within Kubeflow looks like this:

The Katib home page within the Kubeflow UI

If you installed Katib as part of Kubeflow, you can access the Katib UI from the Kubeflow UI:

  1. Open the Kubeflow UI. See the guide to accessing the central dashboard.
  2. Click Katib in the left-hand menu.

Alternatively, you can set up port-forwarding for the Katib UI service:

    kubectl port-forward svc/katib-ui -n kubeflow 8080:80

Then you can access the Katib UI at this URL:

    http://localhost:8080/katib/
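
If the UI does not load, a simple check is to confirm that the katib-ui service exists and exposes port 80 before starting the port-forward (this assumes Katib is installed in the kubeflow namespace):

    kubectl -n kubeflow get svc katib-ui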

Examples

This section introduces some examples that you can run to try Katib.

Example using random algorithm

You can create an experiment for Katib by defining the experiment in a YAML configuration file. The YAML file defines the configuration for the experiment, including the hyperparameter feasible space, the optimization metric and goal, the suggestion algorithm, and so on.

This example uses the YAML file for the random algorithm example.

The random algorithm example uses an MXNet neural network to train an image classification model on the MNIST dataset. You can check the source code of the training container here. The experiment runs twelve training jobs (three at a time) with various hyperparameter values and saves the results.

Run the following commands to launch an experiment using the random algorithm example:

  1. Download the example:

      curl https://raw.githubusercontent.com/kubeflow/katib/master/examples/v1alpha3/random-example.yaml --output random-example.yaml
  2. Edit random-example.yaml and change the following line to use your Kubeflow user profile namespace:

      Namespace: kubeflow
  3. Deploy the example:

      kubectl apply -f random-example.yaml
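
After the manifest is applied, you can confirm that the experiment and its trials were created. The commands below assume the default resource names from the Katib v1alpha3 CRDs:

    # The experiment should appear almost immediately; trials follow shortly after
    kubectl -n <your user profile namespace> get experiment random-example
    kubectl -n <your user profile namespace> get trials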

This example embeds the hyperparameters as arguments to the training container. You can pass the hyperparameters in other ways (for example, as environment variables) by editing the template defined in the TrialTemplate.GoTemplate.RawTemplate section of the YAML file. The template uses the Go template format.
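
To inspect the template that a running experiment is actually using, one option is to read it back from the cluster with a JSONPath query. This sketch assumes the v1alpha3 field names (spec.trialTemplate.goTemplate.rawTemplate):

    # Print the raw Go template that generates each trial's Job manifest
    kubectl -n <your user profile namespace> get experiment random-example \
      -o jsonpath='{.spec.trialTemplate.goTemplate.rawTemplate}'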

This example randomly generates the following hyperparameters:

  • --lr: Learning rate. Type: double.
  • --num-layers: Number of layers in the neural network. Type: integer.
  • --optimizer: Optimizer. Type: categorical.

Check the experiment status:

    kubectl -n <your user profile namespace> describe experiment random-example

The output of the above command should look similar to this:

    Name:         random-example
    Namespace:    <your user namespace>
    Labels:       controller-tools.k8s.io=1.0
    Annotations:  <none>
    API Version:  kubeflow.org/v1alpha3
    Kind:         Experiment
    Metadata:
      Creation Timestamp:  2019-12-22T22:53:25Z
      Finalizers:
        update-prometheus-metrics
      Generation:        2
      Resource Version:  720692
      Self Link:         /apis/kubeflow.org/v1alpha3/namespaces/<your user namespace>/experiments/random-example
      UID:               dc6bc15a-250d-11ea-8cae-42010a80010f
    Spec:
      Algorithm:
        Algorithm Name:      random
        Algorithm Settings:  <nil>
      Max Failed Trial Count:  3
      Max Trial Count:         12
      Metrics Collector Spec:
        Collector:
          Kind:  StdOut
      Objective:
        Additional Metric Names:
          accuracy
        Goal:                   0.99
        Objective Metric Name:  Validation-accuracy
        Type:                   maximize
      Parallel Trial Count:  3
      Parameters:
        Feasible Space:
          Max:           0.03
          Min:           0.01
        Name:            --lr
        Parameter Type:  double
        Feasible Space:
          Max:           5
          Min:           2
        Name:            --num-layers
        Parameter Type:  int
        Feasible Space:
          List:
            sgd
            adam
            ftrl
        Name:            --optimizer
        Parameter Type:  categorical
      Resume Policy:  LongRunning
      Trial Template:
        Go Template:
          Raw Template:  apiVersion: batch/v1
    kind: Job
    metadata:
      name: {{.Trial}}
      namespace: {{.NameSpace}}
    spec:
      template:
        spec:
          containers:
          - name: {{.Trial}}
            image: docker.io/kubeflowkatib/mxnet-mnist-example
            command:
            - "python"
            - "/mxnet/example/image-classification/train_mnist.py"
            - "--batch-size=64"
            {{- with .HyperParameters}}
            {{- range .}}
            - "{{.Name}}={{.Value}}"
            {{- end}}
            {{- end}}
          restartPolicy: Never
    Status:
      Conditions:
        Last Transition Time:  2019-12-22T22:53:25Z
        Last Update Time:      2019-12-22T22:53:25Z
        Message:               Experiment is created
        Reason:                ExperimentCreated
        Status:                True
        Type:                  Created
        Last Transition Time:  2019-12-22T22:55:10Z
        Last Update Time:      2019-12-22T22:55:10Z
        Message:               Experiment is running
        Reason:                ExperimentRunning
        Status:                True
        Type:                  Running
      Current Optimal Trial:
        Observation:
          Metrics:
            Name:   Validation-accuracy
            Value:  0.981091
        Parameter Assignments:
          Name:   --lr
          Value:  0.025139701133432946
          Name:   --num-layers
          Value:  4
          Name:   --optimizer
          Value:  sgd
      Start Time:        2019-12-22T22:53:25Z
      Trials:            12
      Trials Running:    2
      Trials Succeeded:  10
    Events:  <none>

When the last value in Status.Conditions.Type is Succeeded, the experiment is complete.
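
To watch for that condition from the command line instead of re-running describe, you can print the recorded condition types directly, or use kubectl wait if your kubectl version supports waiting on custom resource conditions. Both sketches assume the v1alpha3 status layout shown above:

    # Prints the condition types recorded so far, for example: Created Running Succeeded
    kubectl -n <your user profile namespace> get experiment random-example \
      -o jsonpath='{.status.conditions[*].type}'

    # Alternatively, block until the Succeeded condition is set
    kubectl -n <your user profile namespace> wait experiment/random-example \
      --for=condition=Succeeded --timeout=60m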

View the results of the experiment in the Katib UI:

  1. Open the Katib UI as described above.

  2. Click Hyperparameter Tuning on the Katib home page.

  3. Open the Katib menu panel on the left, then open the HP section and click Monitor:

    The Katib menu panel

  4. You should see the list of experiments:

    The random example in the list of Katib experiments

  5. Click the name of the experiment, random-example.

  6. You should see a graph showing the level of validation and train accuracy for various combinations of the hyperparameter values (learning rate, number of layers, and optimizer):

    Graph produced by the random example

  7. Below the graph is a list of trials that ran within the experiment:

    Trials that ran during the experiment

  8. You can click on a trial name to see the metrics for that particular trial:

    Trials that ran during the experiment
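
If you prefer the command line to the UI, the best result found so far is also recorded in the experiment's status under currentOptimalTrial, as shown in the describe output above:

    # Shows the best metric value and the hyperparameter assignment that produced it
    kubectl -n <your user profile namespace> get experiment random-example \
      -o jsonpath='{.status.currentOptimalTrial}'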

TensorFlow example

Run the following commands to launch an experiment using Kubeflow's TensorFlow training job operator, TFJob:

  1. Download the tfjob-example.yaml file:

      curl https://raw.githubusercontent.com/kubeflow/katib/master/examples/v1alpha3/tfjob-example.yaml --output tfjob-example.yaml
  2. Edit tfjob-example.yaml and change the following line to use your Kubeflow user profile namespace:

      Namespace: kubeflow
  3. Deploy the example:

      kubectl apply -f tfjob-example.yaml
  4. You can check the status of the experiment:

      kubectl -n <your user profile namespace> describe experiment tfjob-example
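
Each trial in this experiment runs as a TFJob rather than a plain Kubernetes Job, so you can also watch the underlying TFJob resources. This assumes the TFJob operator is installed, as it is in a full Kubeflow deployment:

    # One TFJob per trial; the TFJob names match the trial names
    kubectl -n <your user profile namespace> get tfjobs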

Follow the steps described for the random algorithm example above to view the results of the experiment in the Katib UI.

PyTorch example

Run the following commands to launch an experiment using Kubeflow’s PyTorch training job operator, PyTorchJob:

  1. Download the pytorchjob-example.yaml file:

      curl https://raw.githubusercontent.com/kubeflow/katib/master/examples/v1alpha3/pytorchjob-example.yaml --output pytorchjob-example.yaml
  2. Edit pytorchjob-example.yaml and change the following line to use your Kubeflow user profile namespace:

      Namespace: kubeflow
  3. Deploy the example:

      kubectl apply -f pytorchjob-example.yaml
  4. You can check the status of the experiment:

      kubectl -n <your user profile namespace> describe experiment pytorchjob-example

Follow the steps described for the random algorithm example above to view the results of the experiment in the Katib UI.

Cleanup

Run the following command from the directory where you cloned the Katib repository to delete the installed components:

    bash ./katib/scripts/v1alpha3/undeploy.sh
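
To confirm that the cleanup succeeded, you can check that no Katib pods remain. This assumes Katib was installed in the kubeflow namespace:

    # Should print nothing once the undeploy script has finished
    kubectl get pods -n kubeflow | grep katib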

Next steps
