Set Up Your Notebooks

Getting started with Jupyter notebooks on Kubeflow

Your Kubeflow deployment includes services for spawning and managing Jupyter notebooks.

You can set up multiple notebook servers per Kubeflow deployment. Each notebook server can include multiple notebooks. Each notebook server belongs to a single namespace, which corresponds to the project group or team for that server.

This guide shows you how to set up a notebook server for your Jupyter notebooks in Kubeflow.

Quick guide

Summary of steps:

  • Follow the Kubeflow getting-started guide to set up your Kubeflow deployment and open the Kubeflow UI.

  • Click Notebook Servers in the left-hand panel of the Kubeflow UI.

  • Choose the namespace corresponding to your Kubeflow profile.

  • Click NEW SERVER to create a notebook server.

  • When the notebook server provisioning is complete, click CONNECT.

  • Click Upload to upload an existing notebook, or click New to create an empty notebook.

The rest of this page contains details of the above steps.

Install Kubeflow and open the Kubeflow UI

Follow the Kubeflow getting-started guide to set up your Kubeflow deployment in your environment of choice (locally, on premises, or in the cloud).

When Kubeflow is running, you can access the Kubeflow user interface (UI). If the getting-started guide for your chosen environment has instructions on accessing the UI, follow those instructions. Alternatively, see the generic guide to accessing the Kubeflow central dashboard.

Create a Jupyter notebook server and add a notebook

  • Click Notebook Servers in the left-hand panel of the Kubeflow UI to access the Jupyter notebook services deployed with Kubeflow:

Opening notebooks from the Kubeflow UI

  • Sign in:

    • On GCP, sign in using your Google Account. (If you have already logged in to your Google Account, you may not need to log in again.)
    • On all other platforms, sign in using any username and password.
  • Select a namespace:

    • Click the namespace dropdown to see the list of available namespaces.
    • Choose the namespace that corresponds to your Kubeflow profile. (See the page on multi-user isolation for more information about namespaces.)

Selecting a Kubeflow namespace
  • Click NEW SERVER on the Notebook Servers page:

The Kubeflow notebook servers page

You should see a page for entering details of your new server. Here is a partial screenshot of the page:

Form for adding a Kubeflow notebook server

  • Enter a name of your choice for the notebook server. The name can include letters and numbers, but no spaces. For example, my-first-notebook.

  • Kubeflow automatically updates the value in the namespace field to be the same as the namespace that you selected in a previous step. This ensures that the new notebook server is in a namespace that you can access.

  • Select a Docker image for the baseline deployment of your notebook server. You can choose from a range of standard images or specify a custom image:

    • Standard: The standard Docker images include typical machine learning (ML) packages that you can use within your Jupyter notebooks on this notebook server. Select an image from the Image dropdown menu. The image names indicate the following choices:

      • A TensorFlow version (for example, tensorflow-1.13.1). Kubeflow offers a CPU and a GPU image for each minor version of TensorFlow.
      • cpu or gpu, depending on whether you want to train your model on a CPU or a GPU.

        • If you choose a GPU image, make sure that you have GPUs available in your Kubeflow cluster. Run the following command to check if there are any GPUs available:

  kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
        • If you have GPUs available, you can schedule your server on a GPU node in the Extra Resources section at the bottom of the form. For example, to reserve two GPUs, enter the following JSON code:

  {"nvidia.com/gpu": 2}
      • Kubeflow version (for example, v0.5.0).
    • Custom: If you select the custom option, you must specify a Docker image in the form registry/image:tag. For guidelines on creating a Docker image for your notebook, see the guide to creating a custom Jupyter image.

  • Specify the total amount of CPU that your notebook server should reserve. The default is 0.5. For CPU-intensive jobs, you can choose more than one CPU (for example, 1.5).

  • Specify the total amount of memory (RAM) that your notebook server should reserve. The default is 1.0Gi.

  • Specify a workspace volume to hold your personal workspace for this notebook server. Kubeflow provisions a Kubernetes persistent volume (PV) for your workspace volume. The PV ensures that you can retain data even if you destroy your notebook server.

    • The default is to create a new volume for your workspace with the following configuration (see the sketch after this list for the kind of PersistentVolumeClaim this produces):

      • Name: The volume name is synced with the name of the notebook server, and has the form workspace-<server-name>. When you start typing the notebook server name, the volume name appears. You can edit the volume name, but if you later edit the notebook server name, the volume name changes to match the notebook server name.
      • Size: 10Gi
      • Access mode: ReadWriteOnce. This setting means that the volume can be mounted as read-write by a single node. See the Kubernetes documentation for more details about access modes.
      • Mount point: /home/jovyan
    • Alternatively, you can point the notebook server at an existing volume by specifying the name of the existing volume.
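Behind the form, the workspace volume is backed by a Kubernetes PersistentVolumeClaim. The defaults listed above correspond roughly to a claim like the following sketch; the server name my-first-notebook comes from the earlier example, and the namespace my-namespace is a hypothetical placeholder (this is not the exact manifest that Kubeflow generates):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: workspace-my-first-notebook   # workspace-<server-name>
    namespace: my-namespace             # hypothetical namespace
  spec:
    accessModes:
    - ReadWriteOnce                     # mountable read-write by a single node
    resources:
      requests:
        storage: 10Gi                   # default workspace size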
  • (Optional) Specify one or more data volumes if you want to store and access data from the notebooks on this notebook server. You can add new volumes or specify existing volumes. Kubeflow provisions a Kubernetes persistent volume (PV) for each of your data volumes.

  • (Optional) Specify one or more additional configurations as a list of PodDefault labels. To make use of this option, you must create a PodDefault manifest. In the PodDefault manifest, you can specify configurations including volumes, secrets, and environment variables. Kubeflow matches the labels in the configurations field against the properties specified in the PodDefault manifest. Kubeflow then injects these configurations into all the notebook Pods on this notebook server.

For example, enter the label addgcpsecret in the configurations field to match a PodDefault manifest containing the following configuration:

  matchLabels:
    addgcpsecret: "true"

For in-depth information on PodDefault usage, see the admission-webhook README.
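For orientation, a complete PodDefault manifest for this example might look roughly like the sketch below. The manifest name, namespace, secret name, and mount path (add-gcp-secret, my-namespace, gcp-secret, /secret/gcp) are illustrative placeholders rather than values required by Kubeflow; the essential part is that the label under matchLabels matches the label you enter in the configurations field:

  apiVersion: kubeflow.org/v1alpha1
  kind: PodDefault
  metadata:
    name: add-gcp-secret          # illustrative name
    namespace: my-namespace       # the namespace of your notebook server
  spec:
    selector:
      matchLabels:
        addgcpsecret: "true"      # matches the label in the configurations field
    desc: Add the GCP credential secret
    volumeMounts:
    - name: gcp-secret-volume
      mountPath: /secret/gcp      # illustrative mount path
    volumes:
    - name: gcp-secret-volume
      secret:
        secretName: gcp-secret    # illustrative secret name

You would apply a manifest like this in your notebook server's namespace (for example, with kubectl apply) before launching the server.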

  • (Optional) Change the setting for enable shared memory. The default is that shared memory is enabled. Some libraries, such as PyTorch, use shared memory for multiprocessing. Currently there is no implementation in Kubernetes to activate shared memory. As a workaround, Kubeflow creates an empty directory at /dev/shm (see the sketch below).
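The usual Kubernetes pattern behind this workaround is a memory-backed emptyDir volume mounted at /dev/shm. The following is a minimal, hypothetical Pod sketch of that pattern (the Pod name and image are placeholders, and this is not the exact spec that Kubeflow generates):

  apiVersion: v1
  kind: Pod
  metadata:
    name: shm-example                # hypothetical Pod, for illustration only
  spec:
    containers:
    - name: notebook
      image: jupyter/base-notebook   # hypothetical image
      volumeMounts:
      - name: dshm
        mountPath: /dev/shm          # shared memory visible to the notebook process
    volumes:
    - name: dshm
      emptyDir:
        medium: Memory               # back the directory with RAM (tmpfs)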

  • (Optional) Specify one or more extra resources as a JSON string. The JSON string must specify the value for one or more of the spec.containers[].resources.limits options described in the Kubernetes documentation.

In addition to the spec.containers[].resources.limits options shown in the above Kubernetes document, you can also use the Extra Resources section to schedule GPUs for your notebook server, as discussed earlier in the section on specifying your Docker image.

For example, you can reserve two GPUs by entering the following JSON code in the Extra Resources section:

  {"nvidia.com/gpu": 2}

You can find more details about scheduling GPUs in the Kubernetes documentation.
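To make the mapping concrete, values from the Extra Resources JSON end up under the notebook container's resource limits. A hypothetical, simplified fragment of the resulting Pod spec (the container name is a placeholder) would look roughly like this:

  spec:
    containers:
    - name: notebook               # placeholder container name
      resources:
        limits:
          nvidia.com/gpu: 2        # value taken from the Extra Resources JSON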

  • Click LAUNCH. You should see an entry for your new notebook server on the Notebook Servers page, with a spinning indicator in the Status column. It can take a few minutes to set up the notebook server.

    • You can check the status of your Pod by hovering your mouse cursor over the icon in the Status column next to the entry for your notebook server. For example, if the image is downloading, the status spinner has a tooltip that says ContainerCreating.

Alternatively, you can check the Pod status by entering the following command:

  kubectl -n <NAMESPACE> describe pods jupyter-<USERNAME>

Where <NAMESPACE> is the namespace you specified earlier (default kubeflow) and <USERNAME> is the name you used to log in.

A note for GCP users: If you have IAP turned on, the Pod has a different name. For example, if you signed in as USER@DOMAIN.EXT, the Pod has a name of the following form:

  jupyter-accounts-2egoogle-2ecom-3USER-40DOMAIN-2eEXT
  • When the notebook server provisioning is complete, you should see an entry for your server on the Notebook Servers page, with a check mark in the Status column:

Opening notebooks from the Kubeflow UI

  • Click CONNECT to start the notebook server.

  • When the notebook server is running, you should see the Jupyter dashboard interface. If you requested a new workspace, the dashboard should be empty of notebooks:

Jupyter dashboard with no notebooks

  • Click Upload to upload an existing notebook, or click New to create an empty notebook. You can read about using notebooks in the Jupyter documentation.

Experiment with your notebook

The default notebook image includes all the plugins that you need to train a TensorFlow model with Jupyter, including TensorBoard for rich visualizations and insights into your model.

To test your Jupyter installation, you can run a basic ‘hello world’ program (adapted from mnist_softmax.py) as follows:

  • Use the Jupyter dashboard to create a new Python 3 notebook.

  • Copy the following code and paste it into a code block in your notebook:

  from tensorflow.examples.tutorials.mnist import input_data
  mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
  import tensorflow as tf
  x = tf.placeholder(tf.float32, [None, 784])
  W = tf.Variable(tf.zeros([784, 10]))
  b = tf.Variable(tf.zeros([10]))
  y = tf.nn.softmax(tf.matmul(x, W) + b)
  y_ = tf.placeholder(tf.float32, [None, 10])
  cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
  train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)
  sess = tf.InteractiveSession()
  tf.global_variables_initializer().run()
  for _ in range(1000):
      batch_xs, batch_ys = mnist.train.next_batch(100)
      sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
  correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
  accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
  print("Accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
  • Run the code. You should see a number of WARNING messages from TensorFlow, followed by a line showing the accuracy, something like this:
  Accuracy: 0.9012

Please note that when running on most cloud providers, the public IP address is exposed to the internet and is an unsecured endpoint by default.

Next steps
