Running CUDA workloads

If you want to run CUDA workloads on the K3s container, you need to customize it.
CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime.
The K3s container itself also needs to run with this runtime.
If you are using Docker, you can install the NVIDIA Container Toolkit on the host.
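
For reference, this is roughly what installing the toolkit looked like on an Ubuntu Docker host at the time of writing; this is only a sketch, so check NVIDIA's installation guide for the current repository and package names:

    # Add NVIDIA's package repository matching the host distribution (e.g. ubuntu20.04)
    distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
        sudo tee /etc/apt/sources.list.d/nvidia-docker.list

    # Install the runtime package and restart Docker so it picks up the nvidia runtime
    sudo apt-get update && sudo apt-get install -y nvidia-docker2
    sudo systemctl restart docker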

Building a customized K3s image

To get the NVIDIA container runtime into the K3s image you need to build your own K3s image.
The native K3s image is based on Alpine, but the NVIDIA container runtime is not supported on Alpine yet.
To get around this, we build the image on a supported base image instead.

Dockerfile

Dockerfile:

    ARG K3S_TAG="v1.21.2-k3s1"
    FROM rancher/k3s:$K3S_TAG as k3s

    FROM nvidia/cuda:11.2.0-base-ubuntu18.04

    ARG NVIDIA_CONTAINER_RUNTIME_VERSION
    ENV NVIDIA_CONTAINER_RUNTIME_VERSION=$NVIDIA_CONTAINER_RUNTIME_VERSION

    RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections

    RUN apt-get update && \
        apt-get -y install gnupg2 curl

    # Install NVIDIA Container Runtime
    RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add -

    RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list

    RUN apt-get update && \
        apt-get -y install nvidia-container-runtime=${NVIDIA_CONTAINER_RUNTIME_VERSION}

    COPY --from=k3s / /

    RUN mkdir -p /etc && \
        echo 'hosts: files dns' > /etc/nsswitch.conf

    RUN chmod 1777 /tmp

    # Provide custom containerd configuration to configure the nvidia-container-runtime
    RUN mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/
    COPY config.toml.tmpl /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl

    # Deploy the nvidia driver plugin on startup
    RUN mkdir -p /var/lib/rancher/k3s/server/manifests
    COPY device-plugin-daemonset.yaml /var/lib/rancher/k3s/server/manifests/nvidia-device-plugin-daemonset.yaml

    VOLUME /var/lib/kubelet
    VOLUME /var/lib/rancher/k3s
    VOLUME /var/lib/cni
    VOLUME /var/log

    ENV PATH="$PATH:/bin/aux"

    ENTRYPOINT ["/bin/k3s"]
    CMD ["agent"]

This Dockerfile is based on the K3s Dockerfile. The following changes are applied:

  1. Change the base image to nvidia/cuda:11.2.0-base-ubuntu18.04 so the NVIDIA Container Runtime can be installed. The CUDA version (here 11.2.0) must match the one you're planning to use.
  2. Add a custom containerd config.toml template that registers the NVIDIA Container Runtime. This replaces the default runc runtime.
  3. Add a manifest for the NVIDIA device plugin for Kubernetes.

Configure containerd

We need to configure containerd to use the NVIDIA Container Runtime, which means customizing the config.toml that containerd uses at startup. K3s provides a way to do this using a config.toml.tmpl file. More information can be found on the K3s site.

config.toml.tmpl:

    [plugins.opt]
      path = "{{ .NodeConfig.Containerd.Opt }}"

    [plugins.cri]
      stream_server_address = "127.0.0.1"
      stream_server_port = "10010"

    {{- if .IsRunningInUserNS }}
      disable_cgroup = true
      disable_apparmor = true
      restrict_oom_score_adj = true
    {{end}}

    {{- if .NodeConfig.AgentConfig.PauseImage }}
      sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"
    {{end}}

    {{- if not .NodeConfig.NoFlannel }}
    [plugins.cri.cni]
      bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
      conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"
    {{end}}

    [plugins.cri.containerd.runtimes.runc]
      # ---- changed from 'io.containerd.runc.v2' for GPU support
      runtime_type = "io.containerd.runtime.v1.linux"

    # ---- added for GPU support
    [plugins.linux]
      runtime = "nvidia-container-runtime"

    {{ if .PrivateRegistryConfig }}
    {{ if .PrivateRegistryConfig.Mirrors }}
    [plugins.cri.registry.mirrors]{{end}}
    {{range $k, $v := .PrivateRegistryConfig.Mirrors }}
    [plugins.cri.registry.mirrors."{{$k}}"]
      endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf "%q" .}}{{end}}]
    {{end}}

    {{range $k, $v := .PrivateRegistryConfig.Configs }}
    {{ if $v.Auth }}
    [plugins.cri.registry.configs."{{$k}}".auth]
      {{ if $v.Auth.Username }}username = "{{ $v.Auth.Username }}"{{end}}
      {{ if $v.Auth.Password }}password = "{{ $v.Auth.Password }}"{{end}}
      {{ if $v.Auth.Auth }}auth = "{{ $v.Auth.Auth }}"{{end}}
      {{ if $v.Auth.IdentityToken }}identitytoken = "{{ $v.Auth.IdentityToken }}"{{end}}
    {{end}}
    {{ if $v.TLS }}
    [plugins.cri.registry.configs."{{$k}}".tls]
      {{ if $v.TLS.CAFile }}ca_file = "{{ $v.TLS.CAFile }}"{{end}}
      {{ if $v.TLS.CertFile }}cert_file = "{{ $v.TLS.CertFile }}"{{end}}
      {{ if $v.TLS.KeyFile }}key_file = "{{ $v.TLS.KeyFile }}"{{end}}
    {{end}}
    {{end}}
    {{end}}

The NVIDIA device plugin

To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin. The device plugin is a DaemonSet that allows you to automatically:

  • Expose the number of GPUs on each node of your cluster
  • Keep track of the health of your GPUs
  • Run GPU-enabled containers in your Kubernetes cluster.

device-plugin-daemonset.yaml:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: nvidia-device-plugin-daemonset
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          name: nvidia-device-plugin-ds
      template:
        metadata:
          # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
          # reserves resources for critical add-on pods so that they can be rescheduled after
          # a failure. This annotation works in tandem with the toleration below.
          annotations:
            scheduler.alpha.kubernetes.io/critical-pod: ""
          labels:
            name: nvidia-device-plugin-ds
        spec:
          tolerations:
          # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
          # This, along with the annotation above, marks this pod as a critical add-on.
          - key: CriticalAddonsOnly
            operator: Exists
          containers:
          - env:
            - name: DP_DISABLE_HEALTHCHECKS
              value: xids
            image: nvidia/k8s-device-plugin:1.11
            name: nvidia-device-plugin-ctr
            securityContext:
              allowPrivilegeEscalation: true
              capabilities:
                drop: ["ALL"]
            volumeMounts:
            - name: device-plugin
              mountPath: /var/lib/kubelet/device-plugins
          volumes:
          - name: device-plugin
            hostPath:
              path: /var/lib/kubelet/device-plugins

Build the K3s image

To build the custom image, put the following files in a directory: Dockerfile, config.toml.tmpl, device-plugin-daemonset.yaml, build.sh, and cuda-vector-add.yaml (used later for testing).

The build.sh script is configured using exports and defaults to K3s v1.21.2+k3s1. Please set at least the IMAGE_REGISTRY variable! The script builds the custom K3s image, including the NVIDIA container runtime, and pushes it to your registry.

build.sh:

    #!/bin/bash
    set -euxo pipefail

    K3S_TAG=${K3S_TAG:="v1.21.2-k3s1"} # replace + with -, if needed
    IMAGE_REGISTRY=${IMAGE_REGISTRY:="MY_REGISTRY"}
    IMAGE_REPOSITORY=${IMAGE_REPOSITORY:="rancher/k3s"}
    IMAGE_TAG="$K3S_TAG-cuda"
    IMAGE=${IMAGE:="$IMAGE_REGISTRY/$IMAGE_REPOSITORY:$IMAGE_TAG"}

    NVIDIA_CONTAINER_RUNTIME_VERSION=${NVIDIA_CONTAINER_RUNTIME_VERSION:="3.5.0-1"}

    echo "IMAGE=$IMAGE"

    # due to some unknown reason, copying symlinks fails with buildkit enabled
    DOCKER_BUILDKIT=0 docker build \
      --build-arg K3S_TAG=$K3S_TAG \
      --build-arg NVIDIA_CONTAINER_RUNTIME_VERSION=$NVIDIA_CONTAINER_RUNTIME_VERSION \
      -t $IMAGE .
    docker push $IMAGE
    echo "Done!"
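
For example, to build and push the image against a hypothetical registry (registry.example.com is a placeholder; the resulting tag follows from the defaults above):

    # Build and push the CUDA-enabled K3s image
    IMAGE_REGISTRY="registry.example.com" bash build.sh

    # Optional sanity check: the NVIDIA runtime binary should be present in the image
    docker run --rm --entrypoint which \
        registry.example.com/rancher/k3s:v1.21.2-k3s1-cuda nvidia-container-runtime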

Run and test the custom image with k3d

You can use the image with k3d:

    k3d cluster create gputest --image=$IMAGE --gpus=1
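
Once the cluster is up, a few quick checks can confirm the pieces are in place (assuming k3d's default node naming, k3d-gputest-server-0, and the labels from the device plugin manifest above):

    # The device plugin pod should be running in kube-system
    kubectl -n kube-system get pods -l name=nvidia-device-plugin-ds

    # The node should now advertise nvidia.com/gpu in its capacity
    kubectl get node k3d-gputest-server-0 -o jsonpath='{.status.capacity.nvidia\.com/gpu}'

    # The custom containerd template should have been rendered inside the node
    docker exec k3d-gputest-server-0 grep nvidia /var/lib/rancher/k3s/agent/etc/containerd/config.toml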

Deploy a test pod.
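The cuda-vector-add.yaml manifest is not included in the files above. A minimal sketch, assuming the upstream Kubernetes cuda-vector-add sample image (any small CUDA test image that requests nvidia.com/gpu works just as well):

cuda-vector-add.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: cuda-vector-add
    spec:
      restartPolicy: OnFailure
      containers:
      - name: cuda-vector-add
        # hypothetical test image: the Kubernetes CUDA vector-add sample
        image: "k8s.gcr.io/cuda-vector-add:v0.1"
        resources:
          limits:
            nvidia.com/gpu: 1 # lands on a node with a GPU exposed by the device plugin

Apply the manifest and check the pod logs: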

    kubectl apply -f cuda-vector-add.yaml
    kubectl logs cuda-vector-add

This should output something like the following:

    $ kubectl logs cuda-vector-add
    [Vector addition of 50000 elements]
    Copy input data from the host memory to the CUDA device
    CUDA kernel launch with 196 blocks of 256 threads
    Copy output data from the CUDA device to the host memory
    Test PASSED
    Done

If the cuda-vector-add pod is stuck in the Pending state, the device-plugin DaemonSet probably didn't get deployed correctly from the auto-deploy manifests. In that case, you can apply it manually via kubectl apply -f device-plugin-daemonset.yaml.
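
A few standard kubectl commands can help narrow this down:

    # Is the device plugin DaemonSet present and are its pods healthy?
    kubectl -n kube-system get daemonset nvidia-device-plugin-daemonset
    kubectl -n kube-system logs -l name=nvidia-device-plugin-ds

    # Why is the test pod Pending? Check the scheduling events
    kubectl describe pod cuda-vector-add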

Known issues

  • This approach does not work on WSL2 yet. The NVIDIA driver plugin and container runtime rely on the NVIDIA Management Library (NVML), which is not yet supported there. See the CUDA on WSL User Guide for details.

Acknowledgements

Most of the information in this article was obtained from various sources.
