containerd configuration

containerd is an industry-standard container runtime.

NOTE: In most use cases, changes to the containerd configuration are not required.

To make changes to the containerd configuration, first generate a default containerd configuration file by running:

    containerd config default > /etc/k0s/containerd.toml

This command writes the default values into /etc/k0s/containerd.toml.

k0s runs containerd with the following default values:

    /var/lib/k0s/bin/containerd \
        --root=/var/lib/k0s/containerd \
        --state=/var/lib/k0s/run/containerd \
        --address=/var/lib/k0s/run/containerd.sock \
        --config=/etc/k0s/containerd.toml

Before proceeding further, add the following default values to the configuration file:

    version = 2
    root = "/var/lib/k0s/containerd"
    state = "/var/lib/k0s/run/containerd"
    ...
    [grpc]
      address = "/var/lib/k0s/run/containerd.sock"

Next, if you want to change the CRI runtime, look into this section:

[plugins."io.containerd.runtime.v1.linux"] shim = "containerd-shim" runtime = "runc"

Using gVisor

gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system.

First, you must install the required gVisor binaries on the host:

    (
      set -e
      URL=https://storage.googleapis.com/gvisor/releases/release/latest
      wget ${URL}/runsc ${URL}/runsc.sha512 \
        ${URL}/gvisor-containerd-shim ${URL}/gvisor-containerd-shim.sha512 \
        ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
      sha512sum -c runsc.sha512 \
        -c gvisor-containerd-shim.sha512 \
        -c containerd-shim-runsc-v1.sha512
      rm -f *.sha512
      chmod a+rx runsc gvisor-containerd-shim containerd-shim-runsc-v1
      sudo mv runsc gvisor-containerd-shim containerd-shim-runsc-v1 /usr/local/bin
    )

See the gVisor install docs for more details.
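
As an optional sanity check, you can confirm that the binaries are on the PATH and that runsc responds; this sketch assumes runsc supports the runc-style --version flag:

    # confirm runsc is installed and callable
    runsc --version
    # confirm the shim binary is resolvable on the PATH
    command -v containerd-shim-runsc-v1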

Next, prepare the configuration so that the k0s-managed containerd can use gVisor as an additional runtime:

    cat <<EOF | sudo tee /etc/k0s/containerd.toml
    disabled_plugins = ["restart"]
    [plugins.linux]
      shim_debug = true
    [plugins.cri.containerd.runtimes.runsc]
      runtime_type = "io.containerd.runsc.v1"
    EOF

Then start the worker and join it into the cluster as usual:

    k0s worker $token

By default, containerd uses plain runc as the runtime. To make the gVisor runtime usable for workloads, we must register it on the Kubernetes side:

    cat <<EOF | kubectl apply -f -
    apiVersion: node.k8s.io/v1beta1
    kind: RuntimeClass
    metadata:
      name: gvisor
    handler: runsc
    EOF
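
To double-check that the RuntimeClass was accepted by the API server, you can list it:

    kubectl get runtimeclass gvisor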

After this we can use it for our workloads:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-gvisor
    spec:
      runtimeClassName: gvisor
      containers:
      - name: nginx
        image: nginx
We can verify that the created nginx pod is actually running under the gVisor runtime:

    # kubectl exec nginx-gvisor -- dmesg | grep -i gvisor
    [ 0.000000] Starting gVisor...

Using custom nvidia-container-runtime

By default, the CRI runtime is set to runc. If you want to configure NVIDIA GPU support, you have to replace runc with nvidia-container-runtime, as shown below:

    [plugins."io.containerd.runtime.v1.linux"]
      shim = "containerd-shim"
      runtime = "nvidia-container-runtime"

Note: To set up nvidia-container-runtime on your node, see here for detailed instructions.

After changing the configuration, restart k0s; containerd will then pick up the newly configured runtime.
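
For example, assuming the worker was installed as a service with k0s install worker, the restart could look like this:

    # stop and start the k0s service so containerd is relaunched with the new config
    sudo k0s stop
    sudo k0s start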