Runtime

k0s uses containerd as the default Container Runtime Interface (CRI) runtime and runc as the default low-level runtime. In most cases they don’t require any configuration changes. However, if custom configuration is needed, this page provides some examples.

(Figure: k0s runtime architecture)

containerd configuration

By default, k0s manages the full containerd configuration. Users have the option of fully overriding it, and thus also managing the configuration themselves.

User managed containerd configuration

In the default k0s generated configuration there’s a “magic” comment telling k0s that the file is k0s managed:

```toml
# k0s_managed=true
```

If you wish to take over the configuration management, remove this line.

To make changes to the containerd configuration, you must first generate a default containerd configuration and save it to /etc/k0s/containerd.toml:

```shell
containerd config default > /etc/k0s/containerd.toml
```

k0s runs containerd with the following default values:

```shell
/var/lib/k0s/bin/containerd \
    --root=/var/lib/k0s/containerd \
    --state=/run/k0s/containerd \
    --address=/run/k0s/containerd.sock \
    --config=/etc/k0s/containerd.toml
```

Next, add the following default values to the configuration file:

```toml
version = 2
root = "/var/lib/k0s/containerd"
state = "/run/k0s/containerd"
...

[grpc]
  address = "/run/k0s/containerd.sock"
```

k0s managed dynamic runtime configuration

From version 1.27.1 onwards, k0s enables dynamic configuration of containerd CRI runtimes. This works by k0s creating a special directory, /etc/k0s/containerd.d/, where users can drop in partial containerd configuration snippets.

k0s automatically picks up these files and adds them to the imports list of the containerd configuration. If k0s detects that a drop-in contains CRI-related configuration, it collects all such snippets into a single file and adds that as a single import. This works around a hard limitation of containerd 1.x versions. Read more at containerd#8056.
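
For illustration, a minimal drop-in might look like the following sketch. The file name example.toml is arbitrary (k0s picks up the .toml files in the directory), and the [debug] section is just one illustrative containerd setting you might override:

```toml
# /etc/k0s/containerd.d/example.toml (hypothetical file name)
version = 2

# Illustrative setting: raise containerd's log verbosity.
[debug]
  level = "debug"
```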

Examples

The following sections provide some examples of how to configure different runtimes for containerd using k0s managed drop-in configurations.

Using gVisor

gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system.

1. Install the needed gVisor binaries into the host:

    ```shell
    (
      set -e
      ARCH=$(uname -m)
      URL=https://storage.googleapis.com/gvisor/releases/release/latest/${ARCH}
      wget ${URL}/runsc ${URL}/runsc.sha512 \
        ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
      sha512sum -c runsc.sha512 \
        -c containerd-shim-runsc-v1.sha512
      rm -f *.sha512
      chmod a+rx runsc containerd-shim-runsc-v1
      sudo mv runsc containerd-shim-runsc-v1 /usr/local/bin
    )
    ```

    Refer to the gVisor install docs for more information.

2. Prepare the configuration for the k0s managed containerd, to utilize gVisor as an additional runtime:

    ```shell
    cat <<EOF | sudo tee /etc/k0s/containerd.d/gvisor.toml
    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
      runtime_type = "io.containerd.runsc.v1"
    EOF
    ```
3. Start and join the worker into the cluster, as normal:

    ```shell
    k0s worker $token
    ```
4. Register the gVisor runtime to the Kubernetes side with a RuntimeClass, to make it usable for workloads (by default, containerd uses normal runc as the runtime):

    ```shell
    cat <<EOF | kubectl apply -f -
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: gvisor
    handler: runsc
    EOF
    ```

    At this point, you can use gVisor runtime for your workloads:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-gvisor
    spec:
      runtimeClassName: gvisor
      containers:
      - name: nginx
        image: nginx
    ```
5. (Optional) Verify that the created nginx pod is running under the gVisor runtime:

    ```shell
    # kubectl exec nginx-gvisor -- dmesg | grep -i gvisor
    [    0.000000] Starting gVisor...
    ```

Using nvidia-container-runtime

First, install the NVIDIA runtime components:

```shell
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-runtime
```

Next, drop the following containerd runtime configuration snippet into /etc/k0s/containerd.d/nvidia.toml:

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
  privileged_without_host_devices = false
  runtime_engine = ""
  runtime_root = ""
  runtime_type = "io.containerd.runc.v1"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
    BinaryName = "/usr/bin/nvidia-container-runtime"
```

Create the needed RuntimeClass:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
EOF
```
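
Workloads can then opt in to the NVIDIA runtime via runtimeClassName. A minimal sketch, where the pod name and image are illustrative (pick a CUDA image that matches your driver version):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test
spec:
  runtimeClassName: nvidia
  restartPolicy: Never
  containers:
  - name: cuda
    # Illustrative image and command; nvidia-smi lists the visible GPUs.
    image: nvidia/cuda:12.2.0-base-ubuntu22.04
    command: ["nvidia-smi"]
```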

Note: Detailed instructions on how to run nvidia-container-runtime on your node are available here.

Using custom CRI runtime

Warning: You can use your own CRI runtime with k0s (for example, docker). However, k0s will not start or manage the runtime, and configuration is solely your responsibility.

Use the option --cri-socket to run a k0s worker with a custom CRI runtime. The option takes input in the form of <type>:<socket_path> (for type, use docker for a pure Docker setup and remote for anything else).
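
For example, a worker could be pointed at a CRI-O runtime as follows. This is a sketch: the socket path assumes CRI-O's default location and may differ on your system:

```shell
# Assumes CRI-O is already installed, configured, and running on this host.
k0s worker --cri-socket remote:unix:///var/run/crio/crio.sock "$token"
```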

Using dockershim

To run k0s with a pre-existing Dockershim setup, run the worker with the --cri-socket flag as shown below. A detailed explanation of dockershim and a guide for installing cri-dockerd can be found in our k0s dockershim guide.
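
For example, assuming cri-dockerd is listening on its default socket:

```shell
k0s worker --cri-socket docker:unix:///var/run/cri-dockerd.sock <token>
```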