Setting different container runtime with CRI

containerd

Docker 18.09 and up ship with containerd, so you should not need to install it manually. If you do not have containerd, you may install it by running the following:

  # Install containerd
  apt-get update && apt-get install -y containerd.io
  # Configure containerd
  mkdir -p /etc/containerd
  containerd config default > /etc/containerd/config.toml
  # Restart containerd
  systemctl restart containerd
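
To confirm containerd is up before continuing, you can check the service and query the daemon:

  systemctl status containerd
  ctr version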

When using containerd shipped with Docker, the cri plugin is disabled by default. You will need to update containerd’s configuration to enable KubeEdge to use containerd as its runtime:

  # Configure containerd
  mkdir -p /etc/containerd
  containerd config default > /etc/containerd/config.toml
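
Regenerating the default config as above enables the cri plugin: the config.toml shipped with Docker lists cri under disabled_plugins, while the default config does not. If you maintain a hand-edited config.toml instead, make sure the plugin is not disabled, then restart containerd:

  # /etc/containerd/config.toml
  # the cri plugin must NOT be listed here
  disabled_plugins = []

  systemctl restart containerd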

Update the edgecore config file edgecore.yaml, specifying the following parameters for the containerd-based runtime:

  remoteRuntimeEndpoint: unix:///var/run/containerd/containerd.sock
  remoteImageEndpoint: unix:///var/run/containerd/containerd.sock
  runtimeRequestTimeout: 2
  podSandboxImage: k8s.gcr.io/pause:3.2
  runtimeType: remote
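
With edgecore pointed at the containerd socket, you can verify that the endpoint answers CRI requests using crictl (assuming crictl is installed on the edge node):

  crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock version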

By default, the cgroup driver of cri is configured as cgroupfs. If your system manages cgroups with systemd instead, switch edgecore to the systemd driver manually in edgecore.yaml:

  modules:
    edged:
      cgroupDriver: systemd

Set systemd_cgroup to true in containerd’s configuration file (/etc/containerd/config.toml), and then restart containerd:

  # /etc/containerd/config.toml
  systemd_cgroup = true

  # Restart containerd
  systemctl restart containerd
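
Note that systemd_cgroup is the option understood by the legacy io.containerd.runtime.v1.linux runtime shown in the ctr output below. If your config.toml follows the newer layout that uses the runc v2 shim, the equivalent setting lives under the runc options of the cri plugin; a sketch of that variant (section names assume containerd v1.3+):

  # /etc/containerd/config.toml (newer layout, runc v2 shim)
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true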

Create the nginx application and check that the container is created with containerd on the edge side:

  kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/deployment.yaml
  deployment.apps/nginx-deployment created

  ctr --namespace=k8s.io container ls
  CONTAINER                                                           IMAGE                              RUNTIME
  41c1a07fe7bf7425094a9b3be285c312127961c158f30fc308fd6a3b7376eab2    docker.io/library/nginx:1.15.12    io.containerd.runtime.v1.linux

NOTE: since cri doesn’t support multi-tenancy while containerd does, the namespace for containers is set to “k8s.io” by default. There is no way to change this until multi-tenancy support is implemented in cri.

CRI-O

Follow the CRI-O install guide to set up CRI-O.

If your edge node runs on an ARM platform and your distro is Ubuntu 18.04, you might need to build the binaries from source and then install them, since CRI-O packages are not available in the Kubic repository for this combination.

  git clone https://github.com/cri-o/cri-o
  cd cri-o
  make
  sudo make install
  # generate and install configuration files
  sudo make install.config
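
After installation, you can confirm the binaries were installed correctly before wiring up the service:

  crio --version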

Set up CNI networking by following the CRI-O CNI setup guide. Update the edgecore config file, specifying the following parameters for the CRI-O-based runtime:

  remoteRuntimeEndpoint: unix:///var/run/crio/crio.sock
  remoteImageEndpoint: unix:///var/run/crio/crio.sock
  runtimeRequestTimeout: 2
  podSandboxImage: k8s.gcr.io/pause:3.2
  runtimeType: remote

By default, CRI-O uses cgroupfs as its cgroup manager. If you want to switch to systemd instead, update the CRI-O config file (/etc/crio/crio.conf.d/00-default.conf):

  # Cgroup management implementation used for the runtime.
  cgroup_manager = "systemd"
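
CRI-O reads its configuration at startup, so restart the service after editing the file:

  sudo systemctl restart crio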

NOTE: the pause image should be updated if you are on an ARM platform and the pause image you are using is not a multi-arch image. To set the pause image, update the CRI-O config file:

  pause_image = "k8s.gcr.io/pause-arm64:3.1"

Remember to update edgecore.yaml as well to match your cgroup driver:

  modules:
    edged:
      cgroupDriver: systemd

Start the CRI-O and edgecore services (assuming both are managed by systemd):

  sudo systemctl daemon-reload
  sudo systemctl enable crio
  sudo systemctl start crio
  sudo systemctl start edgecore
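
You can confirm that both services came up:

  systemctl is-active crio edgecore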

Create the application and check that the container is created with CRI-O on the edge side:

  kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/deployment.yaml
  deployment.apps/nginx-deployment created

  # crictl ps
  CONTAINER ID    IMAGE            CREATED       STATE      NAME     ATTEMPT    POD ID
  41c1a07fe7bf7   f6d22dec9931b    2 days ago    Running    nginx    0          51f727498b06f

Kata Containers

Kata Containers is a container runtime created to address security challenges in multi-tenant, untrusted cloud environments. However, multi-tenancy support is still in KubeEdge’s backlog. If you have a downstream, customized KubeEdge that already supports multi-tenancy, Kata Containers is a good option for a lightweight and secure container runtime.

Follow the install guide to install and configure containerd and Kata Containers.

If you have “kata-runtime” installed, run this command to check if your host system can run and create a Kata Container:

  kata-runtime kata-check

RuntimeClass is a feature for selecting the container runtime configuration to use for a pod’s containers, and it has been supported in containerd since v1.2.0. If your containerd version is v1.2.0 or later, you have two choices to configure containerd to use Kata Containers:

  - Kata Containers as a RuntimeClass
  - Kata Containers as a runtime for untrusted workloads
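
For the RuntimeClass route, a minimal sketch follows: it registers a kata handler with containerd’s cri plugin and exposes it to Kubernetes as a RuntimeClass. The handler name kata and the v2 shim runtime type are assumptions; match them to your Kata installation:

  # /etc/containerd/config.toml: register a Kata handler with the cri plugin
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
    runtime_type = "io.containerd.kata.v2"

  # runtimeclass.yaml: make the handler selectable from pod specs
  apiVersion: node.k8s.io/v1beta1
  kind: RuntimeClass
  metadata:
    name: kata
  handler: kata

A pod then opts in by setting runtimeClassName: kata in its spec.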

Suppose you have configured Kata Containers as the runtime for untrusted workloads. In order to verify whether it works on your edge node, you can run:

  cat nginx-untrusted.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-untrusted
    annotations:
      io.kubernetes.cri.untrusted-workload: "true"
  spec:
    containers:
    - name: nginx
      image: nginx

  kubectl create -f nginx-untrusted.yaml

  # verify the container is running with the qemu hypervisor on the edge side
  ps aux | grep qemu
  root      3941  3.0  1.0 2971576 174648 ?   Sl   17:38   0:02 /usr/bin/qemu-system-aarch64

  crictl pods
  POD ID          CREATED              STATE    NAME               NAMESPACE    ATTEMPT
  b1c0911644cb9   About a minute ago   Ready    nginx-untrusted    default      0

Virtlet

Make sure no libvirt is running on the worker nodes.
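
A quick way to check (the libvirt service name can differ per distro):

  systemctl is-active libvirtd
  ps aux | grep libvirt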

Steps

  1. Install CNI plugin:

    Download CNI plugin release and extract it:

    $ wget https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz
    # Extract the tarball
    $ mkdir cni
    $ tar -zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C cni
    $ mkdir -p /opt/cni/bin
    $ cp ./cni/* /opt/cni/bin/

    Configure CNI plugin:

    $ mkdir -p /etc/cni/net.d/
    $ cat >/etc/cni/net.d/bridge.conf <<EOF
    {
      "cniVersion": "0.3.1",
      "name": "containerd-net",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ]
      }
    }
    EOF
  2. Set up the VM runtime: use the script hack/setup-vmruntime.sh to set up a VM runtime. It makes use of the Arktos Runtime release to start three containers:

    vmruntime_vms
    vmruntime_libvirt
    vmruntime_virtlet
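
Once the script finishes, you can verify that the three runtime containers are up (assuming the script starts them as Docker containers, per the Arktos release):

  docker ps --filter name=vmruntime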
