OpenYurt + containerd + crun

In this article, we will show how to run a simple WasmEdge demo app with containerd over OpenYurt.

Set up an OpenYurt Cluster

Here, we introduce two ways to set up an OpenYurt Cluster. The first is to set up an OpenYurt Cluster from scratch, using yurtctl convert to convert a K8s Cluster into an OpenYurt Cluster. The second is to use the OpenYurt Experience Center, which makes it easy to get an OpenYurt Cluster.

Prerequisites

|        | OS/kernel                           | Private IP/Public IP          |
| ------ | ----------------------------------- | ----------------------------- |
| Master | Ubuntu 20.04.3 LTS/5.4.0-91-generic | 192.168.3.169/120.55.126.18   |
| Node   | Ubuntu 20.04.3 LTS/5.4.0-91-generic | 192.168.3.170/121.43.113.152  |

Note that some steps may differ slightly depending on your operating system. Please refer to the installation instructions for OpenYurt and crun.

We use yurtctl convert to convert a K8s Cluster into an OpenYurt Cluster, so we first have to set up a K8s Cluster. If you use yurtctl init/join to set up an OpenYurt Cluster, you can skip the step that covers installing K8s.

To learn the difference between yurtctl convert/revert and yurtctl init/join, you can refer to the following two articles.

How to use yurtctl init/join

Conversion between OpenYurt and Kubernetes: yurtctl convert/revert

  • Close the swap space of the master and node first.
    sudo swapoff -a
    # verify
    free -m
  • Configure the /etc/hosts file on both nodes as follows.
    192.168.3.169 oy-master
    120.55.126.18 oy-master
    192.168.3.170 oy-node
    121.43.113.152 oy-node
  • Load the br_netfilter kernel module and modify the kernel parameters.
    # load the module
    sudo modprobe br_netfilter
    # verify
    lsmod | grep br_netfilter
    # create k8s.conf
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sudo sysctl --system
  • Set the value of rp_filter: adjust the two rp_filter parameters in /etc/sysctl.d/10-network-security.conf from 2 to 1 (the expected values are shown after this list), and set /proc/sys/net/ipv4/ip_forward to 1.
    sudo vi /etc/sysctl.d/10-network-security.conf
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo sysctl --system
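
After editing, the two rp_filter entries in /etc/sysctl.d/10-network-security.conf should read roughly as follows (surrounding comments and line order may differ between Ubuntu releases):

    net.ipv4.conf.default.rp_filter=1
    net.ipv4.conf.all.rp_filter=1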

Install containerd and modify its default configuration

Use the following commands to install containerd on your edge node which will run a WasmEdge simple demo.

    export VERSION="1.5.7"
    echo -e "Version: $VERSION"
    echo -e "Installing libseccomp2 ..."
    sudo apt install -y libseccomp2
    echo -e "Installing wget"
    sudo apt install -y wget
    wget https://github.com/containerd/containerd/releases/download/v${VERSION}/cri-containerd-cni-${VERSION}-linux-amd64.tar.gz
    wget https://github.com/containerd/containerd/releases/download/v${VERSION}/cri-containerd-cni-${VERSION}-linux-amd64.tar.gz.sha256sum
    sha256sum --check cri-containerd-cni-${VERSION}-linux-amd64.tar.gz.sha256sum
    sudo tar --no-overwrite-dir -C / -xzf cri-containerd-cni-${VERSION}-linux-amd64.tar.gz
    sudo systemctl daemon-reload

As the crun project supports WasmEdge by default, we just need to adjust the containerd configuration for the runc runtime: point the runc runtime binary in /etc/containerd/config.toml at crun and add the pod_annotations setting. The relevant part of the resulting configuration is shown after the commands below.

    sudo mkdir -p /etc/containerd/
    sudo bash -c "containerd config default > /etc/containerd/config.toml"
    wget https://raw.githubusercontent.com/second-state/wasmedge-containers-examples/main/containerd/containerd_config.diff
    sudo patch -d/ -p0 < containerd_config.diff
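
Once the patch is applied, the runc runtime section of /etc/containerd/config.toml should look roughly like the sketch below. The authoritative annotation patterns come from the downloaded diff, so treat this as an illustration rather than the exact content.

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      # pass the WasmEdge annotation through to the runtime
      pod_annotations = ["module.wasm.image/variant.*"]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        # use the crun binary (built below) instead of runc
        BinaryName = "crun"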

After that, restart containerd to make the configuration take effect.

    sudo systemctl restart containerd

Install WasmEdge

Use the simple install script to install WasmEdge on your edge node.

    curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash
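
The script installs WasmEdge under $HOME/.wasmedge by default. As an optional sanity check (assuming that default install location), load the generated environment file and print the version:

    source $HOME/.wasmedge/env
    wasmedge --version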

Build and install crun

We need a crun binary that supports WasmEdge on the edge node. For now, the most straightforward approach is to build it yourself from source. First, let’s make sure the crun build dependencies are installed on your Ubuntu 20.04. For other Linux distributions, please see here.

  • Install the dependencies required for the build.
    sudo apt update
    sudo apt install -y make git gcc build-essential pkgconf libtool \
      libsystemd-dev libprotobuf-c-dev libcap-dev libseccomp-dev libyajl-dev \
      go-md2man libtool autoconf python3 automake
  • Configure, build, and install a crun binary with WasmEdge support.
    git clone https://github.com/containers/crun
    cd crun
    ./autogen.sh
    ./configure --with-wasmedge
    make
    sudo make install
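
To confirm the freshly built crun was compiled with WasmEdge support, check its version output; builds with WebAssembly support typically list a wasmedge handler among the features (the exact wording varies by crun version):

    crun --version
    # expect something like "+WASM:wasmedge" in the feature line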

Set up an OpenYurt Cluster from scratch

In this demo, we will use two machines to set up an OpenYurt Cluster. One machine simulates the cloud node and is called Master; the other simulates the edge node and is called Node. These two nodes form the simplest OpenYurt Cluster, on which the OpenYurt components run.

Set up a K8s Cluster

Kubernetes version 1.18.9

    $ sudo apt-get update && sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https
    # add the K8s apt source
    $ curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
    $ sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
    deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
    EOF
    # install the K8s components 1.18.9
    $ sudo apt-get update && sudo apt-get install -y kubelet=1.18.9-00 kubeadm=1.18.9-00 kubectl=1.18.9-00
    # initialize the master node
    $ sudo kubeadm init --pod-network-cidr 172.16.0.0/16 \
        --apiserver-advertise-address=192.168.3.167 \
        --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
    # join the worker node
    $ sudo kubeadm join 192.168.3.167:6443 --token 3zefbt.99e6denc1cxpk9fg \
        --discovery-token-ca-cert-hash sha256:8077d4e7dd6eee64a999d56866ae4336073ed5ffc3f23281d757276b08b9b195
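
On the master node, kubectl needs the admin kubeconfig before it can reach the new cluster. This is the standard post-init step printed by kubeadm itself, repeated here for completeness:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config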

Install yurtctl

Use the following command line to install yurtctl. The yurtctl CLI tool helps install/uninstall OpenYurt and also convert a standard Kubernetes cluster to an OpenYurt cluster.

    git clone https://github.com/openyurtio/openyurt.git
    cd openyurt
    make build WHAT=cmd/yurtctl
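
The build places the yurtctl binary under the repository’s _output directory (the exact subdirectory varies between OpenYurt versions). Copying it onto your PATH makes the later steps shorter; the path below is only an example, so adjust it to where your build actually put the binary:

    # example path only; check the _output directory of your build
    sudo cp _output/bin/yurtctl /usr/local/bin/
    yurtctl --help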

Install OpenYurt components

OpenYurt includes several components. YurtHub is the traffic proxy between the components on the node and kube-apiserver; the YurtHub on the edge caches the data returned from the cloud. The Yurt controller supplements the upstream node controller to support edge computing requirements. TunnelServer connects with the TunnelAgent daemon running on each edge node via a reverse proxy to establish secure network access between the cloud-side control plane and the edge nodes connected to the intranet. For more detailed information, refer to the OpenYurt docs.

    yurtctl convert --deploy-yurttunnel --cloud-nodes oy-master --provider kubeadm \
      --yurt-controller-manager-image="openyurt/yurt-controller-manager:v0.5.0" \
      --yurt-tunnel-agent-image="openyurt/yurt-tunnel-agent:v0.5.0" \
      --yurt-tunnel-server-image="openyurt/yurt-tunnel-server:v0.5.0" \
      --node-servant-image="openyurt/node-servant:latest" \
      --yurthub-image="openyurt/yurthub:v0.5.0"

Note that we set the node-servant image to the latest version here: --node-servant-image="openyurt/node-servant:latest".

In fact, version 0.6.0 of the OpenYurt components is recommended and has been verified to run the WasmEdge demo successfully. For how to install OpenYurt 0.6.0, you can see this.
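
Once the conversion finishes, you can confirm that the OpenYurt components (yurt-hub, yurt-controller-manager, and the yurt-tunnel server/agent) are running with an ordinary kubectl query:

    kubectl get pods -A | grep yurt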

Use OpenYurt Experience Center to quickly set up an OpenYurt Cluster

An easier way to set up an OpenYurt Cluster is to use the OpenYurt Experience Center. All you need to do is sign up for a test account, and then you will get an OpenYurt cluster. Next, you can just use the yurtctl join command to join an edge node, as sketched below. See more OpenYurt Experience Center details here.
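
A yurtctl join invocation looks roughly like the following. The API server address and token are placeholders that you get from the OpenYurt Experience Center, and the exact flags may differ slightly between OpenYurt versions:

    # <apiserver-ip> and <token> are placeholders from your test account
    sudo ./yurtctl join <apiserver-ip>:6443 --token=<token> \
        --node-type=edge --discovery-token-unsafe-skip-ca-verification --v=5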

Run a simple WebAssembly app

Next, let’s run a WebAssembly program through the OpenYurt cluster as a container in a pod. This section starts off by pulling a WebAssembly-based container image from Docker Hub. If you want to learn how to compile, package, and publish the WebAssembly program as a container image to Docker Hub, please refer to the WasmEdge Book.

One thing to note: because kubectl run in version 1.18.9 lacks the --annotations parameter, we need to adjust the command line here. If you are using the OpenYurt Experience Center with OpenYurt 0.6.0 and Kubernetes 1.20.11 by default, please refer to the Kubernetes sections of the WasmEdge book to run the wasm app.

    # kubectl 1.18.9
    $ sudo kubectl run -it --rm --restart=Never wasi-demo --image=hydai/wasm-wasi-example:with-wasm-annotation --overrides='{"kind":"Pod", "apiVersion":"v1", "metadata":{"annotations":{"module.wasm.image/variant":"compat"}}, "spec": {"hostNetwork": true}}' /wasi_example_main.wasm 50000000
    # kubectl 1.20.11
    $ sudo kubectl run -it --rm --restart=Never wasi-demo --image=hydai/wasm-wasi-example:with-wasm-annotation --annotations="module.wasm.image/variant=compat" --overrides='{"kind":"Pod", "apiVersion":"v1", "spec": {"hostNetwork": true}}' /wasi_example_main.wasm 50000000

The output from the containerized application is printed into the console. It is the same for all Kubernetes versions.

    Random number: 1123434661
    Random bytes: [25, 169, 202, 211, 22, 29, 128, 133, 168, 185, 114, 161, 48, 154, 56, 54, 99, 5, 229, 161, 225, 47, 85, 133, 90, 61, 156, 86, 3, 14, 10, 69, 185, 225, 226, 181, 141, 67, 44, 121, 157, 98, 247, 148, 201, 248, 236, 190, 217, 245, 131, 68, 124, 28, 193, 143, 215, 32, 184, 50, 71, 92, 148, 35, 180, 112, 125, 12, 152, 111, 32, 30, 86, 15, 107, 225, 39, 30, 178, 215, 182, 113, 216, 137, 98, 189, 72, 68, 107, 246, 108, 210, 148, 191, 28, 40, 233, 200, 222, 132, 247, 207, 239, 32, 79, 238, 18, 62, 67, 114, 186, 6, 212, 215, 31, 13, 53, 138, 97, 169, 28, 183, 235, 221, 218, 81, 84, 235]
    Printed from wasi: This is from a main function
    This is from a main function
    The env vars are as follows.
    The args are as follows.
    /wasi_example_main.wasm
    50000000
    File content is This is in a file
    pod "wasi-demo" deleted

You can now check the container status on the edge node through the containerd CLI (crictl).

    sudo crictl ps -a

You can see that the WebAssembly workload was scheduled, ran, and exited on the edge node.

    CONTAINER       IMAGE           CREATED         STATE    NAME        ATTEMPT  POD ID
    0c176ed65599a   0423b8eb71e31   8 seconds ago   Exited   wasi-demo