Create a crun demo for KubeEdge

1. Setup Cloud Side (KubeEdge Master Node)

Install Go

    $ wget https://golang.org/dl/go1.17.3.linux-amd64.tar.gz
    $ tar xzvf go1.17.3.linux-amd64.tar.gz
    $ export PATH=/home/${user}/go/bin:$PATH
    $ go version
    go version go1.17.3 linux/amd64

Install CRI-O

Please see CRI-O Installation Instructions.

    # Create the .conf file to load the modules at bootup
    cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
    overlay
    br_netfilter
    EOF

    sudo modprobe overlay
    sudo modprobe br_netfilter

    # Set up required sysctl params; these persist across reboots.
    cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF

    sudo sysctl --system

    export OS="xUbuntu_20.04"
    export VERSION="1.21"

    # Add the libcontainers and CRI-O apt repositories and their signing keys.
    cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
    deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
    EOF
    cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
    deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
    EOF
    curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
    curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -

    # Install and start CRI-O.
    sudo apt-get update
    sudo apt-get install cri-o cri-o-runc
    sudo systemctl daemon-reload
    sudo systemctl enable crio --now
    sudo systemctl status cri-o

Output:

    $ sudo systemctl status cri-o
    crio.service - Container Runtime Interface for OCI (CRI-O)
         Loaded: loaded (/lib/systemd/system/crio.service; enabled; vendor preset: enabled)
         Active: active (running) since Mon 2021-12-06 13:46:29 UTC; 16h ago
           Docs: https://github.com/cri-o/cri-o
       Main PID: 6868 (crio)
          Tasks: 14
         Memory: 133.2M
         CGroup: /system.slice/crio.service
                 └─6868 /usr/bin/crio

    Dec 07 06:04:13 master crio[6868]: time="2021-12-07 06:04:13.694226800Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=1dbb722e-f031-410c-9f45-5d4b5760163e name=/runtime.v1alpha2.ImageServic>
    Dec 07 06:04:13 master crio[6868]: time="2021-12-07 06:04:13.695739507Z" level=info msg="Image status: &{0xc00047fdc0 map[]}" id=1dbb722e-f031-410c-9f45-5d4b5760163e name=/runtime.v1alpha2.ImageService/ImageSta>
    Dec 07 06:09:13 master crio[6868]: time="2021-12-07 06:09:13.698823984Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=661b754b-48a4-401b-a03f-7f7a553c7eb6 name=/runtime.v1alpha2.ImageServic>
    Dec 07 06:09:13 master crio[6868]: time="2021-12-07 06:09:13.703259157Z" level=info msg="Image status: &{0xc0004d98f0 map[]}" id=661b754b-48a4-401b-a03f-7f7a553c7eb6 name=/runtime.v1alpha2.ImageService/ImageSta>
    Dec 07 06:14:13 master crio[6868]: time="2021-12-07 06:14:13.707778419Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=8c7e4d36-871a-452e-ab55-707053604077 name=/runtime.v1alpha2.ImageServic>
    Dec 07 06:14:13 master crio[6868]: time="2021-12-07 06:14:13.709379469Z" level=info msg="Image status: &{0xc000035030 map[]}" id=8c7e4d36-871a-452e-ab55-707053604077 name=/runtime.v1alpha2.ImageService/ImageSta>
    Dec 07 06:19:13 master crio[6868]: time="2021-12-07 06:19:13.713158978Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=827b6315-f145-4f76-b8da-31653d5892a2 name=/runtime.v1alpha2.ImageServic>
    Dec 07 06:19:13 master crio[6868]: time="2021-12-07 06:19:13.714030148Z" level=info msg="Image status: &{0xc000162bd0 map[]}" id=827b6315-f145-4f76-b8da-31653d5892a2 name=/runtime.v1alpha2.ImageService/ImageSta>
    Dec 07 06:24:13 master crio[6868]: time="2021-12-07 06:24:13.716746612Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=1d53a917-4d98-4723-9ea8-a2951a472cff name=/runtime.v1alpha2.ImageServic>
    Dec 07 06:24:13 master crio[6868]: time="2021-12-07 06:24:13.717381882Z" level=info msg="Image status: &{0xc00042ce00 map[]}" id=1d53a917-4d98-4723-9ea8-a2951a472cff name=/runtime.v1alpha2.ImageService/ImageSta>

Install K8s and Create a Cluster with kubeadm

Please see Creating a cluster with kubeadm.

Install K8s

    sudo apt-get update
    sudo apt-get install -y apt-transport-https curl
    # Download the Google Cloud public signing key referenced by signed-by below
    sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
    echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt update
    K_VER="1.21.0-00"
    sudo apt install -y kubelet=${K_VER} kubectl=${K_VER} kubeadm=${K_VER}
    sudo apt-mark hold kubelet kubeadm kubectl

Create a cluster with kubeadm

    # The kubelet requires swap to be disabled.
    $ sudo swapoff -a
    # Comment out the swap entry in /etc/fstab so swap stays disabled after a reboot.
    $ sudo vim /etc/fstab

    # CRI-O ships a default bridge CNI config; its subnet is used as the pod network CIDR below.
    $ cat /etc/cni/net.d/100-crio-bridge.conf
    {
        "cniVersion": "0.3.1",
        "name": "crio",
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": {
            "type": "host-local",
            "routes": [
                { "dst": "0.0.0.0/0" },
                { "dst": "1100:200::1/24" }
            ],
            "ranges": [
                [{ "subnet": "10.85.0.0/16" }],
                [{ "subnet": "1100:200::/24" }]
            ]
        }
    }

    $ export CIDR=10.85.0.0/16
    $ sudo kubeadm init --apiserver-advertise-address=192.168.122.160 --pod-network-cidr=$CIDR --cri-socket=/var/run/crio/crio.sock
    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Output:

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a Pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      /docs/concepts/cluster-administration/addons/

    You can now join any number of machines by running the following on each node
    as root:

    kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
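
As a quick, optional sanity check that kubectl now works for the regular user, you can list the nodes; the master node should appear:

    $ kubectl get nodes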

Setup KubeEdge Master Node

Please see Deploying using Keadm.

IMPORTANT NOTE:

  1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other information of the K8s cluster.
  2. Please make sure the edge node can connect to the cloud node using the local IP of the cloud node, or you need to specify the public IP of the cloud node with the --advertise-address flag.
  3. --advertise-address (only works since the 1.3 release) is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate); the default value is the local IP.
    wget https://github.com/kubeedge/kubeedge/releases/download/v1.8.0/keadm-v1.8.0-linux-amd64.tar.gz
    tar xzvf keadm-v1.8.0-linux-amd64.tar.gz
    cd keadm-v1.8.0-linux-amd64/keadm/
    sudo ./keadm init --advertise-address=192.168.122.160 --kube-config=/home/${user}/.kube/config

Output:

    Kubernetes version verification passed, KubeEdge installation will start...
    ...
    KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log

2. Setup Edge Side (KubeEdge Worker Node)

You can use the CRI-O install.sh script to install CRI-O and crun on Ubuntu 20.04.

    wget -qO- https://raw.githubusercontent.com/second-state/wasmedge-containers-examples/main/crio/install.sh | bash
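
To double-check that the script installed crun and registered it with CRI-O, you can look for the binary and for runtime entries in the CRI-O configuration. This is an optional sanity check, not part of the official instructions, and the exact file the script writes may differ on your system:

    # The crun binary should be on the PATH.
    $ which crun
    # CRI-O's config (or a drop-in under /etc/crio/crio.conf.d/) should reference crun,
    # typically as default_runtime plus a [crio.runtime.runtimes.crun] table.
    $ sudo grep -rn "crun" /etc/crio/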

Install Go on Edge Side

    $ wget https://golang.org/dl/go1.17.3.linux-amd64.tar.gz
    $ tar xzvf go1.17.3.linux-amd64.tar.gz
    $ export PATH=/home/${user}/go/bin:$PATH
    $ go version
    go version go1.17.3 linux/amd64

Get Token From Cloud Side

Running keadm gettoken on the cloud side returns the token, which will be used when joining edge nodes.

    $ sudo ./keadm gettoken --kube-config=/home/${user}/.kube/config
    27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE

Download KubeEdge and Join Edge Nodes

Please see Setting different container runtime with CRI and Deploying using Keadm.

    $ wget https://github.com/kubeedge/kubeedge/releases/download/v1.8.0/keadm-v1.8.0-linux-amd64.tar.gz
    $ tar xzvf keadm-v1.8.0-linux-amd64.tar.gz
    $ cd keadm-v1.8.0-linux-amd64/keadm/
    $ sudo ./keadm join \
        --cloudcore-ipport=192.168.122.160:10000 \
        --edgenode-name=edge \
        --token=b4550d45b773c0480446277eed1358dcd8a02a0c214646a8082d775f9c447d81.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2Mzg4ODUzNzd9.A9WOYJFrgL2swVGnydpb4gMojyvyoNPCXaA4rXGowqU \
        --remote-runtime-endpoint=unix:///var/run/crio/crio.sock \
        --runtimetype=remote \
        --cgroupdriver=systemd

Output:

    Host has mosquit+ already installed and running. Hence skipping the installation steps !!!
    ...
    KubeEdge edgecore is running, For logs visit: /var/log/kubeedge/edgecore.log

Get Edge Node Status From Cloud Side

Output:

    kubectl get node
    NAME     STATUS   ROLES                  AGE   VERSION
    edge     Ready    agent,edge             10s   v1.19.3-kubeedge-v1.8.2
    master   Ready    control-plane,master   68m   v1.21.0
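
To confirm from the cloud side that the edge node registered with the CRI-O runtime, you can inspect its node object. This is an optional check; the exact version string will vary:

    $ kubectl describe node edge | grep "Container Runtime Version"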

3. Enable kubectl logs Feature

Before deploying metrics-server, the kubectl logs feature must be activated; please see here.
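
For reference, the linked instructions boil down to generating stream certificates on the cloud side and enabling the stream tunnels in both configs. The following is a condensed, unofficial sketch; the certgen.sh location and config paths follow KubeEdge defaults and may differ in your setup:

    # On the cloud side: expose the CloudCore IP and generate stream certificates.
    # certgen.sh ships in the kubeedge repo under build/tools.
    $ export CLOUDCOREIPS="192.168.122.160"
    $ sudo -E /etc/kubeedge/certgen.sh stream
    # Then set cloudStream.enable to true in /etc/kubeedge/config/cloudcore.yaml,
    # set edgeStream.enable to true in /etc/kubeedge/config/edgecore.yaml on the edge node,
    # and restart cloudcore and edgecore for the changes to take effect.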

4. Run a simple WebAssembly app

We can run the WebAssembly-based image from Docker Hub in the Kubernetes cluster.

Cloud Side

    $ kubectl run -it --restart=Never wasi-demo --image=hydai/wasm-wasi-example:with-wasm-annotation --annotations="module.wasm.image/variant=compat" /wasi_example_main.wasm 50000000
    Random number: -1694733782
    Random bytes: [6, 226, 176, 126, 136, 114, 90, 2, 216, 17, 241, 217, 143, 189, 123, 197, 17, 60, 49, 37, 71, 69, 67, 108, 66, 39, 105, 9, 6, 72, 232, 238, 102, 5, 148, 243, 249, 183, 52, 228, 54, 176, 63, 249, 216, 217, 46, 74, 88, 204, 130, 191, 182, 19, 118, 193, 77, 35, 189, 6, 139, 68, 163, 214, 231, 100, 138, 246, 185, 47, 37, 49, 3, 7, 176, 97, 68, 124, 20, 235, 145, 166, 142, 159, 114, 163, 186, 46, 161, 144, 191, 211, 69, 19, 179, 241, 8, 207, 8, 112, 80, 170, 33, 51, 251, 33, 105, 0, 178, 175, 129, 225, 112, 126, 102, 219, 106, 77, 242, 104, 198, 238, 193, 247, 23, 47, 22, 29]
    Printed from wasi: This is from a main function
    This is from a main function
    The env vars are as follows.
    The args are as follows.
    /wasi_example_main.wasm
    50000000
    File content is This is in a file

The pod with the WebAssembly app has been successfully deployed to the edge node.

    $ kubectl describe pod wasi-demo
    Name:         wasi-demo
    Namespace:    default
    Priority:     0
    Node:         edge/192.168.122.229
    Start Time:   Mon, 06 Dec 2021 15:45:34 +0000
    Labels:       run=wasi-demo
    Annotations:  module.wasm.image/variant: compat
    Status:       Succeeded
    IP:
    IPs:          <none>
    Containers:
      wasi-demo:
        Container ID:  cri-o://1ae4d0d7f671050331a17e9b61b5436bf97ad35ad0358bef043ab820aed81069
        Image:         hydai/wasm-wasi-example:with-wasm-annotation
        Image ID:      docker.io/hydai/wasm-wasi-example@sha256:525aab8d6ae8a317fd3e83cdac14b7883b92321c7bec72a545edf276bb2100d6
        Port:          <none>
        Host Port:     <none>
        Args:
          /wasi_example_main.wasm
          50000000
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Mon, 06 Dec 2021 15:45:33 +0000
          Finished:     Mon, 06 Dec 2021 15:45:33 +0000
        Ready:          False
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bhszr (ro)
    Conditions:
      Type           Status
      Initialized    True
      Ready          False
      PodScheduled   True
    Volumes:
      kube-api-access-bhszr:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type    Reason   Age   From   Message
      ----    ------   ----  ----   -------

Edge Side

    $ sudo crictl ps -a
    CONTAINER       IMAGE                                                                                           CREATED          STATE     NAME         ATTEMPT   POD ID
    1ae4d0d7f6710   0423b8eb71e312b8aaa09a0f0b6976381ff567d5b1e5729bf9b9aa87bff1c9f3                                16 minutes ago   Exited    wasi-demo    0         2bc2ac0c32eda
    1e6c7cb6bc731   k8s.gcr.io/kube-proxy@sha256:2a25285ff19f9b4025c8e54dac42bb3cd9aceadc361f2570489b8d723cb77135   18 minutes ago   Running   kube-proxy   0         8b7e7388ad866
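
Since the kubectl logs feature was enabled in section 3, you should also be able to fetch the app's output from the cloud side instead of logging in to the edge node:

    $ kubectl logs wasi-demo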

That’s it.

5. Demo Run Screen Recording

asciicast