Join Nodes

There are two ways to join nodes to an OpenYurt cluster, depending on the node's current status.

1. Joining nodes from scratch

1.1 yurtadm join

Users can join cloud nodes and edge nodes to the OpenYurt cluster using yurtadm join. Note that before joining a node, the container runtime must be installed on it and the swap partition must be turned off.
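For example, on a typical Linux node, swap can be turned off like this (a sketch; adjust the fstab edit to your own environment):

  # turn off swap immediately
  swapoff -a
  # keep it off across reboots by commenting out swap entries in /etc/fstab
  sed -i '/ swap / s/^/#/' /etc/fstab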

Execute the following command to join an edge node to the cluster:

  $ _output/local/bin/linux/amd64/yurtadm join 1.2.3.4:6443 --token=zffaj3.a5vjzf09qn9ft3gt --node-type=edge --discovery-token-unsafe-skip-ca-verification --v=5

Execute the following command to join a cloud node to the cluster:

  $ _output/local/bin/linux/amd64/yurtadm join 1.2.3.4:6443 --token=zffaj3.a5vjzf09qn9ft3gt --node-type=cloud --discovery-token-unsafe-skip-ca-verification --v=5

When the runtime of the edge node is containerd, the cri-socket parameter needs to be configured. For example, change the edge node join command above to:

  $ _output/local/bin/linux/amd64/yurtadm join 1.2.3.4:6443 --token=zffaj3.a5vjzf09qn9ft3gt --node-type=edge --discovery-token-unsafe-skip-ca-verification --cri-socket=/run/containerd/containerd.sock --v=5
  • For how to compile the yurtadm binary, please refer to the link here

Explanation of parameters:

  • 1.2.3.4:6443: the address of the apiserver
  • --token: bootstrap token
  • --node-type: OpenYurt node type, can be cloud or edge

The process of yurtadm join will automatically install the following k8s components:

  • kubeadm
  • kubectl
  • kubelet
  • kube-proxy

The yurtadm join process will pull specially modified CNI binaries; the modifications can be found here. If you want to use CNI binaries that were prepared beforehand, place them under the /opt/cni/bin directory and then pass --reuse-cni-bin=true to the yurtadm join command, as sketched below.
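For example, assuming the CNI binaries were already copied to /opt/cni/bin, the edge join command might look like this (a sketch based on the flags described above):

  # CNI binaries prepared beforehand must already be under /opt/cni/bin
  $ ls /opt/cni/bin
  # reuse them instead of pulling the modified CNI binaries
  $ _output/local/bin/linux/amd64/yurtadm join 1.2.3.4:6443 --token=zffaj3.a5vjzf09qn9ft3gt --node-type=edge --discovery-token-unsafe-skip-ca-verification --reuse-cni-bin=true --v=5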

You can also pre-place the kubelet and kubeadm components in a directory listed in the PATH environment variable. However, there are restrictions on the kubelet and kubeadm versions: yurtadm checks that their major and minor versions match the cluster's Kubernetes version (following the semver specification), as in the check sketched below.
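For example, a quick pre-check might look like this (a sketch; it assumes the cluster runs Kubernetes v1.20.11 as in the examples later in this page, and exact output formats may vary by version):

  # kubelet/kubeadm were pre-placed in a directory on PATH
  $ kubelet --version
  Kubernetes v1.20.11
  $ kubeadm version -o short
  v1.20.11
  # the major.minor part (here 1.20) must match the cluster's Kubernetes version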

1.2 yurtadm reset

yurtadm reset can be used when it is necessary to delete a node that was joined using yurtadm join. Here are the detailed steps:

On the master:

  kubectl drain {NodeName} --delete-local-data --force --ignore-daemonsets
  kubectl delete node {NodeName}

On the joined node:

  1. execute yurtadm reset:
     yurtadm reset
  2. delete the /etc/cni/net.d dir:
     rm -rf /etc/cni/net.d

2. Install OpenYurt node components

You should install OpenYurt node components only on nodes that have already joined the Kubernetes cluster.

2.1 Label your node

OpenYurt distinguishes cloud nodes from edge nodes through the node label openyurt.io/is-edge-worker, which it uses to decide whether Pods on the node may be evicted. Assume we have an edge node named us-west-1.192.168.0.88.

  $ kubectl label node us-west-1.192.168.0.88 openyurt.io/is-edge-worker=true
  node/us-west-1.192.168.0.88 labeled

If us-west-1.192.168.0.88 is a cloud node, you should set the label to false instead.
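For example (a minimal sketch; the --overwrite flag is only needed if the label is already set on the node):

  $ kubectl label node us-west-1.192.168.0.88 openyurt.io/is-edge-worker=false --overwrite
  node/us-west-1.192.168.0.88 labeled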

To further activate the node autonomous mode, we add an annotation to this edge node:

  1. $ kubectl annotate node us-west-1.192.168.0.88 node.beta.openyurt.io/autonomy=true
  2. node/us-west-1.192.168.0.88 annotated

Also, if you want to take advantage of OpenYurt's unitization capability, you can add this node to a NodePool.

  $ cat <<EOF | kubectl apply -f -
  apiVersion: apps.openyurt.io/v1alpha1
  kind: NodePool
  metadata:
    name: worker
  spec:
    type: Edge
  EOF
  $ kubectl label node us-west-1.192.168.0.87 apps.openyurt.io/desired-nodepool=worker
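To confirm the pool exists and the node was picked up, you can query the NodePool resource and the node label; for example (a sketch — the exact columns depend on the OpenYurt version installed):

  # the NodePool custom resource is served by the apps.openyurt.io API group
  $ kubectl get nodepool worker
  $ kubectl get node us-west-1.192.168.0.87 --show-labels | grep nodepool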

2.2 Setup Yurthub

Before proceeding, we need to prepare the following items:

  1. Get the apiserver’s address (i.e., ip:port) and a bootstrap token, which will be used to replace the placeholder in the template file config/setup/yurthub.yaml.
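If you do not already have a bootstrap token, one way to create one is on the master with kubeadm (a sketch; tokens created this way expire after 24 hours by default):

  # run on the master; prints a token in the form abcdef.0123456789abcdef
  $ kubeadm token create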

In the following command, we assume that the address of the apiserver is 1.2.3.4:5678 and the bootstrap token is 07401b.f395accd246ae52d:

  $ cat config/setup/yurthub.yaml |
  sed 's|__kubernetes_master_address__|1.2.3.4:5678|;
  s|__bootstrap_token__|07401b.f395accd246ae52d|' > /tmp/yurthub-ack.yaml &&
  scp -i <your-ssh-identity-file> /tmp/yurthub-ack.yaml root@us-west-1.192.168.0.88:/etc/kubernetes/manifests
and the Yurthub will be ready in minutes.
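To check that Yurthub has started, you can look for the yurt-hub static pod of that node; for example (illustrative output):

  $ kubectl get pod -n kube-system | grep yurt-hub
  yurt-hub-us-west-1.192.168.0.88   1/1   Running   0   2m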

2.3 Configure Kubelet

We need to reconfigure the kubelet service so that it accesses the apiserver through Yurthub (the following steps assume that we are logged on to the edge node as the root user). Since the kubelet will connect to Yurthub through HTTP, we create a new kubeconfig file for the kubelet service.

  mkdir -p /var/lib/openyurt
  cat << EOF > /var/lib/openyurt/kubelet.conf
  apiVersion: v1
  clusters:
  - cluster:
      server: http://127.0.0.1:10261
    name: default-cluster
  contexts:
  - context:
      cluster: default-cluster
      namespace: default
      user: default-auth
    name: default-context
  current-context: default-context
  kind: Config
  preferences: {}
  EOF

To let the kubelet use the new kubeconfig, we edit the drop-in file of the kubelet service (i.e., /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, or /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf on CentOS):

  sed -i "s|KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=\/etc\/kubernetes\/bootstrap-kubelet.conf\ --kubeconfig=\/etc\/kubernetes\/kubelet.conf|KUBELET_KUBECONFIG_ARGS=--kubeconfig=\/var\/lib\/openyurt\/kubelet.conf|g" \
  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
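Before restarting the kubelet, you can confirm that the drop-in file now points at the new kubeconfig; for example (expected output shown as a rough sketch):

  $ grep KUBELET_KUBECONFIG_ARGS /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/var/lib/openyurt/kubelet.conf"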

Then, we restart the kubelet service:

  # assume we are logged in to the edge node already
  $ systemctl daemon-reload && systemctl restart kubelet

Finally, we need to make sure the node is ready after the kubelet restart.

  $ kubectl get nodes
  NAME                     STATUS   ROLES    AGE     VERSION
  us-west-1.192.168.0.87   Ready    <none>   3d23h   v1.20.11
  us-west-1.192.168.0.88   Ready    <none>   3d23h   v1.20.11

2.4 Restart Pods

After Yurthub is installed and the kubelet is restarted, all pods on this edge node should be recreated to make sure they access the kube-apiserver through Yurthub. Before performing this operation, confirm its impact on the production environment.

  $ kubectl get pod -A -o wide | grep us-west-1.192.168.0.88
  kube-system   yurt-hub-us-west-1.192.168.0.88   1/1   Running   0   19d   172.16.0.32    us-west-1.192.168.0.88   <none>   <none>
  kube-system   coredns-qq6dk                     1/1   Running   0   19d   10.148.2.197   us-west-1.192.168.0.88   <none>   <none>
  kube-system   kube-flannel-ds-j698r             1/1   Running   0   19d   172.16.0.32    us-west-1.192.168.0.88   <none>   <none>
  kube-system   kube-proxy-f5qvr                  1/1   Running   0   19d   172.16.0.32    us-west-1.192.168.0.88   <none>   <none>
  # then delete all pods above except the yurthub pod
  $ kubectl -n kube-system delete pod coredns-qq6dk kube-flannel-ds-j698r kube-proxy-f5qvr