Manual Conversion

This tutorial shows how to set up an OpenYurt cluster manually. The cluster used in this tutorial is a two-node ACK (version 1.14.8) cluster, and all the yaml files used in this tutorial can be found at config/setup/.

Label cloud nodes and edge nodes

When disconnected from the apiserver, only pods running on autonomous edge nodes are prevented from being evicted. Therefore, we first need to divide the nodes into two categories, cloud nodes and edge nodes, using the label openyurt.io/is-edge-worker. Assume that the given Kubernetes cluster has two nodes,

    $ kubectl get nodes
    NAME                     STATUS   ROLES    AGE     VERSION
    us-west-1.192.168.0.87   Ready    <none>   3d23h   v1.14.8-aliyun.1
    us-west-1.192.168.0.88   Ready    <none>   3d23h   v1.14.8-aliyun.1

and we will use node us-west-1.192.168.0.87 as the cloud node.

We label the cloud node with value false,

    $ kubectl label node us-west-1.192.168.0.87 openyurt.io/is-edge-worker=false
    node/us-west-1.192.168.0.87 labeled

and the edge node with value true.

    $ kubectl label node us-west-1.192.168.0.88 openyurt.io/is-edge-worker=true
    node/us-west-1.192.168.0.88 labeled

To activate the autonomous mode, we annotate the edge node with the following command,

    $ kubectl annotate node us-west-1.192.168.0.88 node.beta.openyurt.io/autonomy=true
    node/us-west-1.192.168.0.88 annotated
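
To double-check, we can list the edge nodes by label and read back the autonomy annotation (plain kubectl queries; the expected values follow from the commands above):

    $ kubectl get nodes -l openyurt.io/is-edge-worker=true
    $ kubectl get node us-west-1.192.168.0.88 \
        -o jsonpath='{.metadata.annotations.node\.beta\.openyurt\.io/autonomy}'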

Setup Yurt-controller-manager

Next, we need to deploy the Yurt controller manager, which prevents the apiserver from evicting pods running on autonomous edge nodes during disconnection.

    $ kubectl apply -f config/setup/yurt-controller-manager.yaml
    deployment.apps/yurt-controller-manager created
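
Before moving on, we can confirm that the controller manager is up (this sketch assumes the referenced yaml deploys it into kube-system, as the secret step in the note below suggests; check the yaml for the actual namespace):

    $ kubectl get deployment yurt-controller-manager -n kube-system
    $ kubectl get pods -n kube-system | grep yurt-controller-manager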

Note

Since Docker has turned on pull rate limiting for anonymous requests, you may encounter an error message like “You have reached your pull rate limit. xxxx”. In that case, you will need to create a docker-registry secret to pull the image.

    $ kubectl create secret docker-registry dockerpass --docker-username=your-docker-username --docker-password='your-docker-password' --docker-email='your-email-address' -n kube-system

Then edit config/setup/yurt-controller-manager.yaml to reference the secret:

    ...
    containers:
    - name: yurt-controller-manager
      image: openyurt/yurt-controller-manager:latest
      command:
      - yurt-controller-manager
    imagePullSecrets:
    - name: dockerpass

Disable the default nodelifecycle controller

To allow the yurt-controller-manager to work properly, we need to turn off the default nodelifecycle controller. The nodelifecycle controller can be disabled by restarting the kube-controller-manager with a proper --controllers option. Assume that the original option looks like --controllers=*,bootstrapsigner,tokencleaner; to disable the nodelifecycle controller, we change the option to --controllers=*,bootstrapsigner,tokencleaner,-nodelifecycle.

If the kube-controller-manager is deployed as a static pod on the master node, and you have the permission to log in to the master node, then the above change can be made by revising the file /etc/kubernetes/manifests/kube-controller-manager.yaml. After the revision, the kube-controller-manager will be restarted automatically.
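
For reference, assuming a kubeadm-style manifest, the relevant part of the revised file might look like the sketch below (the image tag and the remaining flags are placeholders; keep whatever your cluster already uses):

    # /etc/kubernetes/manifests/kube-controller-manager.yaml (sketch; other fields omitted)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-controller-manager
      namespace: kube-system
    spec:
      containers:
      - name: kube-controller-manager
        image: k8s.gcr.io/kube-controller-manager:v1.14.8
        command:
        - kube-controller-manager
        - --controllers=*,bootstrapsigner,tokencleaner,-nodelifecycle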

Setup Yurt-app-manager

Please refer to this document to set up Yurt-app-manager manually.

Setup Yurthub

After the Yurt controller manager is up and running, we will set up Yurthub as a static pod.

Before proceeding, we need to prepare the following items:

  1. Deploy the global settings (i.e., RBAC, configmap) for yurthub.

         $ kubectl apply -f config/setup/yurthub-cfg.yaml

  2. Get the apiserver's address (i.e., ip:port) and a bootstrap token, which will be used to replace the placeholders in the template file config/setup/yurthub.yaml. One way to obtain them is sketched right after this list.
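
If these are not at hand, on a kubeadm-based cluster they can typically be obtained as follows (a sketch; the availability of kubeadm on the master is an assumption):

    # print the apiserver address
    $ kubectl cluster-info
    # create a fresh bootstrap token (run on the master)
    $ kubeadm token create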

In the following command, we assume that the address of the apiserver is 1.2.3.4:5678 and the bootstrap token is 07401b.f395accd246ae52d:

    $ cat config/setup/yurthub.yaml |
        sed 's|__kubernetes_master_address__|1.2.3.4:5678|;
        s|__bootstrap_token__|07401b.f395accd246ae52d|' > /tmp/yurthub-ack.yaml &&
        scp -i <your-ssh-identity-file> /tmp/yurthub-ack.yaml root@us-west-1.192.168.0.88:/etc/kubernetes/manifests

and the Yurthub will be ready in minutes.
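
We can confirm this from the cloud side by looking for the Yurthub static pod (static pod names are suffixed with the node name, so the exact name in the output is an assumption):

    $ kubectl get pods -n kube-system | grep yurt-hub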

Setup Yurt-tunnel (Optional)

Please refer to this document to set up Yurttunnel manually.

Reset the Kubelet

By now, we have set up all required components for the OpenYurt cluster. Next, we only need to reset the kubelet service to let it access the apiserver through Yurthub. (The following steps assume that we are logged in to the edge node as the root user.) Because the kubelet will connect to Yurthub over http, we create a new kubeconfig file for the kubelet service.

    mkdir -p /var/lib/openyurt
    cat << EOF > /var/lib/openyurt/kubelet.conf
    apiVersion: v1
    clusters:
    - cluster:
        server: http://127.0.0.1:10261
      name: default-cluster
    contexts:
    - context:
        cluster: default-cluster
        namespace: default
        user: default-auth
      name: default-context
    current-context: default-context
    kind: Config
    preferences: {}
    EOF

To let the kubelet use the new kubeconfig, we edit the drop-in file of the kubelet service (i.e., /etc/systemd/system/kubelet.service.d/10-kubeadm.conf):

    sed -i "s|KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=\/etc\/kubernetes\/bootstrap-kubelet.conf\ --kubeconfig=\/etc\/kubernetes\/kubelet.conf|KUBELET_KUBECONFIG_ARGS=--kubeconfig=\/var\/lib\/openyurt\/kubelet.conf|g" \
        /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
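
It is worth verifying that the drop-in file now points the kubelet at the new kubeconfig before restarting anything:

    $ grep KUBELET_KUBECONFIG_ARGS /etc/systemd/system/kubelet.service.d/10-kubeadm.conf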

Finally, we restart the kubelet service

    # assume we are logged in to the edge node already
    $ systemctl daemon-reload && systemctl restart kubelet
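
To verify that the switch worked, we can check that the kubelet service is active on the edge node and that the node is still Ready from the cloud side (a sketch):

    # on the edge node
    $ systemctl status kubelet
    # from a machine with cluster access
    $ kubectl get node us-west-1.192.168.0.88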