Deploying using Keadm

Keadm is used to install the cloud and edge components of KubeEdge. It does not install Kubernetes or the container runtime.

Please refer to kubernetes-compatibility to check Kubernetes compatibility and determine which version of Kubernetes to install.

Limitations

  • Currently keadm supports Ubuntu and CentOS. Raspberry Pi support is in progress.
  • Superuser (root) rights are required to run keadm.
  • The keadm beta subcommand is available since v1.10.0. If you would like to use it, please use keadm v1.10.0 or above.

Install keadm

Run the command below to install keadm in one step.

  # docker run --rm kubeedge/installation-package:v1.10.0 cat /usr/local/bin/keadm > /usr/local/bin/keadm && chmod +x /usr/local/bin/keadm

Setup Cloud Side (KubeEdge Master Node)

By default, ports 10000 and 10002 on the cloudcore side need to be accessible to your edge nodes.
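Before joining an edge node, it can be worth confirming that both ports are actually reachable from the edge side. The sketch below is an optional helper, not part of keadm; `CLOUD_IP` is a placeholder you must replace with your cloudcore address.

```shell
#!/bin/bash
# Optional helper (not part of keadm): check that the cloudcore ports are
# reachable from this machine. CLOUD_IP is a placeholder - replace it.
check_ports() {
  local host="$1"; shift
  local port
  for port in "$@"; do
    # /dev/tcp is a bash pseudo-device; timeout avoids hanging on filtered ports.
    if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
      echo "port ${port} open"
    else
      echo "port ${port} closed"
    fi
  done
}

check_ports "${CLOUD_IP:-127.0.0.1}" 10000 10002
```

If a port reports closed, check firewalls and security-group rules between the edge node and the cloud node before running keadm join.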

keadm init

keadm init will install cloudcore, generate the certs and install the CRDs. It also provides a flag by which a specific version can be set.

IMPORTANT NOTE:

  1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other information of the Kubernetes cluster.
  2. Please make sure the edge node can connect to the cloud node using the local IP of the cloud node, or specify the public IP of the cloud node with the --advertise-address flag.
  3. --advertise-address (only works since the 1.3 release) is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate); the default value is the local IP.

    Example:

    # keadm init --advertise-address="THE-EXPOSED-IP"

    Output:

    Kubernetes version verification passed, KubeEdge installation will start...
    ...
    KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
  4. keadm init deploys cloudcore as a binary process running as a system service. If you want to deploy cloudcore as a container, please refer to keadm beta init below.
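Since --advertise-address is written into the SANs of the CloudCore certificate, you can confirm it took effect by inspecting the generated certificate. This is an optional check, assuming the default keadm certificate path; your installation may differ.

```shell
#!/bin/bash
# Optional check (assumption: default keadm cert path): print the Subject
# Alternative Names of the cloudcore server certificate, which should include
# the address passed via --advertise-address.
show_sans() {
  openssl x509 -in "$1" -noout -ext subjectAltName
}

# Guarded so it is a no-op on machines without a cloudcore installation.
if [ -f /etc/kubeedge/certs/server.crt ]; then
  show_sans /etc/kubeedge/certs/server.crt
fi
```

If the advertised IP is missing from the output, edge nodes will fail TLS verification when connecting to cloudcore.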

keadm beta init

keadm beta init installs cloudcore using the CloudCore Helm chart.

Example:

  # keadm beta init --advertise-address="THE-EXPOSED-IP" --set cloudcore-tag=v1.9.0 --kube-config=/root/.kube/config

IMPORTANT NOTE:
1. For the --set key=value flags supported by the cloudcore Helm chart, refer to the KubeEdge Cloudcore Helm Charts README.md.
2. You can start with one of keadm's built-in configuration profiles and then further customize the configuration for your specific needs. Currently, the built-in configuration profile keyword is version. Using version.yaml as the values.yaml template, you can create your own custom values file and add flags like --profile version=v1.9.0 --set key=value to use this profile.

The --external-helm-root flag installs external Helm charts such as EdgeMesh.

Example:

  # keadm beta init --set server.advertiseAddress="THE-EXPOSED-IP" --set server.nodeName=allinone --kube-config=/root/.kube/config --force --external-helm-root=/root/go/src/github.com/edgemesh/build/helm --profile=edgemesh

If you are familiar with the helm chart installation, please refer to KubeEdge Helm Charts.

keadm beta manifest generate

You can also get the manifests with keadm beta manifest generate.

Example:

  # keadm beta manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root/.kube/config > kubeedge-cloudcore.yaml

Add the --skip-crds flag to skip outputting the CRDs.

Setup Edge Side (KubeEdge Worker Node)

Get Token From Cloud Side

Running keadm gettoken on the cloud side returns the token, which is used when joining edge nodes.

  # keadm gettoken
  27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE
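The token embeds a JWT whose payload carries an expiry timestamp, which is useful to know when debugging join failures. The sketch below is an optional debugging aid, not a keadm command; it assumes the four-dot-separated `<ca-hash>.<jwt-header>.<jwt-payload>.<signature>` layout shown above, in which the payload is the third field.

```shell
#!/bin/bash
# Optional debugging aid: decode the expiry claim embedded in a keadm token.
# Assumes the <ca-hash>.<jwt-header>.<jwt-payload>.<signature> layout above.
token_payload() {
  local payload
  payload=$(echo "$1" | cut -d. -f3)
  # base64url may omit padding; restore it before decoding.
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do
    payload="${payload}="
  done
  # Translate base64url to standard base64, then decode.
  echo "$payload" | tr '_-' '/+' | base64 -d
}

token_payload "27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE"
```

For the token above this prints {"exp":1590216077}, i.e. the Unix timestamp at which the token expires. If the token has expired, run keadm gettoken again on the cloud side.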

Join Edge Node

keadm join will install edgecore and mqtt. It also provides a flag by which a specific version can be set.

Example:

  # keadm join --cloudcore-ipport=192.168.20.50:10000 --token=27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE

IMPORTANT NOTE:
1. The --cloudcore-ipport flag is mandatory.
2. If you want to apply the certificate for the edge node automatically, --token is needed.
3. The KubeEdge version used on the cloud and edge side should be the same.

Output:

  Host has mosquit+ already installed and running. Hence skipping the installation steps !!!
  ...
  KubeEdge edgecore is running, For logs visit: /var/log/kubeedge/edgecore.log

You can now also use keadm beta join for better integration steps.

Enable kubectl logs Feature

Before deploying metrics-server, the kubectl logs feature must be activated:

Note that if cloudcore is deployed using Helm:

  • The stream certs are generated automatically and the cloudStream feature is enabled by default, so steps 1-3 can be skipped unless customization is needed.
  • Step 4 is handled by the iptablesmanager component by default, so no manual operations are needed. Refer to the cloudcore helm values.
  • The cloudcore-related operations in steps 5-6 can also be skipped.

  1. Make sure you can find the Kubernetes ca.crt and ca.key files. If you set up your Kubernetes cluster with kubeadm, those files are in the /etc/kubernetes/pki/ directory.

    ls /etc/kubernetes/pki/
  2. Set the CLOUDCOREIPS environment variable to specify the IP address of cloudcore, or a VIP if you have a highly available cluster. Set CLOUDCORE_DOMAINS instead if Kubernetes uses domain names to communicate with cloudcore.

    export CLOUDCOREIPS="192.168.0.139"

    (Warning: you must use the same terminal to continue the work, or you will need to type this command again.) Check the environment variable with the following command:

    echo $CLOUDCOREIPS
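Because the variable only lives in the current shell, one option is to persist it in your shell profile so new terminals pick it up automatically. This is a sketch under assumptions: the rc file path depends on your shell, and the helper below is hypothetical, not part of KubeEdge.

```shell
#!/bin/bash
# Optional: persist CLOUDCOREIPS so new shells pick it up automatically.
# The target rc file is an assumption - use your own shell's profile.
persist_var() {
  local name="$1" value="$2" rcfile="$3"
  # Append only if not already present, to keep the rc file idempotent.
  if ! grep -q "^export ${name}=" "$rcfile" 2>/dev/null; then
    echo "export ${name}=\"${value}\"" >> "$rcfile"
  fi
}

persist_var CLOUDCOREIPS "192.168.0.139" "${RC_FILE:-$HOME/.bashrc}"
```

After editing the profile, run source ~/.bashrc (or open a new terminal) for the variable to take effect.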
  3. Generate the certificates for CloudStream on the cloud node. The generation script is not in /etc/kubeedge/, so we need to copy it from the repository cloned from GitHub. Change user to root:

    sudo su

    Copy certificates generation file from original cloned repository:

    cp $GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh /etc/kubeedge/

    Change directory to the kubeedge directory:

    cd /etc/kubeedge/

    Generate the certificates with certgen.sh:

    /etc/kubeedge/certgen.sh stream
  4. Set iptables on the host. (This command must be executed on every node where an apiserver is deployed; in this case, the master node. Execute it as root.) Run the following command on the host on which each apiserver runs:

    Note: You need to get the configmap first, which contains all the cloudcore ips and tunnel ports.

    kubectl get cm tunnelport -nkubeedge -oyaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      annotations:
        tunnelportrecord.kubeedge.io: '{"ipTunnelPort":{"192.168.1.16":10350, "192.168.1.17":10351},"port":{"10350":true, "10351":true}}'
      creationTimestamp: "2021-06-01T04:10:20Z"
    ...

    Then set the iptables for the multiple cloudcore instances on every node where an apiserver runs. The cloudcore IPs and tunnel ports should be taken from the configmap above.

    iptables -t nat -A OUTPUT -p tcp --dport $YOUR-TUNNEL-PORT -j DNAT --to $YOUR-CLOUDCORE-IP:10003
    iptables -t nat -A OUTPUT -p tcp --dport 10350 -j DNAT --to 192.168.1.16:10003
    iptables -t nat -A OUTPUT -p tcp --dport 10351 -j DNAT --to 192.168.1.17:10003

    If you are not sure whether you have set these iptables rules and want to clean them all up (if iptables is set up wrongly, it will block your kubectl logs feature), use the following command to clean up iptables:

    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
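With multiple cloudcore instances, writing the DNAT rules by hand is error-prone. The helper below is an optional sketch (not part of KubeEdge) that turns the ipTunnelPort map from the configmap annotation into the corresponding iptables commands. It only prints the commands, assuming the record format shown above, so you can review them before running anything as root.

```shell
#!/bin/bash
# Optional helper: generate the DNAT rules from the ipTunnelPort JSON map
# found in the tunnelport configmap. Prints the commands instead of running
# them, so they can be reviewed first.
gen_dnat_rules() {
  # Strip braces, quotes, and spaces, then split "ip:port" pairs one per line.
  echo "$1" | tr -d '{}" ' | tr ',' '\n' | while IFS=: read -r ip port; do
    [ -n "$ip" ] && [ -n "$port" ] && \
      echo "iptables -t nat -A OUTPUT -p tcp --dport ${port} -j DNAT --to ${ip}:10003"
  done
}

# Map copied from the configmap annotation above.
gen_dnat_rules '{"192.168.1.16":10350, "192.168.1.17":10351}'
```

Once the printed rules look correct, they can be executed, for example by piping the output through sudo sh on each apiserver node.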
  5. Modify /etc/kubeedge/config/cloudcore.yaml on the cloud node and /etc/kubeedge/config/edgecore.yaml on the edge node. Set cloudStream and edgeStream to enable: true. Change the server IP to the cloudcore IP (the same as $CLOUDCOREIPS).

    Open the YAML file in cloudcore:

    sudo nano /etc/kubeedge/config/cloudcore.yaml

    Modify the file in the following part (enable: true):

    cloudStream:
      enable: true
      streamPort: 10003
      tlsStreamCAFile: /etc/kubeedge/ca/streamCA.crt
      tlsStreamCertFile: /etc/kubeedge/certs/stream.crt
      tlsStreamPrivateKeyFile: /etc/kubeedge/certs/stream.key
      tlsTunnelCAFile: /etc/kubeedge/ca/rootCA.crt
      tlsTunnelCertFile: /etc/kubeedge/certs/server.crt
      tlsTunnelPrivateKeyFile: /etc/kubeedge/certs/server.key
      tunnelPort: 10004

    Open the YAML file in edgecore:

    sudo nano /etc/kubeedge/config/edgecore.yaml

    Modify the file in the following part (enable: true), (server: 192.168.0.139:10004):

    edgeStream:
      enable: true
      handshakeTimeout: 30
      readDeadline: 15
      server: 192.168.0.139:10004
      tlsTunnelCAFile: /etc/kubeedge/ca/rootCA.crt
      tlsTunnelCertFile: /etc/kubeedge/certs/server.crt
      tlsTunnelPrivateKeyFile: /etc/kubeedge/certs/server.key
      writeDeadline: 15
  6. Restart cloudcore and edgecore.

    sudo su

    cloudCore in process mode:

    pkill cloudcore
    nohup cloudcore > cloudcore.log 2>&1 &

    or cloudCore in kubernetes deployment mode:

    kubectl -n kubeedge rollout restart deployment cloudcore

    edgeCore:

    systemctl restart edgecore.service

    If you fail to restart edgecore, check whether that is caused by kube-proxy and, if so, kill it. KubeEdge rejects it by default; a replacement called EdgeMesh is used instead.

    Note: it is important to avoid kube-proxy being deployed on the edge node. There are two methods to achieve this:

    1. Add the following settings by calling kubectl edit daemonsets.apps -n kube-system kube-proxy:

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/edge
                    operator: DoesNotExist
    2. If you still want to run kube-proxy, tell edgecore not to check the environment by adding an environment variable in edgecore.service:

      sudo vi /etc/kubeedge/edgecore.service
      • Add the following line into the edgecore.service file:

        Environment="CHECK_EDGECORE_ENVIRONMENT=false"
        • The final file should look like this:

          [Unit]
          Description=edgecore.service

          [Service]
          Type=simple
          ExecStart=/root/cmd/ke/edgecore --logtostderr=false --log-file=/root/cmd/ke/edgecore.log
          Environment="CHECK_EDGECORE_ENVIRONMENT=false"

          [Install]
          WantedBy=multi-user.target

Support Metrics-server in Cloud

  1. This feature reuses the cloudstream and edgestream modules, so you also need to perform all steps of Enable kubectl logs Feature.

  2. Since the kubelet ports of edge nodes and cloud nodes are not the same, the current release of metrics-server (0.3.x) does not support automatic port identification (a 0.4.0 feature), so you need to compile the image from the master branch yourself for now.

    Git clone latest metrics server repository:

    git clone https://github.com/kubernetes-sigs/metrics-server.git

    Go to the metrics server directory:

    cd metrics-server

    Make the docker image:

    make container

    Check if you have this docker image:

    docker images

    REPOSITORY                                               TAG                                        IMAGE ID       CREATED          SIZE
    gcr.io/k8s-staging-metrics-server/metrics-server-amd64   6d92704c5a68cd29a7a81bce68e6c2230c7a6912   a24f71249d69   19 seconds ago   57.2MB
    metrics-server-kubeedge                                  latest                                     aef0fa7a834c   28 seconds ago   57.2MB

    Make sure you retag the image using its IMAGE ID so that it matches the image name in the yaml file.

    docker tag a24f71249d69 metrics-server-kubeedge:latest
  3. Apply the deployment yaml. For specific deployment documents, you can refer to https://github.com/kubernetes-sigs/metrics-server/tree/master/manifests.

    Note: the iptables rule below must be applied on the machine (to be exact, the network namespace, so metrics-server also needs to run in hostnetwork mode) that metrics-server runs on.

    iptables -t nat -A OUTPUT -p tcp --dport 10350 -j DNAT --to $CLOUDCOREIPS:10003

    (This iptables rule is vitally important: it directs requests for metrics data from edgecore:10250 through the tunnel between cloudcore and edgecore.)

    Before you deploy metrics-server, make sure that you deploy it on the node which has the apiserver deployed on it. In this case, that is the master node. As a consequence, the master node must be made schedulable with the following command:

    kubectl taint nodes --all node-role.kubernetes.io/master-

    Then, in the deployment yaml file (metrics-server-deployment.yaml), specify that metrics-server is deployed on the master node, using the hostname as the matched label:

    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  # Specify which label in [kubectl get nodes --show-labels] you want to match
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      # Specify the value in key
                      - charlie-latest

IMPORTANT NOTE:

  1. Metrics-server needs to use the hostnetwork network mode.

  2. Use the image compiled by yourself and set imagePullPolicy to Never.

  3. Enable the --kubelet-use-node-status-port feature for metrics-server.

    Those settings need to be written in deployment yaml (metrics-server-deployment.yaml) file like this:

    volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
    hostNetwork: true                          # Add this line to enable hostnetwork mode
    containers:
      - name: metrics-server
        image: metrics-server-kubeedge:latest  # Make sure that the REPOSITORY and TAG are correct
        # Modified args to include --kubelet-insecure-tls for Docker Desktop (don't use this flag with a real k8s cluster!!)
        imagePullPolicy: Never                 # Make sure that the deployment uses the image you built
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --v=2
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalIP,Hostname
          - --kubelet-use-node-status-port     # Enable the feature of --kubelet-use-node-status-port for metrics-server
        ports:
          - name: main-port
            containerPort: 4443
            protocol: TCP

Reset KubeEdge Master and Worker nodes

Master

keadm reset will stop cloudcore and delete KubeEdge-related resources from the Kubernetes master, such as the kubeedge namespace. It doesn't uninstall/remove any of the prerequisites.

It provides a flag for users to specify the kubeconfig path; the default path is /root/.kube/config.

Example:

  # keadm reset --kube-config=$HOME/.kube/config

Node

keadm reset will stop edgecore; it doesn't uninstall/remove any of the prerequisites.
