Add New Nodes to a Kubernetes Cluster

After using KubeSphere for a period of time, you will likely need to scale out your cluster as workloads increase. From KubeSphere v3.0.0, you can use the brand-new installer KubeKey to add new nodes to a Kubernetes cluster. Fundamentally, the operation is based on kubelet's registration mechanism: the new nodes automatically join the existing Kubernetes cluster. KubeSphere supports hybrid environments, which means the operating system of newly added hosts can be either CentOS or Ubuntu.

This tutorial demonstrates how to add new nodes to a single-node cluster. To scale out a multi-node cluster, the steps are basically the same.

Prerequisites

  • You have a running Kubernetes cluster, and the KubeKey binary (kk) is available on your machine.

Add Worker Nodes to Kubernetes

  1. Retrieve your cluster information using KubeKey. The command below creates a configuration file (sample.yaml).

    ./kk create config --from-cluster

    Note

    You can skip this step if you already have the configuration file on your machine. For example, if you want to add nodes to a multi-node cluster which was set up by KubeKey, you might still have the configuration file if you have not deleted it.

  2. In the configuration file, put the information of your new nodes under hosts and roleGroups. The example adds two new nodes (i.e. node1 and node2). Here master1 is the existing node.

    ···
    spec:
      hosts:
      - {name: master1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: [email protected]}
      - {name: node1, address: 192.168.0.4, internalAddress: 192.168.0.4, user: root, password: [email protected]}
      - {name: node2, address: 192.168.0.5, internalAddress: 192.168.0.5, user: root, password: [email protected]}
      roleGroups:
        etcd:
        - master1
        control-plane:
        - master1
        worker:
        - node1
        - node2
    ···

    Note

    • For more information about the configuration file, see Edit the configuration file.
    • Do not modify the host name of existing nodes when adding new nodes.
    • Replace the host names in the example with your own.
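Because every name referenced under roleGroups must exactly match a host entry, a quick cross-check can catch typos before you run the installer. The helper below is a hypothetical sketch (the function name check_members and the file name sample.yaml are assumptions, not part of KubeKey):

```shell
# Hypothetical helper: warn about roleGroups members that have no matching
# entry under hosts in a KubeKey config file.
# Usage: check_members sample.yaml
check_members() {
  cfg=$1
  # Host names appear inside brace entries: - {name: node1, ...}
  hosts=$(grep -oE '\{name: [A-Za-z0-9_-]+' "$cfg" | awk '{print $2}')
  # roleGroups members appear as plain "- nodename" lines.
  grep -E '^[[:space:]]*- [A-Za-z0-9_-]+[[:space:]]*$' "$cfg" | awk '{print $2}' |
  while read -r m; do
    echo "$hosts" | grep -qx "$m" || echo "missing host: $m"
  done
}
```

If the helper prints nothing, every roleGroups member is defined under hosts.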
  3. Execute the following command:

    ./kk add nodes -f sample.yaml
  4. When the installation finishes, you can see the new nodes and their information on the KubeSphere console. On the Cluster Management page, select Cluster Nodes under Nodes from the left menu, or execute the command kubectl get node to check the changes.

    $ kubectl get node
    NAME      STATUS   ROLES           AGE   VERSION
    master1   Ready    master,worker   20d   v1.17.9
    node1     Ready    worker          31h   v1.17.9
    node2     Ready    worker          31h   v1.17.9
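If you prefer to check from the command line, the node list can be filtered for nodes that are not yet Ready. The not_ready helper below is an illustrative sketch, not a kubectl feature:

```shell
# Hypothetical filter: prints the name of every node whose STATUS column
# is not "Ready". Pipe it the output of `kubectl get node --no-headers`.
not_ready() {
  awk '$2 != "Ready" {print $1}'
}

# Example (on a machine with cluster access):
#   kubectl get node --no-headers | not_ready
```

No output means every node has registered and reports Ready.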

Add New Master Nodes for High Availability

The steps for adding master nodes are generally the same as for worker nodes, except that you also need to configure a load balancer for your cluster. You can use any cloud or hardware load balancer (for example, F5). Alternatively, Keepalived with HAProxy, or Nginx, can also serve as the load balancer for a highly available cluster.
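For reference, a TCP pass-through load balancer in front of the masters can be as simple as the following HAProxy sketch. The backend addresses are taken from the example configuration in the steps below; treat this as an illustration of the idea, not a production-ready setup:

```
# Minimal HAProxy sketch: forward TCP 6443 on the load balancer
# to the api-server on every master node (addresses are illustrative).
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    server master1 172.16.0.2:6443 check
    server master2 172.16.0.5:6443 check
    server master3 172.16.0.6:6443 check
```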

  1. Create a configuration file using KubeKey.

    ./kk create config --from-cluster
  2. Open the file and you can see some fields are pre-populated with values. Add the information of new nodes and your load balancer to the file. Here is an example for your reference:

    apiVersion: kubekey.kubesphere.io/v1alpha1
    kind: Cluster
    metadata:
      name: sample
    spec:
      hosts:
      # You should complete the ssh information of the hosts
      - {name: master1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: root, password: Testing123}
      - {name: master2, address: 172.16.0.5, internalAddress: 172.16.0.5, user: root, password: Testing123}
      - {name: master3, address: 172.16.0.6, internalAddress: 172.16.0.6, user: root, password: Testing123}
      - {name: worker1, address: 172.16.0.3, internalAddress: 172.16.0.3, user: root, password: Testing123}
      - {name: worker2, address: 172.16.0.4, internalAddress: 172.16.0.4, user: root, password: Testing123}
      - {name: worker3, address: 172.16.0.7, internalAddress: 172.16.0.7, user: root, password: Testing123}
      roleGroups:
        etcd:
        - master1
        - master2
        - master3
        control-plane:
        - master1
        - master2
        - master3
        worker:
        - worker1
        - worker2
        - worker3
      controlPlaneEndpoint:
        # If loadbalancer is used, 'address' should be set to loadbalancer's ip.
        domain: lb.kubesphere.local
        address: 172.16.0.253
        port: 6443
      kubernetes:
        version: v1.17.9
        imageRepo: kubesphere
        clusterName: cluster.local
        proxyMode: ipvs
        masqueradeAll: false
        maxPods: 110
        nodeCidrMaskSize: 24
      network:
        plugin: calico
        kubePodsCIDR: 10.233.64.0/18
        kubeServiceCIDR: 10.233.0.0/18
      registry:
        privateRegistry: ""
  3. Pay attention to the controlPlaneEndpoint field.

    controlPlaneEndpoint:
      # If you use a load balancer, the address should be set to the load balancer's ip.
      domain: lb.kubesphere.local
      address: 172.16.0.253
      port: 6443
    • The domain name of the load balancer is lb.kubesphere.local by default for internal access. You can change it based on your needs.
    • In most cases, you need to provide the private IP address of the load balancer for the field address. However, different cloud providers may have different configurations for load balancers. For example, if you configure a Server Load Balancer (SLB) on Alibaba Cloud, the platform assigns a public IP address to the SLB, which means you need to specify the public IP address for the field address.
    • The field port indicates the port that the api-server listens on.
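To make the default domain resolvable inside the cluster, KubeKey typically writes it into /etc/hosts on each node, mapped to the address you set here. The entry below is shown only for illustration; you normally do not need to add it by hand:

```
# /etc/hosts entry on each node (usually managed by KubeKey)
172.16.0.253  lb.kubesphere.local
```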
  4. Save the file and execute the following command to apply the configuration.

    ./kk add nodes -f sample.yaml