Creating a single control-plane cluster with kubeadm

The kubeadm tool helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use kubeadm to set up a cluster that will pass the Kubernetes Conformance tests.
kubeadm also supports other cluster lifecycle functions, such as bootstrap tokens and cluster upgrades.

The kubeadm tool is good if you need:

  • A simple way for you to try out Kubernetes, possibly for the first time.
  • A way for existing users to automate setting up a cluster and test their application.
  • A building block in other ecosystem and/or installer tools with a larger scope.

You can install and use kubeadm on various machines: your laptop, a set of cloud servers, a Raspberry Pi, and more. Whether you’re deploying into the cloud or on-premises, you can integrate kubeadm into provisioning systems such as Ansible or Terraform.

Before you begin

To follow this guide, you need:

  • One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
  • 2 GiB or more of RAM per machine—any less leaves little room for your apps.
  • At least 2 CPUs on the machine that you use as a control-plane node.
  • Full network connectivity among all machines in the cluster. You can use either a public or a private network.

You also need to use a version of kubeadm that can deploy the version of Kubernetes that you want to use in your new cluster.

Kubernetes’ version and version skew support policy applies to kubeadm as well as to Kubernetes overall. Check that policy to learn about what versions of Kubernetes and kubeadm are supported. This page is written for Kubernetes v1.18.

The kubeadm tool’s overall feature state is General Availability (GA). Some sub-features are still under active development. The implementation of creating the cluster may change slightly as the tool evolves, but the overall implementation should be pretty stable.

Note: Any commands under kubeadm alpha are, by definition, supported on an alpha level.

Objectives

  • Install a single control-plane Kubernetes cluster or high-availability cluster
  • Install a Pod network on the cluster so that your Pods can talk to each other

Instructions

Installing kubeadm on your hosts

See “Installing kubeadm”.

Note:

If you have already installed kubeadm, run apt-get update && apt-get upgrade or yum update to get the latest version of kubeadm.

When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for kubeadm to tell it what to do. This crashloop is expected and normal. After you initialize your control-plane, the kubelet runs normally.

Initializing your control-plane node

The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API Server (which the kubectl command line tool communicates with).

  1. (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability, you should specify --control-plane-endpoint to set the shared endpoint for all control-plane nodes. Such an endpoint can be either a DNS name or an IP address of a load-balancer.
  2. Choose a Pod network add-on, and verify whether it requires any arguments to be passed to kubeadm init. Depending on which third-party provider you choose, you might need to set the --pod-network-cidr to a provider-specific value. See Installing a Pod network add-on.
  3. (Optional) Since version 1.14, kubeadm tries to detect the container runtime on Linux by using a list of well known domain socket paths. To use a different container runtime, or if there is more than one runtime installed on the provisioned node, specify the --cri-socket argument to kubeadm init. See Installing runtime.
  4. (Optional) Unless otherwise specified, kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node’s API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you must specify an IPv6 address, for example --apiserver-advertise-address=fd00::101
  5. (Optional) Run kubeadm config images pull prior to kubeadm init to verify connectivity to the gcr.io container image registry.

To initialize the control-plane node run:

  kubeadm init <args>
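For example, pulling the optional flags above together, an init command might look like the sketch below. The endpoint name, advertise address, Pod CIDR, and CRI socket path are placeholders drawn from examples elsewhere on this page; substitute values that match your environment and your chosen network add-on:

  kubeadm init \
    --control-plane-endpoint=cluster-endpoint \
    --apiserver-advertise-address=192.168.0.102 \
    --pod-network-cidr=192.168.0.0/16 \
    --cri-socket=/var/run/dockershim.sock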

Considerations about apiserver-advertise-address and ControlPlaneEndpoint

While --apiserver-advertise-address can be used to set the advertise address for this particular control-plane node’s API server, --control-plane-endpoint can be used to set the shared endpoint for all control-plane nodes.

--control-plane-endpoint accepts both IP addresses and DNS names that can map to IP addresses. Please contact your network administrator to evaluate possible solutions with respect to such mapping.

Here is an example mapping:

  192.168.0.102 cluster-endpoint

Where 192.168.0.102 is the IP address of this node and cluster-endpoint is a custom DNS name that maps to this IP. This will allow you to pass --control-plane-endpoint=cluster-endpoint to kubeadm init and pass the same DNS name to kubeadm join. Later you can modify cluster-endpoint to point to the address of your load-balancer in a high-availability scenario.
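As a minimal sketch of that workflow (assuming the example mapping above, root privileges, and kubeadm's default API server port 6443):

  # add the mapping on every node in the cluster
  echo "192.168.0.102 cluster-endpoint" >> /etc/hosts

  # initialize the first control-plane node against the shared endpoint
  kubeadm init --control-plane-endpoint=cluster-endpoint

  # later, join further nodes using the same DNS name
  kubeadm join cluster-endpoint:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>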

Turning a single control plane cluster created without --control-plane-endpoint into a highly available cluster is not supported by kubeadm.

More information

For more information about kubeadm init arguments, see the kubeadm reference guide.

For a complete list of configuration options, see the configuration file documentation.

To customize control plane components, including optional IPv6 assignment to the liveness probes of the control plane components and the etcd server, provide extra arguments to each component as documented in custom arguments.

To run kubeadm init again, you must first tear down the cluster.

If you join a node with a different architecture to your cluster, make sure that your deployed DaemonSets have container image support for this architecture.

kubeadm init first runs a series of prechecks to ensure that the machine is ready to run Kubernetes. These prechecks expose warnings and exit on errors. kubeadm init then downloads and installs the cluster control plane components. This may take several minutes. The output should look like:

  [init] Using Kubernetes version: vX.Y.Z
  [preflight] Running pre-flight checks
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Activating the kubelet service
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [kubeadm-cp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [apiclient] All control plane components are healthy after 31.501735 seconds
  [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-X.Y" in namespace kube-system with the configuration for the kubelets in the cluster
  [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-cp" as an annotation
  [mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: <token>
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy

  Your Kubernetes control-plane has initialized successfully!

  To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  You should now deploy a Pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    /docs/concepts/cluster-administration/addons/

  You can now join any number of machines by running the following on each node
  as root:

    kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

Make a record of the kubeadm join command that kubeadm init outputs. You need this command to join nodes to your cluster.

The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster. These tokens can be listed, created, and deleted with the kubeadm token command. See the kubeadm reference guide.
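If you did not record the join command, you can print a fresh one later on the control-plane node; kubeadm provides a flag for exactly this purpose:

  # creates a new token and prints the complete 'kubeadm join ...' command
  kubeadm token create --print-join-command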

Installing a Pod network add-on

Caution:

This section contains important information about networking setup and deployment order. Read all of this advice carefully before proceeding.

You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other.
Cluster DNS (CoreDNS) will not start up before a network is installed.

  • Take care that your Pod network does not overlap with any of the host networks: you are likely to see problems if there is any overlap.
    (If you find a collision between your network plugin’s preferred Pod network and some of your host networks, you should think of a suitable CIDR block to use instead, then use that during kubeadm init with --pod-network-cidr and as a replacement in your network plugin’s YAML).

  • By default, kubeadm sets up your cluster to use and enforce use of RBAC (role based access control).
    Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.

  • If you want to use IPv6—either dual-stack, or single-stack IPv6 only networking—for your cluster, make sure that your Pod network plugin supports IPv6.
    IPv6 support was added to CNI in v0.6.0.

Note: Currently Calico is the only CNI plugin that the kubeadm project performs e2e tests against. If you find an issue related to a CNI plugin you should log a ticket in its respective issue tracker instead of the kubeadm or kubernetes issue trackers.

Several external projects provide Kubernetes Pod networks using CNI, some of which also support Network Policy.

See the list of available networking and network policy add-ons.

You can install a Pod network add-on with the following command on the control-plane node or a node that has the kubeconfig credentials:

  kubectl apply -f <add-on.yaml>

You can install only one Pod network per cluster. Below you can find installation instructions for some popular Pod network plugins:

Calico is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer. Calico works on several architectures, including amd64, arm64, and ppc64le.

By default, Calico uses 192.168.0.0/16 as the Pod network CIDR, though this can be configured in the calico.yaml file. For Calico to work correctly, you need to pass this same CIDR to the kubeadm init command using the --pod-network-cidr=192.168.0.0/16 flag or via kubeadm’s configuration.

  kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
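If you prefer to set the Pod CIDR via kubeadm's configuration instead of the --pod-network-cidr flag, a minimal sketch looks like this (kubeadm.k8s.io/v1beta2 is the configuration API version used by kubeadm v1.18, and the file name is arbitrary):

  # write a minimal kubeadm configuration file
  printf '%s\n' \
    'apiVersion: kubeadm.k8s.io/v1beta2' \
    'kind: ClusterConfiguration' \
    'networking:' \
    '  podSubnet: 192.168.0.0/16' \
    > kubeadm-config.yaml

  # initialize the control plane from the configuration file
  kubeadm init --config kubeadm-config.yaml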

For Cilium to work correctly, you must pass --pod-network-cidr=10.217.0.0/16 to kubeadm init.

To deploy Cilium you just need to run:

  kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.6/install/kubernetes/quick-install.yaml

Once all Cilium Pods are marked as READY, you can start using your cluster.

  kubectl get pods -n kube-system --selector=k8s-app=cilium

The output is similar to this:

  NAME           READY   STATUS    RESTARTS   AGE
  cilium-drxkl   1/1     Running   0          18m

Cilium can be used as a replacement for kube-proxy, see Kubernetes without kube-proxy.

For more information about using Cilium with Kubernetes, see Kubernetes Install guide for Cilium.

Contiv-VPP employs a programmable CNF vSwitch based on FD.io VPP, offering feature-rich & high-performance cloud-native networking and services.

It implements k8s services and network policies in the user space (on VPP).

Please refer to this installation guide: Contiv-VPP Manual Installation

Kube-router relies on kube-controller-manager to allocate Pod CIDR for the nodes. Therefore, use kubeadm init with the --pod-network-cidr flag.

Kube-router provides Pod networking, network policy, and high-performing IP Virtual Server(IPVS)/Linux Virtual Server(LVS) based service proxy.

For information on using the kubeadm tool to set up a Kubernetes cluster with Kube-router, please see the official setup guide.

For more information on setting up your Kubernetes cluster with Weave Net, please see Integrating Kubernetes via the Addon.

Weave Net works on amd64, arm, arm64 and ppc64le platforms without any extra action required. Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address if they don’t know their PodIP.

  kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Once a Pod network has been installed, you can confirm that it is working by checking that the CoreDNS Pod is Running in the output of kubectl get pods --all-namespaces. And once the CoreDNS Pod is up and running, you can continue by joining your nodes.
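For example, to check just the DNS Pods rather than everything in the cluster, you can filter by label; the CoreDNS Pods keep the historical k8s-app=kube-dns label:

  kubectl get pods -n kube-system -l k8s-app=kube-dns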

If your network is not working or CoreDNS is not in the Running state, check out the troubleshooting guide for kubeadm.

Control plane node isolation

By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to be able to schedule Pods on the control-plane node, for example for a single-machine Kubernetes cluster for development, run:

  kubectl taint nodes --all node-role.kubernetes.io/master-

With output looking something like:

  node "test-01" untainted
  taint "node-role.kubernetes.io/master:" not found
  taint "node-role.kubernetes.io/master:" not found

This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the control-plane node, meaning that the scheduler will then be able to schedule Pods everywhere.
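You can verify that the taint has been removed by inspecting the node; <node-name> below is a placeholder for your control-plane node's name:

  # the Taints field should no longer list node-role.kubernetes.io/master:NoSchedule
  kubectl describe node <node-name> | grep -i taints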

Joining your nodes

The nodes are where your workloads (containers and Pods, etc) run. To add new nodes to your cluster do the following for each machine:

  • SSH to the machine
  • Become root (e.g. sudo su -)
  • Run the command that was output by kubeadm init. For example:

    kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

If you do not have the token, you can get it by running the following command on the control-plane node:

  kubeadm token list

The output is similar to this:

  TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
  8ewj1p.9r9hcjoqgajrj4gi   23h   2018-06-12T02:51:28Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control-plane node:

  kubeadm token create

The output is similar to this:

  5didvk.d09sbcov8ph2amjw

If you don’t have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the control-plane node:

  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
     openssl dgst -sha256 -hex | sed 's/^.* //'

The output is similar to:

  8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78

Note: To specify an IPv6 tuple for <control-plane-host>:<control-plane-port>, IPv6 address must be enclosed in square brackets, for example: [fd00::101]:2073.

The output should look something like:

  [preflight] Running pre-flight checks

  ... (log output of join workflow) ...

  Node join complete:
  * Certificate signing request sent to control-plane and response
    received.
  * Kubelet informed of new secure connection details.

  Run 'kubectl get nodes' on control-plane to see this machine join.

A few seconds later, you should notice this node in the output from kubectl get nodes when run on the control-plane node.
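For example, on the control-plane node:

  # the newly joined machine should appear and move to Ready once the Pod network is running on it
  kubectl get nodes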

(Optional) Controlling your cluster from machines other than the control-plane node

In order to get a kubectl on some other computer (e.g. laptop) to talk to your cluster, you need to copy the administrator kubeconfig file from your control-plane node to your workstation like this:

  scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
  kubectl --kubeconfig ./admin.conf get nodes

Note:

The example above assumes SSH access is enabled for root. If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.

The admin.conf file gives the user superuser privileges over the cluster. This file should be used sparingly. For normal users, it's recommended to generate a unique credential to which you whitelist privileges. You can do this with the kubeadm alpha kubeconfig user --client-name <CN> command. That command will print out a KubeConfig file to STDOUT which you should save to a file and distribute to your user. After that, whitelist privileges by using kubectl create (cluster)rolebinding.
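A minimal sketch of that workflow, assuming a hypothetical user named alice who should get read-only access via the built-in view ClusterRole:

  # on the control-plane node: generate a kubeconfig for the new user
  kubeadm alpha kubeconfig user --client-name=alice > alice.conf

  # grant that user read-only access across the cluster
  kubectl create clusterrolebinding alice-view --clusterrole=view --user=alice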

(Optional) Proxying API Server to localhost

If you want to connect to the API Server from outside the cluster, you can use kubectl proxy:

  scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
  kubectl --kubeconfig ./admin.conf proxy

You can now access the API Server locally at http://localhost:8001/api/v1
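As a quick check while kubectl proxy is running, you can query the API from the same machine, for example:

  # list namespaces through the local proxy; the proxy handles authentication for you
  curl http://localhost:8001/api/v1/namespaces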

Clean up

If you used disposable servers for your cluster, for testing, you can switch those off and do no further clean up. You can use kubectl config delete-cluster to delete your local references to the cluster.

However, if you want to deprovision your cluster more cleanly, you should first drain the node and make sure that the node is empty, then deconfigure the node.

Remove the node

Talking to the control-plane node with the appropriate credentials, run:

  kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
  kubectl delete node <node name>

Then, on the node being removed, reset all kubeadm installed state:

  kubeadm reset

The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:

  iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If you want to reset the IPVS tables, you must run the following command:

  ipvsadm -C

If you wish to start over, simply run kubeadm init or kubeadm join with the appropriate arguments.

Clean up the control plane

You can use kubeadm reset on the control plane host to trigger a best-effort clean up.

See the kubeadm reset reference documentation for more information about this subcommand and its options.

What’s next

Version skew policy

The kubeadm tool of version v1.18 may deploy clusters with a control plane of version v1.18 or v1.17. kubeadm v1.18 can also upgrade an existing kubeadm-created cluster of version v1.17.

Because we can't see into the future, kubeadm CLI v1.18 may or may not be able to deploy v1.19 clusters.

See the Kubernetes version and version skew support policy for more information on the supported version skew between kubelets and the control plane, and other Kubernetes components.

Limitations

Cluster resilience

The cluster created here has a single control-plane node, with a single etcd database running on it. This means that if the control-plane node fails, your cluster may lose data and may need to be recreated from scratch.

Workarounds:

  • Regularly back up etcd. The etcd data directory configured by kubeadm is at /var/lib/etcd on the control-plane node. (A backup sketch follows this list.)

  • Use multiple control-plane nodes. You can read Options for Highly Available topology to pick a cluster topology that provides higher availability.
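As an illustration of the first workaround, here is a minimal sketch of taking an etcd snapshot with etcdctl. It assumes etcdctl is installed on the control-plane node and uses the default certificate locations that kubeadm sets up; the snapshot path is a placeholder:

  ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
    snapshot save /var/backups/etcd-snapshot.db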

Platform compatibility

kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x following the multi-platform proposal.

Multiplatform container images for the control plane and addons have also been supported since v1.12.

Only some of the network providers offer solutions for all platforms. Please consult the list of network providers above or the documentation from each provider to figure out whether the provider supports your chosen platform.

Troubleshooting

If you are running into difficulties with kubeadm, please consult our troubleshooting docs.
