Set up a High Availability etcd Cluster with kubeadm

Note: While kubeadm is used as the management tool for external etcd nodes in this guide, kubeadm does not plan to support certificate rotation or upgrades for such nodes. The long-term plan is to empower the tool etcdadm to manage these aspects.

By default, kubeadm runs a local etcd instance on each control plane node. It is also possible to treat the etcd cluster as external and provision etcd instances on separate hosts. The differences between the two approaches are covered in the Options for Highly Available topology page.

This task walks through the process of creating a high availability external etcd cluster of three members that can be used by kubeadm during cluster creation.

Before you begin

  • Three hosts that can talk to each other over TCP ports 2379 and 2380. This document assumes these default ports. However, they are configurable through the kubeadm config file.
  • Each host must have systemd and a bash-compatible shell installed.
  • Each host must have a container runtime, kubelet, and kubeadm installed.
  • Each host should have access to the Kubernetes container image registry (registry.k8s.io), or be able to list and pull the required etcd image using kubeadm config images list/pull; see the example after this list. This guide will set up etcd instances as static pods managed by a kubelet.
  • Some infrastructure to copy files between hosts. For example ssh and scp can satisfy this requirement.
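
For example, assuming kubeadm is already installed on a host, you can check which etcd image kubeadm expects and pre-pull the required images ahead of time:

    # show the images kubeadm would use; the etcd image is among them
    kubeadm config images list | grep etcd
    # pre-pull the required images, including etcd
    kubeadm config images pull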

Setting up the cluster

The general approach is to generate all certs on one node and only distribute the necessary files to the other nodes.

Note: kubeadm contains all the necessary cryptographic machinery to generate the certificates described below; no other cryptographic tooling is required for this example.

Note: The examples below use IPv4 addresses but you can also configure kubeadm, the kubelet and etcd to use IPv6 addresses. Dual-stack is supported by some Kubernetes options, but not by etcd. For more details on Kubernetes dual-stack support see Dual-stack support with kubeadm.

  1. Configure the kubelet to be a service manager for etcd.

    Note: You must do this on every host where etcd should be running.

    Because the etcd cluster is created first, before kubeadm configures the kubelet for a control plane node, you must override the service priority by creating a new unit file that has higher precedence than the kubeadm-provided kubelet unit file.

    cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
    [Service]
    ExecStart=
    # Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
    # Replace the value of "--container-runtime-endpoint" for a different container runtime if needed.
    ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
    Restart=always
    EOF

    systemctl daemon-reload
    systemctl restart kubelet

    Check the kubelet status to ensure it is running.

    systemctl status kubelet
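
    If the kubelet is not in an active (running) state, its logs usually explain why; on a systemd host you can inspect them with:

    journalctl -xeu kubelet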
  2. Create configuration files for kubeadm.

    Using the following script, generate one kubeadm configuration file for each host that will have an etcd member running on it.

    # Update HOST0, HOST1 and HOST2 with the IPs of your hosts
    export HOST0=10.0.0.6
    export HOST1=10.0.0.7
    export HOST2=10.0.0.8

    # Update NAME0, NAME1 and NAME2 with the hostnames of your hosts
    export NAME0="infra0"
    export NAME1="infra1"
    export NAME2="infra2"

    # Create temp directories to store files that will end up on other hosts
    mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

    HOSTS=(${HOST0} ${HOST1} ${HOST2})
    NAMES=(${NAME0} ${NAME1} ${NAME2})

    for i in "${!HOSTS[@]}"; do
    HOST=${HOSTS[$i]}
    NAME=${NAMES[$i]}
    cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
    ---
    apiVersion: "kubeadm.k8s.io/v1beta3"
    kind: InitConfiguration
    nodeRegistration:
        name: ${NAME}
    localAPIEndpoint:
        advertiseAddress: ${HOST}
    ---
    apiVersion: "kubeadm.k8s.io/v1beta3"
    kind: ClusterConfiguration
    etcd:
        local:
            serverCertSANs:
            - "${HOST}"
            peerCertSANs:
            - "${HOST}"
            extraArgs:
                initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
                initial-cluster-state: new
                name: ${NAME}
                listen-peer-urls: https://${HOST}:2380
                listen-client-urls: https://${HOST}:2379
                advertise-client-urls: https://${HOST}:2379
                initial-advertise-peer-urls: https://${HOST}:2380
    EOF
    done
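
    To sanity-check the result, you can print one of the generated files, for example:

    cat /tmp/${HOST0}/kubeadmcfg.yaml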
  3. Generate the certificate authority.

    If you already have a CA then the only action is copying the CA’s crt and key file to /etc/kubernetes/pki/etcd/ca.crt and /etc/kubernetes/pki/etcd/ca.key. After those files have been copied, proceed to the next step, “Create certificates for each member”.
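    A minimal sketch of that copy, assuming your existing CA certificate and key are named ca.crt and ca.key in the current directory:

    mkdir -p /etc/kubernetes/pki/etcd
    cp ca.crt ca.key /etc/kubernetes/pki/etcd/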

    If you do not already have a CA then run this command on $HOST0 (where you generated the configuration files for kubeadm).

    kubeadm init phase certs etcd-ca

    This creates two files:

    • /etc/kubernetes/pki/etcd/ca.crt
    • /etc/kubernetes/pki/etcd/ca.key
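
    Optionally, assuming openssl is available, you can confirm the new CA certificate's subject and validity period:

    openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -noout -subject -dates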
  4. Create certificates for each member.

    Run the following script on $HOST0, where the CA and the kubeadm configuration files were generated.

    kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
    kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
    cp -R /etc/kubernetes/pki /tmp/${HOST2}/
    # cleanup non-reusable certificates
    find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

    kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
    kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
    cp -R /etc/kubernetes/pki /tmp/${HOST1}/
    find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

    kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
    kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
    # No need to move the certs because they are for HOST0

    # clean up certs that should not be copied off this host
    find /tmp/${HOST2} -name ca.key -type f -delete
    find /tmp/${HOST1} -name ca.key -type f -delete
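
    As an optional check, assuming openssl is available, you can verify that a member's server certificate carries its host IP in the Subject Alternative Name (here for $HOST2, whose certificates were staged under /tmp):

    openssl x509 -in /tmp/${HOST2}/pki/etcd/server.crt -noout -text | grep -A1 "Subject Alternative Name"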
  5. Copy certificates and kubeadm configs.

    The certificates have been generated and now they must be moved to their respective hosts.

    USER=ubuntu
    HOST=${HOST1}
    scp -r /tmp/${HOST}/* ${USER}@${HOST}:
    ssh ${USER}@${HOST}
    USER@HOST $ sudo -Es
    root@HOST $ chown -R root:root pki
    root@HOST $ mv pki /etc/kubernetes/
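
    The same transfer must be repeated for ${HOST2}. A minimal sketch of the copy step for both remote hosts, assuming the same ubuntu user on each (the chown and mv commands above still need to be run on each host afterwards):

    for H in ${HOST1} ${HOST2}; do
    scp -r /tmp/${H}/* ${USER}@${H}:
    done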
  6. Ensure all expected files exist.

    The complete list of required files on $HOST0 is:

    /tmp/${HOST0}
    └── kubeadmcfg.yaml
    ---
    /etc/kubernetes/pki
    ├── apiserver-etcd-client.crt
    ├── apiserver-etcd-client.key
    └── etcd
        ├── ca.crt
        ├── ca.key
        ├── healthcheck-client.crt
        ├── healthcheck-client.key
        ├── peer.crt
        ├── peer.key
        ├── server.crt
        └── server.key

    On $HOST1:

    $HOME
    └── kubeadmcfg.yaml
    ---
    /etc/kubernetes/pki
    ├── apiserver-etcd-client.crt
    ├── apiserver-etcd-client.key
    └── etcd
        ├── ca.crt
        ├── healthcheck-client.crt
        ├── healthcheck-client.key
        ├── peer.crt
        ├── peer.key
        ├── server.crt
        └── server.key

    On $HOST2:

    $HOME
    └── kubeadmcfg.yaml
    ---
    /etc/kubernetes/pki
    ├── apiserver-etcd-client.crt
    ├── apiserver-etcd-client.key
    └── etcd
        ├── ca.crt
        ├── healthcheck-client.crt
        ├── healthcheck-client.key
        ├── peer.crt
        ├── peer.key
        ├── server.crt
        └── server.key
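
    Note that ca.key is present only on $HOST0; the other two members never need the CA key. One way to compare a host's actual files against these lists:

    find /etc/kubernetes/pki -type f | sort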
  7. Create the static pod manifests.

    Now that the certificates and configs are in place, it’s time to create the manifests. On each host, run the kubeadm command to generate a static manifest for etcd.

    root@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
    root@HOST1 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
    root@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
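
    This writes the etcd static pod manifest to the kubelet's manifest path, after which the kubelet starts etcd. To verify on a host, assuming crictl is installed and configured to talk to your container runtime, check that the manifest exists and that an etcd container is running:

    ls /etc/kubernetes/manifests/etcd.yaml
    crictl ps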
  8. Optional: Check the cluster health.

    If etcdctl isn’t available, you can run this tool inside a container image. You would do that directly with your container runtime using a tool such as crictl run, and not through Kubernetes.

    ETCDCTL_API=3 etcdctl \
    --cert /etc/kubernetes/pki/etcd/peer.crt \
    --key /etc/kubernetes/pki/etcd/peer.key \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --endpoints https://${HOST0}:2379 endpoint health

    ...
    https://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms
    https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms
    https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms
    • Set ${HOST0} to the IP address of the host you are testing.
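
    To also confirm that all three members have joined the cluster, you can run etcdctl's member list subcommand with the same TLS flags:

    ETCDCTL_API=3 etcdctl \
    --cert /etc/kubernetes/pki/etcd/peer.crt \
    --key /etc/kubernetes/pki/etcd/peer.key \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --endpoints https://${HOST0}:2379 member list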

What’s next

Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the external etcd method with kubeadm.