Bootstrapping the etcd Cluster

Kubernetes components are stateless and store cluster state in etcd. In this lab you will bootstrap a three-node etcd cluster and configure it for high availability and secure remote access.

Prerequisites

The commands in this lab must be run on each controller instance: controller-0, controller-1, and controller-2. Log in to each controller instance using the gcloud command. Example:

gcloud compute ssh controller-0

Running commands in parallel with tmux

tmux can be used to run commands on multiple compute instances at the same time. See the Running commands in parallel with tmux section in the Prerequisites lab.
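If you have not set it up before, one possible sketch of the tmux workflow looks like this (the session name kthw is arbitrary, and the pane layout is up to you; the key setting is synchronize-panes):

# Start a detached session and create three panes, one per controller:
tmux new-session -d -s kthw
tmux split-window -t kthw
tmux split-window -t kthw
tmux select-layout -t kthw even-vertical
tmux attach -t kthw

# In each pane, SSH to a different controller, then broadcast keystrokes
# to every pane in the window:
tmux set-window-option synchronize-panes on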

Bootstrapping an etcd Cluster Member

Download and Install the etcd Binaries

Download the official etcd release binaries from the etcd GitHub project:

wget -q --show-progress --https-only --timestamping \
  "https://github.com/etcd-io/etcd/releases/download/v3.4.0/etcd-v3.4.0-linux-amd64.tar.gz"

Extract and install the etcd server and the etcdctl command line utility:

{
  tar -xvf etcd-v3.4.0-linux-amd64.tar.gz
  sudo mv etcd-v3.4.0-linux-amd64/etcd* /usr/local/bin/
}
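The archive ships both the etcd server binary and the etcdctl client. As an optional sanity check (not part of the original lab), confirm the expected release is now on the PATH:

etcd --version
etcdctl version

Both should report version 3.4.0.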

Configure the etcd Server

Create the etcd configuration and data directories, and copy in the TLS certificates:

{
  sudo mkdir -p /etc/etcd /var/lib/etcd
  sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
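The three .pem files were generated and distributed in earlier labs, so the cp command assumes they are in the current working directory. An optional check that the copy succeeded:

ls -l /etc/etcd/

This should list ca.pem, kubernetes-key.pem, and kubernetes.pem.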

The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:

INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)

Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:

ETCD_NAME=$(hostname -s)
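Before generating the unit file, it can help to confirm both variables are set as expected (an optional check, not in the original lab):

echo "${ETCD_NAME}: ${INTERNAL_IP}"

On controller-0 this should print controller-0: 10.240.0.10.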

Create the etcd.service systemd unit file; note that etcd serves client traffic on port 2379 and cluster peer traffic on port 2380:

cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
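systemd can sanity-check the unit file before the service is started. This step is optional and not part of the original lab, but it catches typos introduced while editing the heredoc:

sudo systemd-analyze verify /etc/systemd/system/etcd.service

If the unit file parsed cleanly, the command prints nothing.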

Start the etcd Server

{
  sudo systemctl daemon-reload
  sudo systemctl enable etcd
  sudo systemctl start etcd
}

Remember to run the above commands on each controller node: controller-0, controller-1, and controller-2.
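If etcd fails to start, or appears to hang, check the service status and the journal. Because the unit uses Type=notify and --initial-cluster-state new, the first member to start will sit in an activating state until enough peers join to form a quorum, so start etcd on all three controllers before troubleshooting. For example:

sudo systemctl status etcd
sudo journalctl -u etcd --no-pager | tail -n 20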

Verification

List the etcd cluster members:

sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

output

3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
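member list confirms cluster membership; to confirm each member is actually serving requests, etcdctl also provides an endpoint health subcommand (an optional extra check, not part of the original lab):

sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

Each endpoint should report that it is healthy, along with the time the health check took.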

Next: Bootstrapping the Kubernetes Control Plane