Bootstrapping the etcd Cluster

Kubernetes components are stateless and store cluster state in etcd. In this lab you will bootstrap a three-node etcd cluster and configure it for high availability and secure remote access.

Prerequisites

The commands in this lab must be run on each controller instance: controller-0, controller-1, and controller-2. Log in to each controller instance using the gcloud command. For example:

  1. gcloud compute ssh controller-0
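Since every command in this lab is repeated on all three controllers, a small shell loop can drive the repetition. This is only a sketch: echo stands in for the actual remote invocation, which would use gcloud compute ssh with its --command flag.

```shell
# Sketch: iterate over the controller names. Replace `echo` with the real
# remote call, e.g. gcloud compute ssh ${instance} --command="hostname"
for instance in controller-0 controller-1 controller-2; do
  echo "connecting to ${instance}"
done
```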

Bootstrapping an etcd Cluster Member

Download and Install the etcd Binaries

Download the official etcd release binaries from the coreos/etcd GitHub project:

  1. wget -q --show-progress --https-only --timestamping \
  2. "https://github.com/coreos/etcd/releases/download/v3.2.8/etcd-v3.2.8-linux-amd64.tar.gz"

Extract and install the etcd server and the etcdctl command line utility:

  1. tar -xvf etcd-v3.2.8-linux-amd64.tar.gz
  2. sudo mv etcd-v3.2.8-linux-amd64/etcd* /usr/local/bin/

Configure the etcd Server

  1. sudo mkdir -p /etc/etcd /var/lib/etcd
  2. sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/

The instance's internal IP address will be used to serve client requests and to communicate with etcd cluster peers. Retrieve the internal IP address of the current compute instance:

  1. INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  2. http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
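The metadata server is reachable only from inside a GCE instance, so it is worth sanity-checking the value before it is baked into the systemd unit below. A minimal, portable sketch; the hard-coded address here is an example standing in for the metadata lookup:

```shell
INTERNAL_IP="10.240.0.10"   # example value; on the instance, use the curl lookup above

# Reject empty values or anything containing characters other than digits and dots,
# e.g. an HTML error page returned instead of an address.
case "${INTERNAL_IP}" in
  ""|*[!0-9.]*) echo "unexpected INTERNAL_IP: '${INTERNAL_IP}'" >&2 ;;
  *)            echo "INTERNAL_IP looks valid: ${INTERNAL_IP}" ;;
esac
```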

Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:

  1. ETCD_NAME=$(hostname -s)

Create the etcd.service systemd unit file:

  1. cat > etcd.service <<EOF
  2. [Unit]
  3. Description=etcd
  4. Documentation=https://github.com/coreos
  5. [Service]
  6. ExecStart=/usr/local/bin/etcd \\
  7.   --name ${ETCD_NAME} \\
  8.   --cert-file=/etc/etcd/kubernetes.pem \\
  9.   --key-file=/etc/etcd/kubernetes-key.pem \\
  10.   --peer-cert-file=/etc/etcd/kubernetes.pem \\
  11.   --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  12.   --trusted-ca-file=/etc/etcd/ca.pem \\
  13.   --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  14.   --peer-client-cert-auth \\
  15.   --client-cert-auth \\
  16.   --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  17.   --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  18.   --listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
  19.   --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  20.   --initial-cluster-token etcd-cluster-0 \\
  21.   --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
  22.   --initial-cluster-state new \\
  23.   --data-dir=/var/lib/etcd
  24. Restart=on-failure
  25. RestartSec=5
  26. [Install]
  27. WantedBy=multi-user.target
  28. EOF
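A detail worth noting in the unit file above: because the EOF delimiter is unquoted, the shell expands ${ETCD_NAME} and ${INTERNAL_IP} while writing the file, and each \\ collapses to a single backslash, which systemd treats as a line continuation. A throwaway sketch of the same heredoc mechanics:

```shell
NAME="demo"   # stands in for ETCD_NAME

# Unquoted delimiter: ${NAME} is expanded, \\ becomes a literal backslash.
cat > /tmp/heredoc-example.txt <<EOF
--name ${NAME} \\
EOF

cat /tmp/heredoc-example.txt   # prints: --name demo \
```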

Start the etcd Server

  1. sudo mv etcd.service /etc/systemd/system/
  2. sudo systemctl daemon-reload
  3. sudo systemctl enable etcd
  4. sudo systemctl start etcd

Remember to run the above commands on each controller node: controller-0, controller-1, and controller-2.

Verification

List the etcd cluster members:

  1. ETCDCTL_API=3 etcdctl member list

output

  1. 3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
  2. f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
  3. ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
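Each line of member list output is comma-separated: member ID, status, name, peer URLs, client URLs. That makes it easy to script a check that all expected controllers have joined. The sketch below runs the extraction against the sample output above; on a live cluster, pipe etcdctl member list into the same awk invocation.

```shell
# Sample output captured above; on a live cluster use:
#   ETCDCTL_API=3 etcdctl member list
MEMBERS='3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379'

# Field 3 is the member name; sorting gives a stable list to compare against.
echo "${MEMBERS}" | awk -F', ' '{print $3}' | sort
```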

Next: Bootstrapping the Kubernetes Control Plane