04. Deploying the etcd Cluster

etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration, and concurrency control (such as leader election and distributed locks). Kubernetes uses etcd to store all of its runtime data.

This document describes how to deploy a three-node highly available etcd cluster:

  • download and distribute the etcd binaries;
  • create x509 certificates for the etcd cluster nodes, used to encrypt traffic between clients (such as etcdctl) and the cluster, and between cluster members;
  • create the etcd systemd unit files and configure the service parameters;
  • check that the cluster is working.

The names and IPs of the etcd cluster nodes are as follows:

  • m7-demo-136001:172.27.136.1
  • m7-demo-136002:172.27.136.2
  • m7-demo-136003:172.27.136.3
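
The scripts below repeatedly source /opt/k8s/bin/environment.sh. For reference, here is a minimal sketch of the variables those scripts rely on; the variable names match the commands in this document, but the directory paths are assumptions and your actual environment.sh may define more:

  #!/usr/bin/env bash
  # Node names and IPs; the two arrays must have the same length and order.
  export NODE_NAMES=(m7-demo-136001 m7-demo-136002 m7-demo-136003)
  export NODE_IPS=(172.27.136.1 172.27.136.2 172.27.136.3)

  # etcd data and WAL directories (assumed paths; ideally on separate disks).
  export ETCD_DATA_DIR="/data/k8s/etcd/data"
  export ETCD_WAL_DIR="/data/k8s/etcd/wal"

  # --initial-cluster value: name=https://ip:2380 entries, comma-separated.
  export ETCD_NODES="m7-demo-136001=https://172.27.136.1:2380,m7-demo-136002=https://172.27.136.2:2380,m7-demo-136003=https://172.27.136.3:2380"

  # Client endpoints used by etcdctl.
  export ETCD_ENDPOINTS="https://172.27.136.1:2379,https://172.27.136.2:2379,https://172.27.136.3:2379"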

Download and Distribute the etcd Binaries

Download the release tarball (v3.3.9 here) from the https://github.com/coreos/etcd/releases page:

  cd /opt/k8s/work
  wget https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz
  tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
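
Before distributing, it is worth a quick sanity check that the unpacked binary runs and reports the expected version (paths follow the tarball layout above):

  cd /opt/k8s/work
  ./etcd-v3.3.9-linux-amd64/etcd --version   # expect: etcd Version: 3.3.9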

Distribute the binaries to all cluster nodes:

  cd /opt/k8s/work
  source /opt/k8s/bin/environment.sh
  for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      scp etcd-v3.3.9-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
      ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
    done
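
To confirm that every node received an executable copy, a loop like the following can be used (same NODE_IPS array as above):

  source /opt/k8s/bin/environment.sh
  for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      ssh root@${node_ip} "/opt/k8s/bin/etcd --version | head -1"
    done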

Create the etcd Certificate and Private Key

Create the certificate signing request:

  cd /opt/k8s/work
  cat > etcd-csr.json <<EOF
  {
    "CN": "etcd",
    "hosts": [
      "127.0.0.1",
      "172.27.136.1",
      "172.27.136.2",
      "172.27.136.3"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "4Paradigm"
      }
    ]
  }
  EOF
  • The hosts field lists the IPs or domain names of the etcd nodes authorized to use this certificate; here the IPs of all three etcd cluster nodes are included;

Generate the certificate and private key:

  cd /opt/k8s/work
  cfssl gencert -ca=/opt/k8s/work/ca.pem \
    -ca-key=/opt/k8s/work/ca-key.pem \
    -config=/opt/k8s/work/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
  ls etcd*pem
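
To verify that the hosts entries actually made it into the certificate, inspect its Subject Alternative Name field with a standard openssl command:

  openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
  # expect: IP Address:127.0.0.1, IP Address:172.27.136.1, IP Address:172.27.136.2, IP Address:172.27.136.3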

Distribute the generated certificate and private key to each etcd node:

  cd /opt/k8s/work
  source /opt/k8s/bin/environment.sh
  for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
      scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
      scp /opt/k8s/work/ca.pem root@${node_ip}:/etc/etcd/cert/
    done

Create the etcd systemd Unit Template File

  cd /opt/k8s/work
  source /opt/k8s/bin/environment.sh
  cat > etcd.service.template <<EOF
  [Unit]
  Description=Etcd Server
  After=network.target
  After=network-online.target
  Wants=network-online.target
  Documentation=https://github.com/coreos

  [Service]
  Type=notify
  WorkingDirectory=${ETCD_DATA_DIR}
  ExecStart=/opt/k8s/bin/etcd \\
    --data-dir=${ETCD_DATA_DIR} \\
    --wal-dir=${ETCD_WAL_DIR} \\
    --name=##NODE_NAME## \\
    --cert-file=/etc/etcd/cert/etcd.pem \\
    --key-file=/etc/etcd/cert/etcd-key.pem \\
    --trusted-ca-file=/etc/etcd/cert/ca.pem \\
    --peer-cert-file=/etc/etcd/cert/etcd.pem \\
    --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
    --peer-trusted-ca-file=/etc/etcd/cert/ca.pem \\
    --peer-client-cert-auth \\
    --client-cert-auth \\
    --listen-peer-urls=https://##NODE_IP##:2380 \\
    --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
    --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
    --advertise-client-urls=https://##NODE_IP##:2379 \\
    --initial-cluster-token=etcd-cluster-0 \\
    --initial-cluster=${ETCD_NODES} \\
    --initial-cluster-state=new \\
    --auto-compaction-mode=periodic \\
    --auto-compaction-retention=1 \\
    --max-request-bytes=33554432 \\
    --quota-backend-bytes=6442450944 \\
    --heartbeat-interval=250 \\
    --election-timeout=2000
  Restart=always
  RestartSec=5
  StartLimitInterval=0
  LimitNOFILE=65536

  [Install]
  WantedBy=multi-user.target
  EOF
  • WorkingDirectory, --data-dir: set the working directory and data directory to ${ETCD_DATA_DIR}; this directory must be created before the service is started;
  • --wal-dir: specifies the WAL directory; for better performance it is usually placed on an SSD, or at least on a different disk than the data directory;
  • --name: specifies the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
  • --cert-file, --key-file: certificate and private key used when the etcd server communicates with clients;
  • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
  • --peer-cert-file, --peer-key-file: certificate and private key used when etcd communicates with its peers;
  • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them;
  • --quota-backend-bytes=6442450944: sets the backend storage quota to 6 GB (6 × 1024³ bytes); etcd supports at most 8 GB;
  • --auto-compaction-retention=1: compact the keyspace every hour, improving performance and saving disk space;
  • --heartbeat-interval=250, --election-timeout=2000: on slow disks, increase the heartbeat interval and election timeout;

Create and Distribute the Per-Node etcd systemd Unit Files

Substitute the variables in the template file to create a systemd unit file for each node:

  cd /opt/k8s/work
  source /opt/k8s/bin/environment.sh
  for (( i=0; i < 3; i++ ))
    do
      sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service
    done
  ls etcd-*.service
  • NODE_NAMES and NODE_IPS are bash arrays of the same length, holding the node names and their corresponding IPs;
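
A quick spot check on one rendered file confirms that the placeholders were substituted (the grep pattern below is just an example):

  grep -E -e '--name=|--listen-peer-urls=' etcd-172.27.136.1.service
  # expect --name=m7-demo-136001 and --listen-peer-urls=https://172.27.136.1:2380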

Distribute the generated systemd unit files:

  cd /opt/k8s/work
  source /opt/k8s/bin/environment.sh
  for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
      scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
    done
  • the etcd data and working directories must be created before the service is started;
  • the file is renamed to etcd.service on each node;

The complete unit file is available here: etcd.service

Start the etcd Service

  source /opt/k8s/bin/environment.sh
  for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd"
      ssh root@${node_ip} "nohup systemctl restart etcd &>/dev/null &"
    done
  • On first start, an etcd process waits for the other nodes to join the cluster, so systemctl start etcd may block for a while; this is normal.
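
Once all three nodes are up, each should be listening on the client port (2379) and the peer port (2380); a quick check:

  source /opt/k8s/bin/environment.sh
  for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      ssh root@${node_ip} "ss -lnt | grep -E ':2379|:2380'"
    done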

Check the Startup Result

  source /opt/k8s/bin/environment.sh
  for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      ssh root@${node_ip} "systemctl status etcd|grep Active"
    done

Make sure the status is active (running); otherwise inspect the logs to find out why:

  $ journalctl -u etcd

Verify the Service Status

After the etcd cluster has been deployed, run the following commands on any etcd node:

  source /opt/k8s/bin/environment.sh
  for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
        --endpoints=https://${node_ip}:2379 \
        --cacert=/opt/k8s/work/ca.pem \
        --cert=/etc/etcd/cert/etcd.pem \
        --key=/etc/etcd/cert/etcd-key.pem endpoint health
    done

Expected output:

  >>> 172.27.136.1
  https://172.27.136.1:2379 is healthy: successfully committed proposal: took = 1.752718ms
  >>> 172.27.136.2
  https://172.27.136.2:2379 is healthy: successfully committed proposal: took = 5.709898ms
  >>> 172.27.136.3
  https://172.27.136.3:2379 is healthy: successfully committed proposal: took = 2.057555ms

When every endpoint reports healthy, the cluster is serving normally.
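
Beyond endpoint health, a simple write/read/delete round trip exercises the whole consensus path. A minimal smoke test (the key /test/smoke is arbitrary):

  source /opt/k8s/bin/environment.sh
  FLAGS="--endpoints=${ETCD_ENDPOINTS} --cacert=/opt/k8s/work/ca.pem --cert=/etc/etcd/cert/etcd.pem --key=/etc/etcd/cert/etcd-key.pem"
  ETCDCTL_API=3 /opt/k8s/bin/etcdctl ${FLAGS} put /test/smoke ok   # prints: OK
  ETCDCTL_API=3 /opt/k8s/bin/etcdctl ${FLAGS} get /test/smoke     # prints the key, then the value
  ETCDCTL_API=3 /opt/k8s/bin/etcdctl ${FLAGS} del /test/smoke     # prints the number of keys deleted: 1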

Check the Current Leader

  source /opt/k8s/bin/environment.sh
  ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    -w table --cacert=/opt/k8s/work/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem \
    --endpoints=${ETCD_ENDPOINTS} endpoint status

Output:

  +---------------------------+------------------+---------+---------+-----------+-----------+------------+
  |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
  +---------------------------+------------------+---------+---------+-----------+-----------+------------+
  | https://172.27.136.1:2379 | f3ebc028ed75e1d5 | 3.3.8   | 133 MB  | false     | 1046      | 15855129   |
  | https://172.27.136.2:2379 | b6c6ce02ecf2d0e5 | 3.3.8   | 134 MB  | false     | 1046      | 15855129   |
  | https://172.27.136.3:2379 | fe52f375f370dc54 | 3.3.8   | 133 MB  | true      | 1046      | 15855130   |
  +---------------------------+------------------+---------+---------+-----------+-----------+------------+
  • As shown above, the current leader is 172.27.136.3;
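
etcdctl also provides member list, which shows each member's ID, name, and peer/client URLs (same flags as above):

  source /opt/k8s/bin/environment.sh
  ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    -w table --cacert=/opt/k8s/work/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem \
    --endpoints=${ETCD_ENDPOINTS} member list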