Deploying the Kubernetes Worker Nodes

This lab deploys three Kubernetes worker nodes. The following components will be installed on each node: runc, gVisor, container networking plugins, containerd, kubelet, and kube-proxy.

Prerequisites

The commands in this lab must be run on every worker node: worker-0, worker-1, and worker-2. Log in to a worker node with the gcloud command, for example:

  gcloud compute ssh worker-0

You can use tmux to log in to all three worker nodes at the same time and speed up the deployment, as sketched below.
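One possible layout (not required by this guide, and assuming tmux is installed on your workstation) is one pane per worker with keystrokes broadcast to all panes:

  # Create a session with three panes (one per worker node)
  tmux new-session -d -s workers
  tmux split-window -h -t workers
  tmux split-window -v -t workers
  tmux attach -t workers
  # In each pane, SSH to a different node:
  #   gcloud compute ssh worker-0 / worker-1 / worker-2
  # Then broadcast the same keystrokes to every pane:
  #   Ctrl-b :setw synchronize-panes on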

Provisioning a Kubernetes Worker Node

Install the OS dependencies:

  sudo apt-get update
  sudo apt-get -y install socat conntrack ipset

The socat binary enables support for the kubectl port-forward command.
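As an optional sanity check (not part of the original steps), you can confirm the packages landed on each node; the exact version strings depend on the Ubuntu release:

  # Print the versions of the freshly installed dependencies
  socat -V | head -n 1
  conntrack --version
  ipset --version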

Download and install the worker binaries:

  wget -q --show-progress --https-only --timestamping \
    https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
    https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
    https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
    https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
    https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \
    https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
    https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
    https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet
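Before installing, you may want to confirm that all eight artifacts downloaded completely; this check is an optional addition to the original steps:

  # List the downloaded artifacts and their sizes
  ls -lh crictl-v1.12.0-linux-amd64.tar.gz \
         runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
         runc.amd64 \
         cni-plugins-amd64-v0.6.0.tgz \
         containerd-1.2.0-rc.0.linux-amd64.tar.gz \
         kubectl kube-proxy kubelet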

Create the installation directories:

  sudo mkdir -p \
    /etc/cni/net.d \
    /opt/cni/bin \
    /var/lib/kubelet \
    /var/lib/kube-proxy \
    /var/lib/kubernetes \
    /var/run/kubernetes

Install the worker binaries:

  sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
  sudo mv runc.amd64 runc
  chmod +x kubectl kube-proxy kubelet runc runsc
  sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
  sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
  sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
  sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
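To confirm the binaries are installed and executable, a quick optional check (the version output format differs per tool, but each should match the versions downloaded above):

  runc --version
  runsc --version
  crictl --version
  containerd --version
  kubelet --version
  kube-proxy --version
  kubectl version --client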

Configure CNI Networking

Retrieve the Pod CIDR range for the current compute instance:

  POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
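Before generating the CNI config it is worth confirming the variable is populated; assuming the instances were created with a pod-cidr metadata attribute of the form 10.200.X.0/24, as in the earlier provisioning lab:

  # Expected to print this node's /24 range, e.g. 10.200.0.0/24 on worker-0
  echo ${POD_CIDR}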

Create the bridge network configuration file:

  cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
  {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "type": "bridge",
      "bridge": "cnio0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
          "type": "host-local",
          "ranges": [
            [{"subnet": "${POD_CIDR}"}]
          ],
          "routes": [{"dst": "0.0.0.0/0"}]
      }
  }
  EOF

Create the loopback network configuration file:

  cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
  {
      "cniVersion": "0.3.1",
      "type": "loopback"
  }
  EOF
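Both files must be valid JSON for the CNI plugins to load them. A minimal syntax check, assuming python3 is available on the node image used in this guide:

  # Fails with a parse error if either CNI config file is malformed
  for f in /etc/cni/net.d/*.conf; do
    python3 -m json.tool < "${f}" > /dev/null && echo "${f}: OK"
  done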

Configure containerd

  sudo mkdir -p /etc/containerd/

  # Untrusted workloads will be run using the gVisor (runsc) runtime.
  cat << EOF | sudo tee /etc/containerd/config.toml
  [plugins]
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
      [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = "/usr/local/bin/runc"
        runtime_root = ""
      [plugins.cri.containerd.untrusted_workload_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = "/usr/local/bin/runsc"
        runtime_root = "/run/containerd/runsc"
      [plugins.cri.containerd.gvisor]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = "/usr/local/bin/runsc"
        runtime_root = "/run/containerd/runsc"
  EOF

  # Create the containerd.service systemd unit file
  cat <<EOF | sudo tee /etc/systemd/system/containerd.service
  [Unit]
  Description=containerd container runtime
  Documentation=https://containerd.io
  After=network.target

  [Service]
  ExecStartPre=/sbin/modprobe overlay
  ExecStart=/bin/containerd
  Restart=always
  RestartSec=5
  Delegate=yes
  KillMode=process
  OOMScoreAdjust=-999
  LimitNOFILE=1048576
  LimitNPROC=infinity
  LimitCORE=infinity

  [Install]
  WantedBy=multi-user.target
  EOF
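With this configuration, the CRI plugin in containerd selects the untrusted_workload_runtime (gVisor's runsc) for pods that carry the io.kubernetes.cri.untrusted-workload annotation; regular pods keep using runc. As an illustration only (the pod name and image are placeholders, and kubectl access is only configured in a later lab), such a pod manifest would look like:

  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: untrusted
    annotations:
      io.kubernetes.cri.untrusted-workload: "true"
  spec:
    containers:
      - name: webserver
        image: nginx
  EOF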

Configure the Kubelet

  sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
  sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
  sudo mv ca.pem /var/lib/kubernetes/
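The certificate, key and kubeconfig referenced here were generated in the earlier TLS and kubeconfig labs and copied to each node; ${HOSTNAME} expands to worker-0, worker-1 or worker-2. An optional check that everything is in place before continuing:

  # Expect the node certificate, key and kubeconfig under /var/lib/kubelet,
  # and ca.pem under /var/lib/kubernetes
  ls -l /var/lib/kubelet/ /var/lib/kubernetes/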

Create the kubelet-config.yaml configuration file and the kubelet.service systemd unit file:

  # The resolvConf configuration is used to avoid loops
  # when using CoreDNS for service discovery on systems running systemd-resolved.
  cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
  kind: KubeletConfiguration
  apiVersion: kubelet.config.k8s.io/v1beta1
  authentication:
    anonymous:
      enabled: false
    webhook:
      enabled: true
    x509:
      clientCAFile: "/var/lib/kubernetes/ca.pem"
  authorization:
    mode: Webhook
  clusterDomain: "cluster.local"
  clusterDNS:
    - "10.32.0.10"
  podCIDR: "${POD_CIDR}"
  resolvConf: "/run/systemd/resolve/resolv.conf"
  runtimeRequestTimeout: "15m"
  tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
  tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
  EOF

  cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
  [Unit]
  Description=Kubernetes Kubelet
  Documentation=https://github.com/kubernetes/kubernetes
  After=containerd.service
  Requires=containerd.service

  [Service]
  ExecStart=/usr/local/bin/kubelet \\
    --config=/var/lib/kubelet/kubelet-config.yaml \\
    --container-runtime=remote \\
    --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
    --image-pull-progress-deadline=2m \\
    --kubeconfig=/var/lib/kubelet/kubeconfig \\
    --network-plugin=cni \\
    --register-node=true \\
    --v=2
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target
  EOF
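The resolvConf setting in kubelet-config.yaml only helps if the node actually runs systemd-resolved and exposes the upstream resolvers at that path; a hedged way to confirm this on the Ubuntu images used in this guide:

  # Should report "active" and list a resolv.conf with upstream nameservers
  systemctl is-active systemd-resolved
  ls -l /run/systemd/resolve/resolv.conf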

Configure Kube-Proxy

  sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

Create the kube-proxy-config.yaml configuration file and the kube-proxy.service systemd unit file:

  cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
  kind: KubeProxyConfiguration
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  clientConnection:
    kubeconfig: "/var/lib/kube-proxy/kubeconfig"
  mode: "iptables"
  clusterCIDR: "10.200.0.0/16"
  EOF

  cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
  [Unit]
  Description=Kubernetes Kube Proxy
  Documentation=https://github.com/kubernetes/kubernetes

  [Service]
  ExecStart=/usr/local/bin/kube-proxy \\
    --config=/var/lib/kube-proxy/kube-proxy-config.yaml
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target
  EOF
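Here mode: "iptables" makes kube-proxy implement Services as NAT rules, and clusterCIDR must enclose every per-node Pod CIDR (10.200.0.0/24, 10.200.1.0/24 and 10.200.2.0/24 all sit inside 10.200.0.0/16). Once the services are started in the next step, the programmed chains can be inspected; an illustrative check whose output will vary with the cluster state:

  # The KUBE-SERVICES chain appears once kube-proxy is running
  sudo iptables -t nat -L KUBE-SERVICES -n | head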

Start the Worker Services

  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy
  sudo systemctl start containerd kubelet kube-proxy
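If any of the three services fails to come up, the systemd status and journal are the first places to look; these checks are an optional addition to the original steps:

  # Each line should print "active"
  sudo systemctl is-active containerd kubelet kube-proxy
  # Inspect recent kubelet logs if a node does not register
  sudo journalctl -u kubelet --no-pager | tail -n 30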

Remember to run the commands above on every worker node: worker-0, worker-1, and worker-2.

Verification

From any one of the controller instances, list the registered Kubernetes nodes:

  gcloud compute ssh controller-0 \
    --command "kubectl get nodes --kubeconfig admin.kubeconfig"

Output:

  NAME       STATUS   ROLES    AGE   VERSION
  worker-0   Ready    <none>   35s   v1.12.0
  worker-1   Ready    <none>   36s   v1.12.0
  worker-2   Ready    <none>   36s   v1.12.0

Next: Configuring kubectl