Deploying the Kubernetes Controller Nodes

This section deploys the Kubernetes control plane services across three controller nodes and configures a highly available cluster architecture. It also creates a load balancer for external access. The services to be deployed on each controller node are the Kubernetes API Server, Scheduler, and Controller Manager.

Prerequisites

The following commands must be run on every controller node: controller-0, controller-1, and controller-2. You can log in to each controller node with the gcloud command. For example:

  gcloud compute ssh controller-0

You can use tmux to log in to all three controller nodes at the same time and speed up the deployment steps.
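As an alternative to tmux, the per-node commands can be scripted in a loop. The sketch below is illustrative only; run_on_controllers is a hypothetical local helper, not a standard tool, and it assumes gcloud SSH access works from your shell:

```shell
# Hypothetical helper: run the given command once per controller node,
# appending the node name as the last argument.
run_on_controllers() {
  for node in controller-0 controller-1 controller-2; do
    "$@" "${node}"
  done
}

# Real usage would be, e.g.: run_on_controllers gcloud compute ssh
# Demonstration with echo standing in for gcloud:
run_on_controllers echo "node:"
```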

Deploying the Kubernetes Control Plane

Create the Kubernetes configuration directory:

  sudo mkdir -p /etc/kubernetes/config

Download and install the Kubernetes controller binaries:

  wget -q --show-progress --https-only --timestamping \
    "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
    "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
    "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
    "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
  chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
  sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
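Before moving downloaded binaries into /usr/local/bin, it is good practice to verify them against the SHA-256 checksums published with the Kubernetes release. The sketch below shows the pattern only; the official checksum values are not reproduced here, so the demonstration computes one locally from a stand-in file:

```shell
# Hypothetical helper: compare a file's SHA-256 digest against an
# expected value. In real use the expected value would come from the
# official Kubernetes release checksums.
verify_sha256() {
  actual=$(sha256sum "$1" | awk '{print $1}')
  [ "${actual}" = "$2" ]
}

# Demonstration with a stand-in file instead of a real release binary:
tmpfile=$(mktemp)
printf 'fake-binary' > "${tmpfile}"
expected=$(sha256sum "${tmpfile}" | awk '{print $1}')
verify_sha256 "${tmpfile}" "${expected}" && echo "checksum OK"
```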

配置 Kubernetes API Server

  sudo mkdir -p /var/lib/kubernetes/
  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/

The node's internal IP address will be used to advertise the API Server to members of the cluster. First, retrieve the internal IP address of the current node:

  INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
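Since INTERNAL_IP is substituted directly into the systemd unit file below, it is worth sanity-checking that the metadata query actually returned an IPv4 address rather than an error page. This is a local sketch; is_ipv4 is a hypothetical helper, not part of any Kubernetes tooling:

```shell
# Hypothetical helper: check that a string is a dotted-quad IPv4
# address before substituting it into a config file.
is_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

# Demonstration with literal values:
is_ipv4 "10.240.0.10" && echo "valid"
is_ipv4 "metadata-error" || echo "invalid"
```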

Generate the kube-apiserver.service systemd unit file:

  cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
  [Unit]
  Description=Kubernetes API Server
  Documentation=https://github.com/kubernetes/kubernetes

  [Service]
  ExecStart=/usr/local/bin/kube-apiserver \\
    --advertise-address=${INTERNAL_IP} \\
    --allow-privileged=true \\
    --apiserver-count=3 \\
    --audit-log-maxage=30 \\
    --audit-log-maxbackup=3 \\
    --audit-log-maxsize=100 \\
    --audit-log-path=/var/log/audit.log \\
    --authorization-mode=Node,RBAC \\
    --bind-address=0.0.0.0 \\
    --client-ca-file=/var/lib/kubernetes/ca.pem \\
    --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
    --enable-swagger-ui=true \\
    --etcd-cafile=/var/lib/kubernetes/ca.pem \\
    --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
    --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
    --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
    --event-ttl=1h \\
    --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
    --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
    --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
    --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
    --kubelet-https=true \\
    --runtime-config=api/all \\
    --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
    --service-cluster-ip-range=10.32.0.0/24 \\
    --service-node-port-range=30000-32767 \\
    --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
    --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
    --v=2
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target
  EOF

Configure the Kubernetes Controller Manager

Generate the kube-controller-manager.service systemd unit file:

  sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
  cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
  [Unit]
  Description=Kubernetes Controller Manager
  Documentation=https://github.com/kubernetes/kubernetes

  [Service]
  ExecStart=/usr/local/bin/kube-controller-manager \\
    --address=0.0.0.0 \\
    --cluster-cidr=10.200.0.0/16 \\
    --cluster-name=kubernetes \\
    --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
    --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
    --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
    --leader-elect=true \\
    --root-ca-file=/var/lib/kubernetes/ca.pem \\
    --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
    --service-cluster-ip-range=10.32.0.0/24 \\
    --use-service-account-credentials=true \\
    --v=2
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target
  EOF

Configure the Kubernetes Scheduler

Generate the kube-scheduler.yaml configuration file and the kube-scheduler.service systemd unit file:

  sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
  cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
  apiVersion: componentconfig/v1alpha1
  kind: KubeSchedulerConfiguration
  clientConnection:
    kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
  leaderElection:
    leaderElect: true
  EOF
  cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
  [Unit]
  Description=Kubernetes Scheduler
  Documentation=https://github.com/kubernetes/kubernetes

  [Service]
  ExecStart=/usr/local/bin/kube-scheduler \\
    --config=/etc/kubernetes/config/kube-scheduler.yaml \\
    --v=2
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target
  EOF

Start the Controller Services

  sudo systemctl daemon-reload
  sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
  sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler

Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
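Instead of sleeping a fixed amount of time, you can poll the health endpoint until it responds. The sketch below is a local helper, not standard Kubernetes tooling; wait_for_url is a hypothetical name:

```shell
# Hypothetical helper: poll a URL until it responds successfully or the
# attempt budget is exhausted. Returns 0 on success, 1 on timeout.
wait_for_url() {
  url="$1"
  attempts="${2:-10}"
  i=0
  while [ "${i}" -lt "${attempts}" ]; do
    # -k skips certificate verification, since the API server's cert is
    # issued for cluster names rather than 127.0.0.1.
    if curl -k -s --max-time 2 "${url}" > /dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# On a controller node: wait_for_url https://127.0.0.1:6443/healthz
# Demonstration against a closed port, which fails after two attempts:
wait_for_url https://127.0.0.1:1/healthz 2 || echo "not ready yet"
```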

Enable HTTP Health Checks

A Google Network Load Balancer will be used to distribute traffic across the three API Servers, allowing each API Server to terminate TLS connections and validate client certificates. However, this load balancer only supports HTTP health checks, so nginx is installed here to proxy health check requests to the API Server's /healthz endpoint.

The /healthz API endpoint does not require authentication by default.

  sudo apt-get update
  sudo apt-get install -y nginx
  cat > kubernetes.default.svc.cluster.local <<EOF
  server {
    listen 80;
    server_name kubernetes.default.svc.cluster.local;

    location /healthz {
      proxy_pass https://127.0.0.1:6443/healthz;
      proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
    }
  }
  EOF
  sudo mv kubernetes.default.svc.cluster.local \
    /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
  sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
  sudo systemctl restart nginx
  sudo systemctl enable nginx

Verification

  kubectl get componentstatuses --kubeconfig admin.kubeconfig

Expected output:

  NAME                 STATUS    MESSAGE              ERROR
  controller-manager   Healthy   ok
  scheduler            Healthy   ok
  etcd-2               Healthy   {"health": "true"}
  etcd-0               Healthy   {"health": "true"}
  etcd-1               Healthy   {"health": "true"}

Verify the nginx HTTP health check:

  curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz

Expected output:

  HTTP/1.1 200 OK
  Server: nginx/1.14.0 (Ubuntu)
  Date: Mon, 14 May 2018 13:45:39 GMT
  Content-Type: text/plain; charset=utf-8
  Content-Length: 2
  Connection: keep-alive

  ok

Remember to run the commands above on every controller node: controller-0, controller-1, and controller-2.

RBAC for Kubelet Authorization

This section configures RBAC permissions that allow the Kubernetes API Server to access the Kubelet API. Access to the Kubelet API is required for retrieving metrics and logs, and for executing commands in containers.

This tutorial sets the Kubelet --authorization-mode flag to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization. The commands in this section only need to be run once, from one of the controller nodes:

  gcloud compute ssh controller-0

Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform the most common tasks associated with managing Pods:

  cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRole
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    labels:
      kubernetes.io/bootstrapping: rbac-defaults
    name: system:kube-apiserver-to-kubelet
  rules:
    - apiGroups:
        - ""
      resources:
        - nodes/proxy
        - nodes/stats
        - nodes/log
        - nodes/spec
        - nodes/metrics
      verbs:
        - "*"
  EOF

The Kubernetes API Server authenticates to the Kubelet as the kubernetes user, using the client certificate defined by the --kubelet-client-certificate flag.

Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:

  cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRoleBinding
  metadata:
    name: system:kube-apiserver
    namespace: ""
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:kube-apiserver-to-kubelet
  subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: kubernetes
  EOF

The Kubernetes Frontend Load Balancer

This section provisions an external load balancer in front of the Kubernetes API Servers. The kubernetes-the-hard-way static IP address will be attached to this load balancer.

The compute instances created in this tutorial do not have permission to complete this section. Run the following commands from the machine that was used to create those instances.

Create the external load balancer network resources:

  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  gcloud compute http-health-checks create kubernetes \
    --description "Kubernetes Health Check" \
    --host "kubernetes.default.svc.cluster.local" \
    --request-path "/healthz"

  gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
    --network kubernetes-the-hard-way \
    --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
    --allow tcp

  gcloud compute target-pools create kubernetes-target-pool \
    --http-health-check kubernetes

  gcloud compute target-pools add-instances kubernetes-target-pool \
    --instances controller-0,controller-1,controller-2

  gcloud compute forwarding-rules create kubernetes-forwarding-rule \
    --address ${KUBERNETES_PUBLIC_ADDRESS} \
    --ports 6443 \
    --region $(gcloud config get-value compute/region) \
    --target-pool kubernetes-target-pool

Verification

Retrieve the kubernetes-the-hard-way static IP address:

  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

Make an HTTP request for the Kubernetes version info:

  curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version

Expected output:

  {
    "major": "1",
    "minor": "12",
    "gitVersion": "v1.12.0",
    "gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0",
    "gitTreeState": "clean",
    "buildDate": "2018-09-27T16:55:41Z",
    "goVersion": "go1.10.4",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
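For scripting, a single field such as gitVersion can be extracted from the /version response with only sed, without installing a JSON tool. This is a sketch; a captured response string stands in for the live curl call:

```shell
# Sketch: pull the gitVersion field out of a /version JSON response.
# The response here is a stand-in for:
#   curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
response='{"major": "1", "minor": "12", "gitVersion": "v1.12.0"}'
version=$(echo "${response}" | sed -n 's/.*"gitVersion": *"\([^"]*\)".*/\1/p')
echo "${version}"   # → v1.12.0
```

For anything beyond a single field, a proper JSON parser such as jq is the more robust choice.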

Next: Deploying the Kubernetes Worker Nodes