05-2. Deploying the kube-apiserver Cluster

This document walks through deploying a three-instance kube-apiserver cluster.

Note: unless otherwise stated, all operations in this document are performed on the zhangjun-k8s-01 node.

Create the kubernetes-master certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes-master",
  "hosts": [
    "127.0.0.1",
    "172.27.138.251",
    "172.27.137.229",
    "172.27.138.239",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local.",
    "kubernetes.default.svc.${CLUSTER_DNS_DOMAIN}."
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF

  • The hosts field lists the IPs and domain names authorized to use this certificate; here it contains the master node IPs plus the IP and domain names of the kubernetes service;

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
ls kubernetes*pem
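To confirm that the hosts list above actually made it into the generated certificate, the Subject Alternative Names can be inspected with openssl. A small helper sketch (the function name is ours, not part of the guide):

```shell
# show_sans CERT.pem - print the Subject Alternative Names of a certificate,
# e.g. "show_sans kubernetes.pem" after the cfssl step above.
show_sans() {
  openssl x509 -noout -text -in "$1" | grep -A1 'Subject Alternative Name'
}
```

Every master IP and kubernetes service domain name from the CSR should appear in the output; a name missing here will cause TLS verification failures later.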

Copy the generated certificate and private key to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
  scp kubernetes*.pem root@${node_ip}:/etc/kubernetes/cert/
done

Create the encryption config file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
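The ${ENCRYPTION_KEY} variable is defined in environment.sh; the aescbc provider requires a base64-encoded 32-byte key. If you need to generate one by hand, the standard approach is (a sketch, independent of environment.sh):

```shell
# Generate a base64-encoded 32-byte random key suitable for the aescbc provider.
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
echo "${ENCRYPTION_KEY}"
```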

Copy the encryption config file to the /etc/kubernetes directory on each master node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp encryption-config.yaml root@${node_ip}:/etc/kubernetes/
done

Create the audit policy file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch
  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get
  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update
  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get
  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list
  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'
  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events
  # node and pod status calls from nodes are high-volume and can be large, don't log responses
  # for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch
  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection
  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch
  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF

Distribute the audit policy file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp audit-policy.yaml root@${node_ip}:/etc/kubernetes/audit-policy.yaml
done
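Flattened YAML is easy to corrupt when copying; a rough sanity check on the distributed file is to count its rule entries. A grep-based sketch (the function name is ours; it assumes the 2-space rule indentation used above, not a real YAML parse):

```shell
# count_rules FILE - rough check that the policy file still contains its rule
# entries; grep-based, tied to the "  - level:" indentation used in this guide.
count_rules() {
  grep -c '^  - level:' "$1"
}
```

The audit-policy.yaml written above should report the same count on every node.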

Create the certificate used later to access metrics-server or kube-prometheus

Create the certificate signing request:

cd /opt/k8s/work
cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF

  • The CN must appear in kube-apiserver's --requestheader-allowed-names parameter, otherwise later requests for metrics will be rejected with a permission error.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
ls proxy-client*.pem
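Because the CN of this certificate must match --requestheader-allowed-names exactly, it is worth double-checking before distributing it. A helper sketch (the function name is ours):

```shell
# cert_cn CERT.pem - print the Subject CN of a certificate; for proxy-client.pem
# this must print "aggregator" to match --requestheader-allowed-names.
cert_cn() {
  openssl x509 -noout -subject -in "$1" | sed -n 's/.*CN *= *\([^,/]*\).*/\1/p'
}
```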

Copy the generated certificate and private key to all master nodes:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp proxy-client*.pem root@${node_ip}:/etc/kubernetes/cert/
done

Create the kube-apiserver systemd unit template file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\
  --advertise-address=##NODE_IP## \\
  --default-not-ready-toleration-seconds=360 \\
  --default-unreachable-toleration-seconds=360 \\
  --feature-gates=DynamicAuditing=true \\
  --max-mutating-requests-inflight=2000 \\
  --max-requests-inflight=4000 \\
  --default-watch-cache-size=200 \\
  --delete-collection-workers=2 \\
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --bind-address=##NODE_IP## \\
  --secure-port=6443 \\
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
  --insecure-port=0 \\
  --audit-dynamic-configuration \\
  --audit-log-maxage=15 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-truncate-enabled \\
  --audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --profiling \\
  --anonymous-auth=false \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --enable-bootstrap-token-auth \\
  --requestheader-allowed-names="aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-admission-plugins=NodeRestriction \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --event-ttl=168h \\
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
  --kubelet-https=true \\
  --kubelet-timeout=10s \\
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • --advertise-address: the IP the apiserver advertises to the cluster (used as an endpoint IP of the kubernetes service);
  • --default-*-toleration-seconds: thresholds for tolerating node problems;
  • --max-*-requests-inflight: upper limits on in-flight requests;
  • --etcd-*: certificates for accessing etcd, and the etcd server addresses;
  • --bind-address: the IP the https endpoint listens on; it must not be 127.0.0.1, otherwise the secure port 6443 is unreachable from outside;
  • --secure-port: the https listening port;
  • --insecure-port=0: disables the insecure http port (8080);
  • --tls-*-file: the certificate, private key, and CA files used by the apiserver;
  • --audit-*: parameters for the audit policy and audit log files;
  • --client-ca-file: verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
  • --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
  • --requestheader-*: configuration for kube-apiserver's aggregation layer, needed by proxy-client and the HPA;
  • --requestheader-client-ca-file: the CA that signs the certificates specified by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
  • --requestheader-allowed-names: must not be empty; a comma-separated list of the CN names of the --proxy-client-cert-file certificates, set to "aggregator" here;
  • --service-account-key-file: the public key file used to verify ServiceAccount Tokens; it pairs with the private key that kube-controller-manager's --service-account-private-key-file specifies;
  • --runtime-config=api/all=true: enables all API versions, such as autoscaling/v2alpha1;
  • --authorization-mode=Node,RBAC and --anonymous-auth=false: enable the Node and RBAC authorization modes and reject unauthorized requests;
  • --enable-admission-plugins: enables admission plugins that are off by default;
  • --allow-privileged: allows running privileged containers;
  • --apiserver-count=3: the number of apiserver instances;
  • --event-ttl: how long events are retained;
  • --kubelet-*: if specified, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the user of the client certificate (the kubernetes*.pem certificate above uses the user kubernetes), otherwise kubelet API access is rejected as unauthorized;
  • --proxy-client-*: the certificate the apiserver uses to access metrics-server;
  • --service-cluster-ip-range: the Service cluster IP range;
  • --service-node-port-range: the NodePort port range;

If the kube-apiserver machines are not running kube-proxy, the --enable-aggregator-routing=true parameter must also be added;

For the --requestheader-XXX parameters, see:

Note:

  1. the CA certificate specified by --requestheader-client-ca-file must support both the client auth and server auth usages;
  2. if --requestheader-allowed-names is not empty and the CN of the --proxy-client-cert-file certificate is not in the allowed names, later queries of node or pod metrics fail with:

$ kubectl top nodes
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "aggregator" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope

Create and distribute the kube-apiserver systemd unit files for each node

Substitute the variables in the template to generate a systemd unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
do
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${NODE_IPS[i]}.service
done
ls kube-apiserver*.service

  • NODE_NAMES and NODE_IPS are bash arrays of the same length, holding the node names and their corresponding IPs;
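An apiserver started from a unit file with an unreplaced placeholder will fail to bind, so it is worth verifying the sed substitution before distributing. A check sketch (the function name is ours):

```shell
# check_rendered FILE... - succeed only if none of the rendered unit files
# still contains an ##UPPERCASE## placeholder left over from the template.
check_rendered() {
  ! grep -l '##[A-Z_]\{1,\}##' "$@"
}
```

For example, `check_rendered kube-apiserver-*.service` after the loop above; a non-zero exit (with the offending filenames printed) means a substitution was missed.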

Distribute the generated systemd unit files:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
done

Start the kube-apiserver service

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
done

Check that kube-apiserver is running

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

journalctl -u kube-apiserver

Check the cluster status

$ kubectl cluster-info
Kubernetes master is running at https://172.27.138.251:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   3m53s

$ kubectl get componentstatuses
NAME                 AGE
controller-manager   <unknown>
scheduler            <unknown>
etcd-0               <unknown>
etcd-2               <unknown>
etcd-1               <unknown>

  • Kubernetes 1.16.6 has a bug that makes this result stay <unknown>, but kubectl get cs -o yaml returns the correct result;

Check the ports kube-apiserver listens on

$ sudo netstat -lnpt|grep kube
tcp        0      0 172.27.138.251:6443    0.0.0.0:*    LISTEN    101442/kube-apiserv

  • 6443: the secure port that receives https requests; all requests on it are authenticated and authorized;
  • since the insecure port was disabled, there is no listener on 8080;
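Beyond checking the listener, an end-to-end probe of the secure port is to request /healthz with a client certificate trusted by the cluster CA. A sketch; the admin.pem / admin-key.pem paths are assumptions based on the kubectl chapter earlier in this guide, so adjust them to wherever your admin client certificate lives:

```shell
# apiserver_healthz IP - request /healthz on the secure port with a client
# certificate; prints "ok" when the apiserver is healthy. The admin.pem and
# admin-key.pem paths are assumed, not mandated by this document.
apiserver_healthz() {
  curl -s --cacert /etc/kubernetes/cert/ca.pem \
    --cert /opt/k8s/work/admin.pem \
    --key /opt/k8s/work/admin-key.pem \
    "https://$1:6443/healthz"
}
```

For example, `apiserver_healthz 172.27.138.251` should print "ok" once the service above is running.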