Deploying the Node Components

This chapter uses the k8snode1 node as an example.

Preparing the Environment

    # A proxy must be configured on internal networks
    $ dnf install -y docker iSulad conntrack-tools socat containernetworking-plugins
    $ swapoff -a
    $ mkdir -p /etc/kubernetes/pki/
    $ mkdir -p /etc/cni/net.d
    $ mkdir -p /opt/cni
    # Remove the default kubeconfig
    $ rm /etc/kubernetes/kubelet.kubeconfig

    ## Using iSulad as the runtime ########
    # Configure iSulad. daemon.json must be valid JSON (no comments, no
    # trailing commas). Notes on the fields below:
    # - pod-sandbox-image: the pause image
    # - network-plugin: "cni" enables the CNI network plugin; leaving it
    #   empty disables CNI, in which case the two paths below are ignored.
    #   After installing the plugins, restart isulad.
    $ cat /etc/isulad/daemon.json
    {
        "registry-mirrors": [
            "docker.io"
        ],
        "insecure-registries": [
            "k8s.gcr.io",
            "quay.io"
        ],
        "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
        "network-plugin": "cni",
        "cni-bin-dir": "/usr/libexec/cni/",
        "cni-conf-dir": "/etc/cni/net.d"
    }

    # Add a proxy to the iSulad environment so it can pull images; replace
    # name, password, and proxy with your actual settings.
    $ cat /usr/lib/systemd/system/isulad.service
    [Service]
    Type=notify
    Environment="HTTP_PROXY=http://name:password@proxy:8080"
    Environment="HTTPS_PROXY=http://name:password@proxy:8080"

    # Restart iSulad and enable it at boot
    $ systemctl daemon-reload
    $ systemctl restart isulad
    $ systemctl enable isulad

    ## If docker is used as the runtime ########
    $ dnf install -y docker
    # In environments that need a proxy, configure one for docker: create the
    # file http-proxy.conf with the following content, replacing name,
    # password, and proxy-addr with your actual settings.
    $ cat /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://name:password@proxy-addr:8080"
    $ systemctl daemon-reload
    $ systemctl restart docker
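The preparation steps above can be sanity-checked with a short script. It only reads system state, so it is safe to run on any node; the paths are the ones created above:

```shell
# Sanity checks for the preparation steps: swap must be off, and the
# directories used by kubernetes and CNI must exist.
check_dir() {
    if [ -d "$1" ]; then
        echo "OK: $1 exists"
    else
        echo "WARN: $1 missing"
    fi
}

if [ -z "$(swapon --show 2>/dev/null)" ]; then
    echo "OK: swap is off"
else
    echo "WARN: swap is still enabled"
fi

for d in /etc/kubernetes/pki /etc/cni/net.d /opt/cni; do
    check_dir "$d"
done
```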

Creating the kubeconfig Configuration Files

Create the configuration file for each node in turn as follows:

    $ kubectl config set-cluster openeuler-k8s \
        --certificate-authority=/etc/kubernetes/pki/ca.pem \
        --embed-certs=true \
        --server=https://192.168.122.154:6443 \
        --kubeconfig=k8snode1.kubeconfig

    $ kubectl config set-credentials system:node:k8snode1 \
        --client-certificate=/etc/kubernetes/pki/k8snode1.pem \
        --client-key=/etc/kubernetes/pki/k8snode1-key.pem \
        --embed-certs=true \
        --kubeconfig=k8snode1.kubeconfig

    $ kubectl config set-context default \
        --cluster=openeuler-k8s \
        --user=system:node:k8snode1 \
        --kubeconfig=k8snode1.kubeconfig

    $ kubectl config use-context default --kubeconfig=k8snode1.kubeconfig

Note: replace k8snode1 with the corresponding node name.
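The per-node repetition can be scripted. The sketch below assumes the node names and apiserver address used in this document; it skips quietly when kubectl or the CA certificate is not available:

```shell
# Generate one kubeconfig per node. NODES and API_SERVER follow this
# document's example cluster; adjust them to your environment.
NODES="k8snode1 k8snode2 k8snode3"
API_SERVER="https://192.168.122.154:6443"
PKI=/etc/kubernetes/pki

if ! command -v kubectl >/dev/null 2>&1 || [ ! -f "$PKI/ca.pem" ]; then
    echo "kubectl or $PKI/ca.pem not available, skipping"
else
    for node in $NODES; do
        kubectl config set-cluster openeuler-k8s \
            --certificate-authority="$PKI/ca.pem" \
            --embed-certs=true \
            --server="$API_SERVER" \
            --kubeconfig="$node.kubeconfig"
        kubectl config set-credentials "system:node:$node" \
            --client-certificate="$PKI/$node.pem" \
            --client-key="$PKI/$node-key.pem" \
            --embed-certs=true \
            --kubeconfig="$node.kubeconfig"
        kubectl config set-context default \
            --cluster=openeuler-k8s \
            --user="system:node:$node" \
            --kubeconfig="$node.kubeconfig"
        kubectl config use-context default --kubeconfig="$node.kubeconfig"
    done
fi
```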

Copying the Certificates

As on the control plane, all certificates, keys, and related configuration files are placed in the /etc/kubernetes/pki/ directory.

    $ ls /etc/kubernetes/pki/
    ca.pem            k8snode1.kubeconfig  kubelet_config.yaml     kube-proxy-key.pem     kube-proxy.pem
    k8snode1-key.pem  k8snode1.pem         kube_proxy_config.yaml  kube-proxy.kubeconfig
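If the certificates were generated on the master, they can be distributed with scp. A sketch, assuming root SSH access to the node and the file layout above:

```shell
# Distribute the node certificates and configuration files from the machine
# where they were generated. The node name follows this document's example.
PKI=/etc/kubernetes/pki
NODE=k8snode1

if [ -f "$PKI/ca.pem" ] && command -v scp >/dev/null 2>&1; then
    scp "$PKI/ca.pem" "$PKI/$NODE.pem" "$PKI/$NODE-key.pem" \
        "$NODE.kubeconfig" "root@$NODE:$PKI/" \
        || echo "WARN: scp to $NODE failed"
else
    echo "certificates or scp not available, skipping"
fi
```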

CNI Network Configuration

Start with containernetworking-plugins as the CNI plugin used by kubelet; plugins such as calico or flannel can be introduced later to enhance the cluster's networking capabilities.

    # Bridge network configuration
    $ cat /etc/cni/net.d/10-bridge.conf
    {
        "cniVersion": "0.3.1",
        "name": "bridge",
        "type": "bridge",
        "bridge": "cnio0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "gateway": "10.244.0.1"
        },
        "dns": {
            "nameservers": [
                "10.244.0.1"
            ]
        }
    }

    # Loopback network configuration
    $ cat /etc/cni/net.d/99-loopback.conf
    {
        "cniVersion": "0.3.1",
        "name": "lo",
        "type": "loopback"
    }
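A stray comment or trailing comma makes a CNI configuration file invalid JSON, and the plugin then silently fails to load. The files above can be checked with a small helper (assumes python3 is available on the node):

```shell
# validate_cni_conf: succeed only if the given file parses as JSON.
validate_cni_conf() {
    python3 -m json.tool "$1" > /dev/null 2>&1
}

# Check every CNI config file in the directory configured above.
for f in /etc/cni/net.d/*.conf; do
    [ -f "$f" ] || continue
    if validate_cni_conf "$f"; then
        echo "OK: $f"
    else
        echo "INVALID JSON: $f"
    fi
done
```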

Deploying the kubelet Service

Configuration file required by kubelet

    $ cat /etc/kubernetes/pki/kubelet_config.yaml
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.pem
    authorization:
      mode: Webhook
    clusterDNS:
    - 10.32.0.10
    clusterDomain: cluster.local
    runtimeRequestTimeout: "15m"
    tlsCertFile: "/etc/kubernetes/pki/k8snode1.pem"
    tlsPrivateKeyFile: "/etc/kubernetes/pki/k8snode1-key.pem"

Note: the clusterDNS address is 10.32.0.10; it must be consistent with the service-cluster-ip-range set earlier, i.e. an address inside that range.

Writing the systemd configuration file

    $ cat /usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=kubelet: The Kubernetes Node Agent
    Documentation=https://kubernetes.io/docs/
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/bin/kubelet \
        --config=/etc/kubernetes/pki/kubelet_config.yaml \
        --network-plugin=cni \
        --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
        --kubeconfig=/etc/kubernetes/pki/k8snode1.kubeconfig \
        --register-node=true \
        --hostname-override=k8snode1 \
        --cni-bin-dir="/usr/libexec/cni/" \
        --v=2
    Restart=always
    StartLimitInterval=0
    RestartSec=10

    [Install]
    WantedBy=multi-user.target

Note: if iSulad is used as the runtime, the following options must be added:

    --container-runtime=remote \
    --container-runtime-endpoint=unix:///var/run/isulad.sock \

Deploying kube-proxy

Configuration file required by kube-proxy

    $ cat /etc/kubernetes/pki/kube_proxy_config.yaml
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      kubeconfig: /etc/kubernetes/pki/kube-proxy.kubeconfig
    clusterCIDR: 10.244.0.0/16
    mode: "iptables"

Writing the systemd configuration file

    $ cat /usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://kubernetes.io/docs/reference/generated/kube-proxy/
    After=network.target

    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        --config=/etc/kubernetes/pki/kube_proxy_config.yaml \
        --hostname-override=k8snode1 \
        $KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target

Starting the Component Services

    $ systemctl enable kubelet kube-proxy
    $ systemctl start kubelet kube-proxy

Deploy the remaining nodes in the same way.
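Before moving on to verification, it can help to wait until both units actually report active. A small generic polling helper, sketched here for the units configured above:

```shell
# wait_active: poll a predicate command until it succeeds, or give up
# after the given number of seconds. Returns 0 on success, 1 on timeout.
wait_active() {
    local cmd="$1" timeout="${2:-60}" waited=0
    until eval "$cmd"; do
        sleep 1
        waited=$((waited + 1))
        if [ "$waited" -ge "$timeout" ]; then
            return 1
        fi
    done
    return 0
}

# Usage on a node (assumes systemd is managing the units defined above):
# wait_active "systemctl is-active --quiet kubelet" 120
# wait_active "systemctl is-active --quiet kube-proxy" 120
```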

Verifying the Cluster Status

Wait a few minutes, then check the node status with the following command:

    $ kubectl get nodes --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
    NAME       STATUS   ROLES    AGE   VERSION
    k8snode1   Ready    <none>   17h   v1.20.2
    k8snode2   Ready    <none>   19m   v1.20.2
    k8snode3   Ready    <none>   12m   v1.20.2

Deploying coredns

coredns can be deployed on a node or on the master node; here it is deployed on the node k8snode1.

Writing the coredns configuration file

    $ cat /etc/kubernetes/pki/dns/Corefile
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            endpoint https://192.168.122.154:6443
            tls /etc/kubernetes/pki/admin.pem /etc/kubernetes/pki/admin-key.pem /etc/kubernetes/pki/ca.pem
            kubeconfig /etc/kubernetes/pki/admin.kubeconfig default
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }

Notes:

  • Listen on port 53;
  • Configure the kubernetes plugin: the certificates and the kube api URL (the tls directive takes the cert, key, and CA cert, in that order).

Preparing the systemd service file

    $ cat /usr/lib/systemd/system/coredns.service
    [Unit]
    Description=Kubernetes Core DNS server
    Documentation=https://github.com/coredns/coredns
    After=network.target

    [Service]
    ExecStart=bash -c "KUBE_DNS_SERVICE_HOST=10.32.0.10 coredns -conf /etc/kubernetes/pki/dns/Corefile"
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target

Starting the service

    $ systemctl enable coredns
    $ systemctl start coredns

Creating the Service object for coredns

    $ cat coredns_server.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "CoreDNS"
    spec:
      clusterIP: 10.32.0.10
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: metrics
        port: 9153
        protocol: TCP

Creating the Endpoints object for coredns

    $ cat coredns_ep.yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: kube-dns
      namespace: kube-system
    subsets:
    - addresses:
      - ip: 192.168.122.157
      ports:
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: dns
        port: 53
        protocol: UDP
      - name: metrics
        port: 9153
        protocol: TCP
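The two objects above still need to be created in the cluster. A sketch, assuming the admin kubeconfig from the control-plane setup and kubectl on the current machine:

```shell
# Create the Service and Endpoints objects defined above. The admin
# kubeconfig path follows the control-plane section of this document.
KUBECONFIG_FILE=/etc/kubernetes/pki/admin.kubeconfig

if command -v kubectl >/dev/null 2>&1 && [ -f "$KUBECONFIG_FILE" ]; then
    kubectl apply -f coredns_server.yaml --kubeconfig "$KUBECONFIG_FILE"
    kubectl apply -f coredns_ep.yaml --kubeconfig "$KUBECONFIG_FILE"
else
    echo "kubectl or $KUBECONFIG_FILE not available, skipping"
fi
```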

Confirming the coredns service

    # Check the Service object
    $ kubectl get service -n kube-system kube-dns
    NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
    kube-dns   ClusterIP   10.32.0.10   <none>        53/UDP,53/TCP,9153/TCP   51m
    # Check the Endpoints object
    $ kubectl get endpoints -n kube-system kube-dns
    NAME       ENDPOINTS                                                     AGE
    kube-dns   192.168.122.157:53,192.168.122.157:53,192.168.122.157:9153   52m
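Finally, name resolution can be spot-checked by querying the kube-dns ClusterIP directly. A sketch, assuming dig (from bind-utils) is installed; since the ClusterIP is routed by kube-proxy, run this on one of the nodes:

```shell
# Query the kube-dns ClusterIP for the kubernetes service record.
# 10.32.0.10 is the Service ClusterIP configured above.
DNS_IP=10.32.0.10

if command -v dig >/dev/null 2>&1; then
    dig +short +time=2 +tries=1 @"$DNS_IP" \
        kubernetes.default.svc.cluster.local A \
        || echo "no answer (is kube-proxy routing the ClusterIP?)"
else
    echo "dig not found, skipping"
fi
```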