Set up Konnectivity Service

The Konnectivity service provides a TCP level proxy for control plane to cluster communication.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube.

Configure the Konnectivity service

The following steps require an egress configuration, for example:

admin/konnectivity/egress-selector-configuration.yaml

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# Since we want to control the egress traffic to the cluster, we use the
# "cluster" as the name. Other supported values are "etcd", and "master".
- name: cluster
  connection:
    # This controls the protocol between the API Server and the Konnectivity
    # server. Supported values are "GRPC" and "HTTPConnect". There is no
    # end user visible difference between the two modes. You need to set the
    # Konnectivity server to work in the same mode.
    proxyProtocol: GRPC
    transport:
      # This controls what transport the API Server uses to communicate with the
      # Konnectivity server. UDS is recommended if the Konnectivity server
      # locates on the same machine as the API Server. You need to configure the
      # Konnectivity server to listen on the same UDS socket.
      # The other supported transport is "tcp". You will need to set up TLS
      # config to secure the TCP transport.
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```
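The comments above mention `tcp` as the other supported transport. As a hedged sketch only (the server URL and certificate file paths below are placeholders, not values from this guide), an equivalent TCP-based configuration might look like the following; note that the GRPC proxy protocol requires UDS, so a TCP transport is paired with `HTTPConnect`:

```yaml
# Hypothetical TCP transport variant of the egress configuration.
# The url and the certificate/key paths are assumptions for illustration.
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    proxyProtocol: HTTPConnect
    transport:
      tcp:
        url: https://127.0.0.1:8131
        tlsConfig:
          caBundle: /etc/kubernetes/pki/konnectivity-ca.crt
          clientCert: /etc/kubernetes/pki/konnectivity-client.crt
          clientKey: /etc/kubernetes/pki/konnectivity-client.key
```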

You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:

Make sure that Service Account Token Volume Projection feature is enabled. It has been enabled by default since Kubernetes v1.20.

  1. Create an egress configuration file such as admin/konnectivity/egress-selector-configuration.yaml.
  2. Set the --egress-selector-config-file flag of the API Server to the path of your API Server egress configuration file.
  3. If you use UDS connection, add the volumes config to the kube-apiserver:

```yaml
spec:
  containers:
    volumeMounts:
    - name: konnectivity-uds
      mountPath: /etc/kubernetes/konnectivity-server
      readOnly: false
  volumes:
  - name: konnectivity-uds
    hostPath:
      path: /etc/kubernetes/konnectivity-server
      type: DirectoryOrCreate
```
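For reference, steps 2 and 3 might be wired into a kube-apiserver static Pod manifest as sketched below. The on-host path of the egress configuration file is an assumption; adjust it to wherever you actually placed the file:

```yaml
# Hypothetical fragment of /etc/kubernetes/manifests/kube-apiserver.yaml.
# The egress-selector-configuration.yaml path is an assumed location.
spec:
  containers:
  - command:
    - kube-apiserver
    - --egress-selector-config-file=/etc/kubernetes/konnectivity-server/egress-selector-configuration.yaml
    # ... other kube-apiserver flags unchanged ...
```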

Generate or obtain a certificate and kubeconfig for konnectivity-server. For example, you can use the OpenSSL command line tool to issue an X.509 certificate, using the cluster CA certificate /etc/kubernetes/pki/ca.crt from a control-plane host.

```shell
openssl req -subj "/CN=system:konnectivity-server" -new -newkey rsa:2048 -nodes -keyout konnectivity.key -out konnectivity.csr
openssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256
SERVER=$(kubectl config view -o jsonpath='{.clusters..server}')
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-cluster kubernetes --server "$SERVER" --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-context system:konnectivity-server@kubernetes --cluster kubernetes --user system:konnectivity-server
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config use-context system:konnectivity-server@kubernetes
rm -f konnectivity.crt konnectivity.key konnectivity.csr
```
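The subject CN of the issued certificate must be exactly system:konnectivity-server, because the ClusterRoleBinding created later grants permissions to that user name. As a self-contained sanity-check sketch, the following signs a certificate with the same flow against a throwaway CA (which stands in for your real /etc/kubernetes/pki/ca.crt) and then inspects the subject:

```shell
#!/bin/sh
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# Throwaway CA standing in for the cluster CA at /etc/kubernetes/pki/ca.crt.
openssl req -x509 -new -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kubernetes" -keyout ca.key -out ca.crt

# Same CSR and signing flow as above, against the throwaway CA.
openssl req -subj "/CN=system:konnectivity-server" -new -newkey rsa:2048 -nodes \
  -keyout konnectivity.key -out konnectivity.csr
openssl x509 -req -in konnectivity.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out konnectivity.crt -days 1 -sha256

# The subject CN printed here is the identity that RBAC authorizes.
openssl x509 -noout -subject -in konnectivity.crt
```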

Next, you need to deploy the Konnectivity server and agents. kubernetes-sigs/apiserver-network-proxy is a reference implementation.

Deploy the Konnectivity server on your control plane node. The provided konnectivity-server.yaml manifest assumes that the Kubernetes components are deployed as static Pods in your cluster. If not, you can deploy the Konnectivity server as a DaemonSet.

admin/konnectivity/konnectivity-server.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: konnectivity-server
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical
  hostNetwork: true
  containers:
  - name: konnectivity-server-container
    image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.16
    command: ["/proxy-server"]
    args: [
      "--logtostderr=true",
      # This needs to be consistent with the value set in egressSelectorConfiguration.
      "--uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket",
      # The following two lines assume the Konnectivity server is
      # deployed on the same machine as the apiserver, and the certs and
      # key of the API Server are at the specified location.
      "--cluster-cert=/etc/kubernetes/pki/apiserver.crt",
      "--cluster-key=/etc/kubernetes/pki/apiserver.key",
      # This needs to be consistent with the value set in egressSelectorConfiguration.
      "--mode=grpc",
      "--server-port=0",
      "--agent-port=8132",
      "--admin-port=8133",
      "--health-port=8134",
      "--agent-namespace=kube-system",
      "--agent-service-account=konnectivity-agent",
      "--kubeconfig=/etc/kubernetes/konnectivity-server.conf",
      "--authentication-audience=system:konnectivity-server"
    ]
    livenessProbe:
      httpGet:
        scheme: HTTP
        host: 127.0.0.1
        port: 8134
        path: /healthz
      initialDelaySeconds: 30
      timeoutSeconds: 60
    ports:
    - name: agentport
      containerPort: 8132
      hostPort: 8132
    - name: adminport
      containerPort: 8133
      hostPort: 8133
    - name: healthport
      containerPort: 8134
      hostPort: 8134
    volumeMounts:
    - name: k8s-certs
      mountPath: /etc/kubernetes/pki
      readOnly: true
    - name: kubeconfig
      mountPath: /etc/kubernetes/konnectivity-server.conf
      readOnly: true
    - name: konnectivity-uds
      mountPath: /etc/kubernetes/konnectivity-server
      readOnly: false
  volumes:
  - name: k8s-certs
    hostPath:
      path: /etc/kubernetes/pki
  - name: kubeconfig
    hostPath:
      path: /etc/kubernetes/konnectivity-server.conf
      type: FileOrCreate
  - name: konnectivity-uds
    hostPath:
      path: /etc/kubernetes/konnectivity-server
      type: DirectoryOrCreate
```

Then deploy the Konnectivity agents in your cluster:

admin/konnectivity/konnectivity-agent.yaml

```yaml
apiVersion: apps/v1
# Alternatively, you can deploy the agents as Deployments. It is not necessary
# to have an agent on each node.
kind: DaemonSet
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: konnectivity-agent
  namespace: kube-system
  name: konnectivity-agent
spec:
  selector:
    matchLabels:
      k8s-app: konnectivity-agent
  template:
    metadata:
      labels:
        k8s-app: konnectivity-agent
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
        - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.16
          name: konnectivity-agent
          command: ["/proxy-agent"]
          args: [
            "--logtostderr=true",
            "--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
            # Since the konnectivity server runs with hostNetwork=true,
            # this is the IP address of the master machine.
            "--proxy-server-host=35.225.206.7",
            "--proxy-server-port=8132",
            "--admin-server-port=8133",
            "--health-server-port=8134",
            "--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token"
          ]
          volumeMounts:
            - mountPath: /var/run/secrets/tokens
              name: konnectivity-agent-token
          livenessProbe:
            httpGet:
              port: 8134
              path: /healthz
            initialDelaySeconds: 15
            timeoutSeconds: 15
      serviceAccountName: konnectivity-agent
      volumes:
        - name: konnectivity-agent-token
          projected:
            sources:
              - serviceAccountToken:
                  path: konnectivity-agent-token
                  audience: system:konnectivity-server
```
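As the comment in the manifest notes, the agents can also run as a Deployment instead of a DaemonSet, since one agent per node is not required. A minimal sketch of that variant is below; the replica count is an assumption chosen for illustration, and the Pod template is the same one used in the DaemonSet above:

```yaml
# Hypothetical Deployment variant of the agent manifest.
# replicas: 3 is an assumed value, not taken from this guide.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: konnectivity-agent
  namespace: kube-system
  labels:
    k8s-app: konnectivity-agent
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: konnectivity-agent
  template:
    # ... identical to the Pod template in the DaemonSet manifest ...
```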

Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:

admin/konnectivity/konnectivity-rbac.yaml

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:konnectivity-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:konnectivity-server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: konnectivity-agent
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
```

Last modified April 14, 2022 at 11:35 PM PST: [zh] update outdate setup-konnectivity file (9b4516d8f)