Set up Konnectivity service

The Konnectivity service provides a TCP-level proxy for communication from the control plane to the cluster.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube, or you can use one of the Kubernetes playgrounds.

Configure the Konnectivity service

The following steps require an egress configuration, for example:

admin/konnectivity/egress-selector-configuration.yaml

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# Since we want to control the egress traffic to the cluster, we use the
# "cluster" as the name. Other supported values are "etcd", and "master".
- name: cluster
  connection:
    # This controls the protocol between the API Server and the Konnectivity
    # server. Supported values are "GRPC" and "HTTPConnect". There is no
    # end user visible difference between the two modes. You need to set the
    # Konnectivity server to work in the same mode.
    proxyProtocol: GRPC
    transport:
      # This controls what transport the API Server uses to communicate with the
      # Konnectivity server. UDS is recommended if the Konnectivity server
      # locates on the same machine as the API Server. You need to configure the
      # Konnectivity server to listen on the same UDS socket.
      # The other supported transport is "tcp". You will need to set up TLS
      # config to secure the TCP transport.
      uds:
        udsName: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket
```

You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:

  1. Create an egress configuration file such as admin/konnectivity/egress-selector-configuration.yaml.
  2. Set the --egress-selector-config-file flag of the API Server to the path of your API Server egress configuration file.
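As an illustration, the flag from step 2 might be wired into a kube-apiserver static Pod manifest as sketched below. This is a hypothetical fragment, not part of this guide's manifests; the file paths and volume names are assumptions and must match where you actually placed the egress configuration file on the control plane node:

```yaml
# Hypothetical fragment of a kube-apiserver static Pod manifest
# (e.g. under /etc/kubernetes/manifests/); adjust paths to your layout.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # Point the API server at the egress configuration created in step 1.
    - --egress-selector-config-file=/etc/kubernetes/konnectivity/egress-selector-configuration.yaml
    volumeMounts:
    - name: konnectivity-egress
      mountPath: /etc/kubernetes/konnectivity
      readOnly: true
  volumes:
  - name: konnectivity-egress
    hostPath:
      path: /etc/kubernetes/konnectivity
      type: Directory
```

If you use the UDS transport shown above, the API server container also needs access to the directory holding the Konnectivity server's UDS socket, for example via an additional hostPath mount.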

Next, you need to deploy the Konnectivity server and agents. kubernetes-sigs/apiserver-network-proxy is a reference implementation.

Deploy the Konnectivity server on your control plane node. The konnectivity-server.yaml manifest provided below assumes that the Kubernetes components are deployed as static Pods in your cluster. If that is not the case, you can deploy the Konnectivity server as a DaemonSet.

admin/konnectivity/konnectivity-server.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: konnectivity-server
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical
  hostNetwork: true
  containers:
  - name: konnectivity-server-container
    image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.8
    command: ["/proxy-server"]
    args: [
            "--log-file=/var/log/konnectivity-server.log",
            "--logtostderr=false",
            "--log-file-max-size=0",
            # This needs to be consistent with the value set in egressSelectorConfiguration.
            "--uds-name=/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket",
            # The following two lines assume the Konnectivity server is
            # deployed on the same machine as the apiserver, and the certs and
            # key of the API Server are at the specified location.
            "--cluster-cert=/etc/srv/kubernetes/pki/apiserver.crt",
            "--cluster-key=/etc/srv/kubernetes/pki/apiserver.key",
            # This needs to be consistent with the value set in egressSelectorConfiguration.
            "--mode=grpc",
            "--server-port=0",
            "--agent-port=8132",
            "--admin-port=8133",
            "--agent-namespace=kube-system",
            "--agent-service-account=konnectivity-agent",
            "--kubeconfig=/etc/srv/kubernetes/konnectivity-server/kubeconfig",
            "--authentication-audience=system:konnectivity-server"
          ]
    livenessProbe:
      httpGet:
        scheme: HTTP
        host: 127.0.0.1
        port: 8133
        path: /healthz
      initialDelaySeconds: 30
      timeoutSeconds: 60
    ports:
    - name: agentport
      containerPort: 8132
      hostPort: 8132
    - name: adminport
      containerPort: 8133
      hostPort: 8133
    volumeMounts:
    - name: varlogkonnectivityserver
      mountPath: /var/log/konnectivity-server.log
      readOnly: false
    - name: pki
      mountPath: /etc/srv/kubernetes/pki
      readOnly: true
    - name: konnectivity-uds
      mountPath: /etc/srv/kubernetes/konnectivity-server
      readOnly: false
  volumes:
  - name: varlogkonnectivityserver
    hostPath:
      path: /var/log/konnectivity-server.log
      type: FileOrCreate
  - name: pki
    hostPath:
      path: /etc/srv/kubernetes/pki
  - name: konnectivity-uds
    hostPath:
      path: /etc/srv/kubernetes/konnectivity-server
      type: DirectoryOrCreate
```

Then deploy the Konnectivity agents in your cluster:

admin/konnectivity/konnectivity-agent.yaml

```yaml
apiVersion: apps/v1
# Alternatively, you can deploy the agents as Deployments. It is not necessary
# to have an agent on each node.
kind: DaemonSet
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: konnectivity-agent
  namespace: kube-system
  name: konnectivity-agent
spec:
  selector:
    matchLabels:
      k8s-app: konnectivity-agent
  template:
    metadata:
      labels:
        k8s-app: konnectivity-agent
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.8
        name: konnectivity-agent
        command: ["/proxy-agent"]
        args: [
                "--logtostderr=true",
                "--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
                # Since the konnectivity server runs with hostNetwork=true,
                # this is the IP address of the master machine.
                "--proxy-server-host=35.225.206.7",
                "--proxy-server-port=8132",
                "--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token"
              ]
        volumeMounts:
        - mountPath: /var/run/secrets/tokens
          name: konnectivity-agent-token
        livenessProbe:
          httpGet:
            port: 8093
            path: /healthz
          initialDelaySeconds: 15
          timeoutSeconds: 15
      serviceAccountName: konnectivity-agent
      volumes:
      - name: konnectivity-agent-token
        projected:
          sources:
          - serviceAccountToken:
              path: konnectivity-agent-token
              audience: system:konnectivity-server
```

Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:

admin/konnectivity/konnectivity-rbac.yaml

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:konnectivity-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:konnectivity-server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: konnectivity-agent
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
```
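The ClusterRoleBinding above grants the user system:konnectivity-server the system:auth-delegator role, which the Konnectivity server uses to validate agent tokens via TokenReview. The credentials for that user come from the file referenced by the server's --kubeconfig flag. As a rough sketch only, such a kubeconfig might look like the following; the certificate and CA paths are assumptions, and the client certificate must be issued with a subject identifying the user system:konnectivity-server so that the binding applies:

```yaml
# Hypothetical /etc/srv/kubernetes/konnectivity-server/kubeconfig;
# all file paths here are placeholders for your own PKI layout.
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority: /etc/srv/kubernetes/pki/ca.crt   # assumed CA path
    server: https://127.0.0.1:6443                          # local API server
contexts:
- name: system:konnectivity-server@kubernetes
  context:
    cluster: kubernetes
    user: system:konnectivity-server
current-context: system:konnectivity-server@kubernetes
users:
- name: system:konnectivity-server
  user:
    # Client certificate whose subject identifies system:konnectivity-server.
    client-certificate: /etc/srv/kubernetes/pki/konnectivity-server.crt
    client-key: /etc/srv/kubernetes/pki/konnectivity-server.key
```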