Service Topology

Using service topology to achieve closed-loop edge traffic

Service Topology enables a Service to route traffic based on the node topology of the cluster. For example, a Service can specify that traffic be preferentially routed to endpoints on the same node, or in the same node pool, as the client pod.

Topology-aware routing is configured by adding annotations to a native Kubernetes Service. The supported parameters are as follows:

| Annotation Key | Annotation Value | Description |
| --- | --- | --- |
| openyurt.io/topologyKeys | kubernetes.io/hostname | Traffic is routed only to endpoints on the same node |
| openyurt.io/topologyKeys | openyurt.io/nodepool or kubernetes.io/zone | Traffic is routed only to endpoints in the same node pool |
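For instance, annotating a Service as in the sketch below keeps its traffic within the client pod's node pool. This is only a minimal sketch: the Service name my-svc and the selector app: my-app are hypothetical, and a complete working example appears later in this walkthrough.

```bash
# Minimal sketch: a native Service annotated for node-pool-scoped routing.
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-svc                                      # hypothetical name
  annotations:
    openyurt.io/topologyKeys: openyurt.io/nodepool  # use kubernetes.io/hostname for node scope
spec:
  selector:
    app: my-app                                     # hypothetical selector
  ports:
    - port: 80
EOF
```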

The figure below shows an example of the service topology feature. service-ud1 carries the annotation `openyurt.io/topologyKeys: openyurt.io/nodepool`. When pod6 accesses service-ud1, its traffic goes only to pod1 and pod2, because pod6 is located on edge node2, which belongs to the hangzhou node pool. The traffic never crosses node pools, so pod3 and pod4 receive none of it. Traffic is thus kept closed within a single node pool.

(figure: service-topology)

Prerequisites

  1. Kubernetes v1.18 or later, since the EndpointSlice resource must be supported (see the check after this list).
  2. Yurt-app-manager is deployed in the cluster.
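If you are unsure whether your cluster already serves the EndpointSlice API, a quick sanity check is to list the resources in the discovery.k8s.io API group (the exact output columns vary slightly across kubectl versions):

```bash
# EndpointSlice should be listed under the discovery.k8s.io API group.
$ kubectl api-resources --api-group=discovery.k8s.io
```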

Usage Demonstration

Make sure the Kubernetes version is v1.18 or later.

```bash
$ kubectl get node
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   6m21s   v1.18.19
kind-worker          Ready    <none>   5m42s   v1.18.19
kind-worker2         Ready    <none>   5m42s   v1.18.19
```

Make sure Yurt-app-manager is deployed in the cluster.

```bash
$ kubectl get pod -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-jxvnw                     1/1     Running   0          7m28s
coredns-66bff467f8-lk8v5                     1/1     Running   0          7m28s
etcd-kind-control-plane                      1/1     Running   0          7m39s
kindnet-5dpxt                                1/1     Running   0          7m28s
kindnet-ckz88                                1/1     Running   0          7m10s
kindnet-sqxs7                                1/1     Running   0          7m10s
kube-apiserver-kind-control-plane            1/1     Running   0          7m39s
kube-controller-manager-kind-control-plane   1/1     Running   0          5m38s
kube-proxy-ddgjt                             1/1     Running   0          7m28s
kube-proxy-j25kr                             1/1     Running   0          7m10s
kube-proxy-jt9cw                             1/1     Running   0          7m10s
kube-scheduler-kind-control-plane            1/1     Running   0          7m39s
yurt-app-manager-699ffdcb78-8m9sf            1/1     Running   0          37s
yurt-app-manager-699ffdcb78-fdqmq            1/1     Running   0          37s
yurt-controller-manager-6c95788bf-jrqts      1/1     Running   0          6m17s
yurt-hub-kind-control-plane                  1/1     Running   0          3m36s
yurt-hub-kind-worker                         1/1     Running   0          4m50s
yurt-hub-kind-worker2                        1/1     Running   0          4m50s
```

Configure kube-proxy

Enable the `EndpointSliceProxying` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) of kube-proxy, and configure kube-proxy to connect to Yurthub.

```bash
$ kubectl edit cm -n kube-system kube-proxy
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    featureGates: # 1. enable EndpointSliceProxying feature gate.
      EndpointSliceProxying: true
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      #kubeconfig: /var/lib/kube-proxy/kubeconfig.conf # 2. comment this line.
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
```

Restart kube-proxy.

```bash
$ kubectl delete pod --selector k8s-app=kube-proxy -n kube-system
pod "kube-proxy-cbsmj" deleted
pod "kube-proxy-cqwcs" deleted
pod "kube-proxy-m9dgk" deleted
```

Create NodePools

  - Create the node pools used for testing.
```bash
$ cat << EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: beijing
spec:
  type: Cloud
---
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: hangzhou
spec:
  type: Edge
  annotations:
    apps.openyurt.io/example: test-hangzhou
  labels:
    apps.openyurt.io/example: test-hangzhou
---
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: shanghai
spec:
  type: Edge
  annotations:
    apps.openyurt.io/example: test-shanghai
  labels:
    apps.openyurt.io/example: test-shanghai
EOF
```
  - Add the master node kind-control-plane to the beijing node pool, the worker node kind-worker to the hangzhou node pool, and kind-worker2 to the shanghai node pool.
```bash
$ kubectl label node kind-control-plane apps.openyurt.io/desired-nodepool=beijing
node/kind-control-plane labeled
$ kubectl label node kind-worker apps.openyurt.io/desired-nodepool=hangzhou
node/kind-worker labeled
$ kubectl label node kind-worker2 apps.openyurt.io/desired-nodepool=shanghai
node/kind-worker2 labeled
```
  - View the node pool information.
```bash
$ kubectl get np
NAME       TYPE    READYNODES   NOTREADYNODES   AGE
beijing    Cloud   1            0               63s
hangzhou   Edge    1            0               63s
shanghai   Edge    1            0               63s
```
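Once a node has joined its pool, Yurt-app-manager also sets the `apps.openyurt.io/nodepool` label on it (the same key the UnitedDeployments below select on). If you want to double-check the assignment, kubectl's `-L` flag shows that label as an extra column:

```bash
# Show which node pool each node has actually joined.
$ kubectl get node -L apps.openyurt.io/nodepool
```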

Create UnitedDeployments

  - Create united-deployment1 for testing. To simplify testing, we use the serve_hostname image: when port 9376 is accessed, the container returns its own hostname.
```bash
$ cat << EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: united-deployment1
spec:
  selector:
    matchLabels:
      app: united-deployment1
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: united-deployment1
      spec:
        template:
          metadata:
            labels:
              app: united-deployment1
          spec:
            containers:
              - name: hostname
                image: mirrorgooglecontainers/serve_hostname
                ports:
                  - containerPort: 9376
                    protocol: TCP
  topology:
    pools:
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - hangzhou
        replicas: 2
      - name: shanghai
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - shanghai
        replicas: 2
  revisionHistoryLimit: 5
EOF
```
  - Create united-deployment2 for testing. Here we use the nginx image in order to access the hostname pods created by united-deployment1.
```bash
$ cat << EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: united-deployment2
spec:
  selector:
    matchLabels:
      app: united-deployment2
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: united-deployment2
      spec:
        template:
          metadata:
            labels:
              app: united-deployment2
          spec:
            containers:
              - name: nginx
                image: nginx:1.19.3
                ports:
                  - containerPort: 80
                    protocol: TCP
  topology:
    pools:
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - hangzhou
        replicas: 2
      - name: shanghai
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - shanghai
        replicas: 2
  revisionHistoryLimit: 5
EOF
```
  - View the pods created by the two UnitedDeployments above.
```bash
$ kubectl get pod -l "app in (united-deployment1,united-deployment2)" -o wide
NAME                                                 READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
united-deployment1-hangzhou-fv6th-66ff6fd958-f2694   1/1     Running   0          18m   10.244.2.3   kind-worker    <none>           <none>
united-deployment1-hangzhou-fv6th-66ff6fd958-twf95   1/1     Running   0          18m   10.244.2.2   kind-worker    <none>           <none>
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt   1/1     Running   0          18m   10.244.1.3   kind-worker2   <none>           <none>
united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2   1/1     Running   0          18m   10.244.1.2   kind-worker2   <none>           <none>
united-deployment2-hangzhou-lpkzg-6d958b67b6-gf847   1/1     Running   0          15m   10.244.2.4   kind-worker    <none>           <none>
united-deployment2-hangzhou-lpkzg-6d958b67b6-lbnwl   1/1     Running   0          15m   10.244.2.5   kind-worker    <none>           <none>
united-deployment2-shanghai-tqgd4-57f7555494-9jvjb   1/1     Running   0          15m   10.244.1.5   kind-worker2   <none>           <none>
united-deployment2-shanghai-tqgd4-57f7555494-rn8n8   1/1     Running   0          15m   10.244.1.4   kind-worker2   <none>           <none>
```

Create a Service with the openyurt.io/topologyKeys Annotation

```bash
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: svc-ud1
  annotations:
    openyurt.io/topologyKeys: openyurt.io/nodepool
spec:
  selector:
    app: united-deployment1
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 9376
EOF
```
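Since topology-aware routing is driven by EndpointSlices, you can also inspect the slices behind the Service via the standard `kubernetes.io/service-name` label (slice names are generated, so your output will differ):

```bash
# List the EndpointSlices backing svc-ud1.
$ kubectl get endpointslices -l kubernetes.io/service-name=svc-ud1
```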

Create a Service without the openyurt.io/topologyKeys Annotation

```bash
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: svc-ud1-without-topology
spec:
  selector:
    app: united-deployment1
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 9376
EOF
```

Test the Service Topology Feature

We verify the service topology feature by accessing the two Services created above from a pod in the shanghai node pool. When the Service carrying the openyurt.io/topologyKeys annotation is accessed, traffic is routed only to nodes in the shanghai node pool.

For comparison, we first test the Service without the openyurt.io/topologyKeys annotation. As the results below show, its traffic can be received by both the hangzhou and the shanghai node pools; it is not confined to a node pool.

```bash
$ kubectl exec -it united-deployment2-shanghai-tqgd4-57f7555494-9jvjb bash
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-hangzhou-fv6th-66ff6fd958-twf95
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-hangzhou-fv6th-66ff6fd958-twf95
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-hangzhou-fv6th-66ff6fd958-f2694
```

Then we test the Service with the openyurt.io/topologyKeys annotation. As the results below show, its traffic is routed only to nodes in the shanghai node pool.

```bash
$ kubectl exec -it united-deployment2-shanghai-tqgd4-57f7555494-9jvjb bash
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt
```
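If you instead need traffic pinned to the client's own node rather than its node pool, the same Service can carry the kubernetes.io/hostname value from the table at the top of this page. The sketch below is not part of the demonstration, and the Service name svc-ud1-node is hypothetical:

```bash
# Sketch: a node-scoped variant of svc-ud1.
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: svc-ud1-node                                  # hypothetical name
  annotations:
    openyurt.io/topologyKeys: kubernetes.io/hostname  # node scope instead of pool scope
spec:
  selector:
    app: united-deployment1
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 9376
EOF
```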