Service Topology

Service Topology enables a service to route traffic based on the Node topology of the cluster. For example, a service can specify that traffic be preferentially routed to endpoints that are on the same Node as the client, or in the same NodePool.

The following picture shows the general function of the service topology.

(Figure: overview of service topology routing)

To use service topology, the EndpointSliceProxying feature gate must be enabled, and kube-proxy needs to be configured to connect to Yurthub instead of the API server.

You can set the `openyurt.io/topologyKeys` annotation on a Service to direct its traffic as follows. If the annotation is not specified or its value is empty, no topology constraints are applied.

| Annotation Key | Annotation Value | Explanation |
| --- | --- | --- |
| `openyurt.io/topologyKeys` | `kubernetes.io/hostname` | Route traffic only to endpoints on the same node. |
| `openyurt.io/topologyKeys` | `kubernetes.io/zone` or `openyurt.io/nodepool` | Route traffic only to endpoints in the same nodepool. |
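For example, a minimal sketch of a Service that keeps traffic on the client's own node would use the `kubernetes.io/hostname` value (the service name and selector below are placeholders for illustration only):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name, for illustration only
  annotations:
    # route traffic only to endpoints on the same node as the client
    openyurt.io/topologyKeys: kubernetes.io/hostname
spec:
  selector:
    app: my-app           # hypothetical selector
  ports:
    - port: 80
      targetPort: 8080
```

The walkthrough below uses the `openyurt.io/nodepool` value instead, to keep traffic inside a nodepool.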

Prerequisites

  1. Kubernetes v1.18 or above, since the EndpointSlice resource needs to be supported.
  2. Yurt-app-manager is deployed in the cluster.

How to use

Ensure that the Kubernetes version is v1.18+.

```bash
$ kubectl get node
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   6m21s   v1.18.19
kind-worker          Ready    <none>   5m42s   v1.18.19
kind-worker2         Ready    <none>   5m42s   v1.18.19
```

Ensure that yurt-app-manager is deployed in the cluster.

```bash
$ kubectl get pod -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-jxvnw                     1/1     Running   0          7m28s
coredns-66bff467f8-lk8v5                     1/1     Running   0          7m28s
etcd-kind-control-plane                      1/1     Running   0          7m39s
kindnet-5dpxt                                1/1     Running   0          7m28s
kindnet-ckz88                                1/1     Running   0          7m10s
kindnet-sqxs7                                1/1     Running   0          7m10s
kube-apiserver-kind-control-plane            1/1     Running   0          7m39s
kube-controller-manager-kind-control-plane   1/1     Running   0          5m38s
kube-proxy-ddgjt                             1/1     Running   0          7m28s
kube-proxy-j25kr                             1/1     Running   0          7m10s
kube-proxy-jt9cw                             1/1     Running   0          7m10s
kube-scheduler-kind-control-plane            1/1     Running   0          7m39s
yurt-app-manager-699ffdcb78-8m9sf            1/1     Running   0          37s
yurt-app-manager-699ffdcb78-fdqmq            1/1     Running   0          37s
yurt-controller-manager-6c95788bf-jrqts      1/1     Running   0          6m17s
yurt-hub-kind-control-plane                  1/1     Running   0          3m36s
yurt-hub-kind-worker                         1/1     Running   0          4m50s
yurt-hub-kind-worker2                        1/1     Running   0          4m50s
```

Configure kube-proxy

To use service topology, the EndpointSliceProxying feature gate must be enabled, and kube-proxy needs to be configured to connect to Yurthub instead of the API server.

```bash
$ kubectl edit cm -n kube-system kube-proxy
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    featureGates: # 1. enable EndpointSliceProxying feature gate.
      EndpointSliceProxying: true
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      #kubeconfig: /var/lib/kube-proxy/kubeconfig.conf # 2. comment this line.
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
```

Restart kube-proxy.

```bash
$ kubectl delete pod --selector k8s-app=kube-proxy -n kube-system
pod "kube-proxy-cbsmj" deleted
pod "kube-proxy-cqwcs" deleted
pod "kube-proxy-m9dgk" deleted
```
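To confirm that the recreated kube-proxy pods pick up the new configuration, you can list them again with the same `k8s-app=kube-proxy` label; a quick sketch (the pod name in the logs command is a placeholder to replace with one from your own listing):

```bash
# Wait until all recreated kube-proxy pods are Running.
$ kubectl get pod -n kube-system -l k8s-app=kube-proxy

# Optionally inspect one pod's logs to make sure it started cleanly
# (replace kube-proxy-xxxxx with a name from the listing above).
$ kubectl logs -n kube-system kube-proxy-xxxxx
```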

Create NodePools

  • Create test nodepools.
```bash
$ cat << EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: beijing
spec:
  type: Cloud
---
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: hangzhou
spec:
  type: Edge
  annotations:
    apps.openyurt.io/example: test-hangzhou
  labels:
    apps.openyurt.io/example: test-hangzhou
---
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: shanghai
spec:
  type: Edge
  annotations:
    apps.openyurt.io/example: test-shanghai
  labels:
    apps.openyurt.io/example: test-shanghai
EOF
```
  • Add nodes to the nodepool.
```bash
$ kubectl label node kind-control-plane apps.openyurt.io/desired-nodepool=beijing
node/kind-control-plane labeled
$ kubectl label node kind-worker apps.openyurt.io/desired-nodepool=hangzhou
node/kind-worker labeled
$ kubectl label node kind-worker2 apps.openyurt.io/desired-nodepool=shanghai
node/kind-worker2 labeled
```
  • Get NodePool.
```bash
$ kubectl get np
NAME       TYPE    READYNODES   NOTREADYNODES   AGE
beijing    Cloud   1            0               63s
hangzhou   Edge    1            0               63s
shanghai   Edge    1            0               63s
```
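  • Once a node has joined its pool, it also carries the `apps.openyurt.io/nodepool` label, which is the key used by the UnitedDeployment node selectors below. A quick way to double-check the assignment is to print that label as a column:

```bash
# -L adds a column showing the value of the given label on each node.
$ kubectl get node -L apps.openyurt.io/nodepool
```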

Create UnitedDeployment

  • Create test united-deployment1. To facilitate testing, we use a serve_hostname image. Each time port 9376 is accessed, the hostname container returns its own hostname.
```bash
$ cat << EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: united-deployment1
spec:
  selector:
    matchLabels:
      app: united-deployment1
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: united-deployment1
      spec:
        template:
          metadata:
            labels:
              app: united-deployment1
          spec:
            containers:
              - name: hostname
                image: mirrorgooglecontainers/serve_hostname
                ports:
                  - containerPort: 9376
                    protocol: TCP
  topology:
    pools:
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - hangzhou
        replicas: 2
      - name: shanghai
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - shanghai
        replicas: 2
  revisionHistoryLimit: 5
EOF
```
  • Create test united-deployment2. Here we use the nginx image, in order to access the hostname pods created by united-deployment1 above.
```bash
$ cat << EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: united-deployment2
spec:
  selector:
    matchLabels:
      app: united-deployment2
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: united-deployment2
      spec:
        template:
          metadata:
            labels:
              app: united-deployment2
          spec:
            containers:
              - name: nginx
                image: nginx:1.19.3
                ports:
                  - containerPort: 80
                    protocol: TCP
  topology:
    pools:
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - hangzhou
        replicas: 2
      - name: shanghai
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - shanghai
        replicas: 2
  revisionHistoryLimit: 5
EOF
```
  • Get the pods created by the UnitedDeployments.
```bash
$ kubectl get pod -l "app in (united-deployment1,united-deployment2)" -owide
NAME                                                 READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
united-deployment1-hangzhou-fv6th-66ff6fd958-f2694   1/1     Running   0          18m   10.244.2.3   kind-worker    <none>           <none>
united-deployment1-hangzhou-fv6th-66ff6fd958-twf95   1/1     Running   0          18m   10.244.2.2   kind-worker    <none>           <none>
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt   1/1     Running   0          18m   10.244.1.3   kind-worker2   <none>           <none>
united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2   1/1     Running   0          18m   10.244.1.2   kind-worker2   <none>           <none>
united-deployment2-hangzhou-lpkzg-6d958b67b6-gf847   1/1     Running   0          15m   10.244.2.4   kind-worker    <none>           <none>
united-deployment2-hangzhou-lpkzg-6d958b67b6-lbnwl   1/1     Running   0          15m   10.244.2.5   kind-worker    <none>           <none>
united-deployment2-shanghai-tqgd4-57f7555494-9jvjb   1/1     Running   0          15m   10.244.1.5   kind-worker2   <none>           <none>
united-deployment2-shanghai-tqgd4-57f7555494-rn8n8   1/1     Running   0          15m   10.244.1.4   kind-worker2   <none>           <none>
```

Create Service with TopologyKeys

```bash
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: svc-ud1
  annotations:
    openyurt.io/topologyKeys: openyurt.io/nodepool
spec:
  selector:
    app: united-deployment1
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 9376
EOF
```

Create Service without TopologyKeys

```bash
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: svc-ud1-without-topology
spec:
  selector:
    app: united-deployment1
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 9376
EOF
```
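Both services select the same pods, so they are backed by the same EndpointSlices in the API server; the topology filtering for svc-ud1 happens in Yurthub on each node before kube-proxy sees the endpoints. If you want to look at the unfiltered endpoints stored in the cluster, a quick check (assuming the standard `kubernetes.io/service-name` label that Kubernetes sets on EndpointSlices):

```bash
$ kubectl get endpointslice -l kubernetes.io/service-name=svc-ud1
$ kubectl get endpointslice -l kubernetes.io/service-name=svc-ud1-without-topology
```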

Test Service Topology

We use an nginx pod in the shanghai nodepool to test service topology: when it accesses a service with the `openyurt.io/topologyKeys: openyurt.io/nodepool` annotation, its traffic should only be routed to nodes in the shanghai nodepool.

For comparison, we first test the service without the service topology annotation. As we can see, its traffic can be routed to any node.

```bash
$ kubectl exec -it united-deployment2-shanghai-tqgd4-57f7555494-9jvjb bash
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-hangzhou-fv6th-66ff6fd958-twf95
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/#
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/#
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-hangzhou-fv6th-66ff6fd958-twf95
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/#
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-hangzhou-fv6th-66ff6fd958-f2694
```

Then we test the service with the service topology annotation. As expected, its traffic is only routed to nodes in the shanghai nodepool.

```bash
$ kubectl exec -it united-deployment2-shanghai-tqgd4-57f7555494-9jvjb bash
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/#
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/#
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/#
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt
```