Service Topology
Service Topology enables a Service to route traffic based on the node topology of the cluster. For example, a service can specify that traffic be preferentially routed to endpoints that are on the same node as the client, or in the same nodepool.
The following picture shows how service topology works in general.

To use service topology, the `EndpointSliceProxying` feature gate must be enabled, and kube-proxy must be configured to connect to Yurthub instead of the API server.
You can direct traffic by setting the `openyurt.io/topologyKeys` annotation on a Service as follows. If the annotation is not specified or is empty, no topology constraints are applied.

| Annotation key | Annotation value | Description |
|---|---|---|
| openyurt.io/topologyKeys | kubernetes.io/hostname | Route traffic only to endpoints on the same node. |
| openyurt.io/topologyKeys | kubernetes.io/zone or openyurt.io/nodepool | Route traffic only to endpoints in the same nodepool. |
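For example, to keep a Service's traffic inside the client's nodepool, add the annotation under the Service's metadata. A minimal sketch (the name `my-svc` and selector `app: my-app` are placeholders, not part of this tutorial):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc                 # placeholder name
  annotations:
    # route traffic only to endpoints in the same nodepool as the client
    openyurt.io/topologyKeys: openyurt.io/nodepool
spec:
  selector:
    app: my-app                # placeholder selector
  ports:
    - port: 80
```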
Prerequisites
- Kubernetes v1.18 or above, since the EndpointSlice resource must be supported.
- Yurt-app-manager is deployed in the cluster.
How to use
Ensure that the Kubernetes version is v1.18+.
```
$ kubectl get node
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   6m21s   v1.18.19
kind-worker          Ready    <none>   5m42s   v1.18.19
kind-worker2         Ready    <none>   5m42s   v1.18.19
```
Ensure that yurt-app-manager is deployed in the cluster.
```
$ kubectl get pod -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-jxvnw                     1/1     Running   0          7m28s
coredns-66bff467f8-lk8v5                     1/1     Running   0          7m28s
etcd-kind-control-plane                      1/1     Running   0          7m39s
kindnet-5dpxt                                1/1     Running   0          7m28s
kindnet-ckz88                                1/1     Running   0          7m10s
kindnet-sqxs7                                1/1     Running   0          7m10s
kube-apiserver-kind-control-plane            1/1     Running   0          7m39s
kube-controller-manager-kind-control-plane   1/1     Running   0          5m38s
kube-proxy-ddgjt                             1/1     Running   0          7m28s
kube-proxy-j25kr                             1/1     Running   0          7m10s
kube-proxy-jt9cw                             1/1     Running   0          7m10s
kube-scheduler-kind-control-plane            1/1     Running   0          7m39s
yurt-app-manager-699ffdcb78-8m9sf            1/1     Running   0          37s
yurt-app-manager-699ffdcb78-fdqmq            1/1     Running   0          37s
yurt-controller-manager-6c95788bf-jrqts      1/1     Running   0          6m17s
yurt-hub-kind-control-plane                  1/1     Running   0          3m36s
yurt-hub-kind-worker                         1/1     Running   0          4m50s
yurt-hub-kind-worker2                        1/1     Running   0          4m50s
```
Configure kube-proxy
To use service topology, the `EndpointSliceProxying` feature gate must be enabled, and kube-proxy must be configured to connect to Yurthub instead of the API server.
```
$ kubectl edit cm -n kube-system kube-proxy
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    featureGates: # 1. enable EndpointSliceProxying feature gate.
      EndpointSliceProxying: true
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      #kubeconfig: /var/lib/kube-proxy/kubeconfig.conf # 2. comment this line.
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
```
Restart kube-proxy.
```
$ kubectl delete pod --selector k8s-app=kube-proxy -n kube-system
pod "kube-proxy-cbsmj" deleted
pod "kube-proxy-cqwcs" deleted
pod "kube-proxy-m9dgk" deleted
```
Create NodePools
- Create test nodepools.
```
$ cat << EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: beijing
spec:
  type: Cloud
---
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: hangzhou
spec:
  type: Edge
  annotations:
    apps.openyurt.io/example: test-hangzhou
  labels:
    apps.openyurt.io/example: test-hangzhou
---
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: shanghai
spec:
  type: Edge
  annotations:
    apps.openyurt.io/example: test-shanghai
  labels:
    apps.openyurt.io/example: test-shanghai
EOF
```
- Add nodes to the nodepools.
```
$ kubectl label node kind-control-plane apps.openyurt.io/desired-nodepool=beijing
node/kind-control-plane labeled

$ kubectl label node kind-worker apps.openyurt.io/desired-nodepool=hangzhou
node/kind-worker labeled

$ kubectl label node kind-worker2 apps.openyurt.io/desired-nodepool=shanghai
node/kind-worker2 labeled
```
- Get the nodepools.
```
$ kubectl get np
NAME       TYPE    READYNODES   NOTREADYNODES   AGE
beijing    Cloud   1            0               63s
hangzhou   Edge    1            0               63s
shanghai   Edge    1            0               63s
```
Create UnitedDeployment
- Create test united-deployment1. To facilitate testing, we use a `serve_hostname` image: each time port 9376 is accessed, the `hostname` container returns its own hostname.
$ cat << EOF | kubectl apply -f -apiVersion: apps.openyurt.io/v1alpha1kind: UnitedDeploymentmetadata:labels:controller-tools.k8s.io: "1.0"name: united-deployment1spec:selector:matchLabels:app: united-deployment1workloadTemplate:deploymentTemplate:metadata:labels:app: united-deployment1spec:template:metadata:labels:app: united-deployment1spec:containers:- name: hostnameimage: mirrorgooglecontainers/serve_hostnameports:- containerPort: 9376protocol: TCPtopology:pools:- name: hangzhounodeSelectorTerm:matchExpressions:- key: apps.openyurt.io/nodepooloperator: Invalues:- hangzhoureplicas: 2- name: shanghainodeSelectorTerm:matchExpressions:- key: apps.openyurt.io/nodepooloperator: Invalues:- shanghaireplicas: 2revisionHistoryLimit: 5EOF
- Create test united-deployment2. Here we use an `nginx` image, in order to access the `hostname` pods created by united-deployment1 above.
$ cat << EOF | kubectl apply -f -apiVersion: apps.openyurt.io/v1alpha1kind: UnitedDeploymentmetadata:labels:controller-tools.k8s.io: "1.0"name: united-deployment2spec:selector:matchLabels:app: united-deployment2workloadTemplate:deploymentTemplate:metadata:labels:app: united-deployment2spec:template:metadata:labels:app: united-deployment2spec:containers:- name: nginximage: nginx:1.19.3ports:- containerPort: 80protocol: TCPtopology:pools:- name: hangzhounodeSelectorTerm:matchExpressions:- key: apps.openyurt.io/nodepooloperator: Invalues:- hangzhoureplicas: 2- name: shanghainodeSelectorTerm:matchExpressions:- key: apps.openyurt.io/nodepooloperator: Invalues:- shanghaireplicas: 2revisionHistoryLimit: 5EOF
- Get the pods created by the UnitedDeployments.
```
$ kubectl get pod -l "app in (united-deployment1,united-deployment2)" -o wide
NAME                                                 READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
united-deployment1-hangzhou-fv6th-66ff6fd958-f2694   1/1     Running   0          18m   10.244.2.3   kind-worker    <none>           <none>
united-deployment1-hangzhou-fv6th-66ff6fd958-twf95   1/1     Running   0          18m   10.244.2.2   kind-worker    <none>           <none>
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt   1/1     Running   0          18m   10.244.1.3   kind-worker2   <none>           <none>
united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2   1/1     Running   0          18m   10.244.1.2   kind-worker2   <none>           <none>
united-deployment2-hangzhou-lpkzg-6d958b67b6-gf847   1/1     Running   0          15m   10.244.2.4   kind-worker    <none>           <none>
united-deployment2-hangzhou-lpkzg-6d958b67b6-lbnwl   1/1     Running   0          15m   10.244.2.5   kind-worker    <none>           <none>
united-deployment2-shanghai-tqgd4-57f7555494-9jvjb   1/1     Running   0          15m   10.244.1.5   kind-worker2   <none>           <none>
united-deployment2-shanghai-tqgd4-57f7555494-rn8n8   1/1     Running   0          15m   10.244.1.4   kind-worker2   <none>           <none>
```
Create Service with TopologyKeys
```
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: svc-ud1
  annotations:
    openyurt.io/topologyKeys: openyurt.io/nodepool
spec:
  selector:
    app: united-deployment1
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 9376
EOF
```
Create Service without TopologyKeys
```
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: svc-ud1-without-topology
spec:
  selector:
    app: united-deployment1
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 9376
EOF
```
Test Service Topology
We use the nginx pod in the shanghai nodepool to test service topology: when it accesses a service carrying the `openyurt.io/topologyKeys: openyurt.io/nodepool` annotation, its traffic can be routed only to nodes in the shanghai nodepool.
For comparison, we first test the service without the service topology annotation. As we can see, its traffic can be routed to any node.
```
$ kubectl exec -it united-deployment2-shanghai-tqgd4-57f7555494-9jvjb -- bash
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-hangzhou-fv6th-66ff6fd958-twf95
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-hangzhou-fv6th-66ff6fd958-twf95
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
united-deployment1-hangzhou-fv6th-66ff6fd958-f2694
```
Then we test the service with the service topology annotation. As expected, its traffic can be routed only to nodes in the shanghai nodepool.
```
$ kubectl exec -it united-deployment2-shanghai-tqgd4-57f7555494-9jvjb -- bash
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2
root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt
```
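Rather than eyeballing individual curl calls, you can also count how many responses come from shanghai pods. A minimal sketch (`count_shanghai` is a helper defined here for illustration, not part of OpenYurt; the pod name in the commented loop is taken from the output above):

```shell
# Helper: read one pod hostname per line on stdin and count those from
# the shanghai pool (matches the "-shanghai-" segment of the pod name).
count_shanghai() { grep -c -- '-shanghai-'; }

# Against the live cluster: issue 10 requests through the annotated
# service and count the shanghai responses. With the topology annotation
# in effect, all 10 responses should come from the shanghai pool.
# for i in $(seq 1 10); do
#   kubectl exec united-deployment2-shanghai-tqgd4-57f7555494-9jvjb -- curl -s svc-ud1:80
# done | count_shanghai
```

Running the same loop against `svc-ud1-without-topology` should show a mix of pools instead.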
