Pod Topology Spread Constraints

FEATURE STATE: Kubernetes v1.16 alpha

This feature is currently in an alpha state, meaning:

  • The version names contain alpha (e.g. v1alpha1).
  • Might be buggy. Enabling the feature may expose bugs. Disabled by default.
  • Support for feature may be dropped at any time without notice.
  • The API may change in incompatible ways in a later software release without notice.
  • Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.

You can use topology spread constraints to control how Pods (the smallest and simplest Kubernetes object; a Pod represents a set of running containers on your cluster) are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.

Prerequisites

Enable Feature Gate

Ensure the EvenPodsSpread feature gate is enabled (it is disabled by default in 1.16). See Feature Gates for an explanation of enabling feature gates. The EvenPodsSpread feature gate must be enabled for the API server (the control plane component that serves the Kubernetes API) and the scheduler (the component that watches newly created Pods that have no node assigned, and selects a node for them to run on).
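As a minimal sketch, assuming you start the control plane binaries with explicit flags (on a kubeadm cluster the equivalent change goes into the static Pod manifests for these components), the gate is switched on with the standard --feature-gates flag on both components:

    # Enable the EvenPodsSpread feature gate on both relevant components.
    # --feature-gates takes a comma-separated list of key=value pairs.
    kube-apiserver --feature-gates=EvenPodsSpread=true <other flags>
    kube-scheduler --feature-gates=EvenPodsSpread=true <other flags>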

Node Labels

Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in. For example, a Node might have labels: node=node1,zone=us-east-1a,region=us-east-1

Suppose you have a 4-node cluster with the following labels:

    NAME    STATUS   ROLES    AGE     VERSION   LABELS
    node1   Ready    <none>   4m26s   v1.16.0   node=node1,zone=zoneA
    node2   Ready    <none>   3m58s   v1.16.0   node=node2,zone=zoneA
    node3   Ready    <none>   3m17s   v1.16.0   node=node3,zone=zoneB
    node4   Ready    <none>   2m43s   v1.16.0   node=node4,zone=zoneB

Then the cluster is logically viewed as below:

    +---------------+---------------+
    |     zoneA     |     zoneB     |
    +-------+-------+-------+-------+
    | node1 | node2 | node3 | node4 |
    +-------+-------+-------+-------+

Instead of manually applying labels, you can also reuse the well-known labels that are created and populated automatically on most clusters.
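If you do want to label nodes manually on a test cluster to match the table above, a minimal sketch using kubectl label (the node names and label values are just the ones from this example) would be:

    # Apply the example labels by hand; adjust node names to your cluster.
    kubectl label nodes node1 node=node1 zone=zoneA
    kubectl label nodes node2 node=node2 zone=zoneA
    kubectl label nodes node3 node=node3 zone=zoneB
    kubectl label nodes node4 node=node4 zone=zoneB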

Spread Constraints for Pods

API

The field pod.spec.topologySpreadConstraints is introduced in 1.16 as below:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
    spec:
      topologySpreadConstraints:
      - maxSkew: <integer>
        topologyKey: <string>
        whenUnsatisfiable: <string>
        labelSelector: <object>

You can define one or multiple topologySpreadConstraints entries to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:

  • maxSkew describes the degree to which Pods may be unevenly distributed. It’s the maximum permitted difference between the number of matching Pods in any two topology domains of a given topology type. It must be greater than zero.
  • topologyKey is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.
  • whenUnsatisfiable indicates how to deal with a Pod if it doesn’t satisfy the spread constraint:
    • DoNotSchedule (default) tells the scheduler not to schedule it.
    • ScheduleAnyway tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
  • labelSelector is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See Label Selectors for more details.

You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints.
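In practice you would usually set this field on a workload's Pod template rather than on a bare Pod. Below is a hedged sketch of the same kind of constraint embedded in a Deployment; the Deployment name, replica count, and image are illustrative only and not part of the feature's API.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp                # illustrative name
    spec:
      replicas: 4
      selector:
        matchLabels:
          foo: bar
      template:
        metadata:
          labels:
            foo: bar
        spec:
          # Same schema as pod.spec.topologySpreadConstraints; each replica's Pod
          # is spread across the "zone" topology with at most a skew of 1.
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: zone
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                foo: bar
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.1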

Example: One TopologySpreadConstraint

Suppose you have a 4-node cluster where 3 Pods labeled foo:bar are located in node1, node2 and node3 respectively (P represents Pod):

    +---------------+---------------+
    |     zoneA     |     zoneB     |
    +-------+-------+-------+-------+
    | node1 | node2 | node3 | node4 |
    +-------+-------+-------+-------+
    |   P   |   P   |   P   |       |
    +-------+-------+-------+-------+

If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as:

pods/topology-spread-constraints/one-constraint.yaml

    kind: Pod
    apiVersion: v1
    metadata:
      name: mypod
      labels:
        foo: bar
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1

topologyKey: zone implies the even distribution will only be applied to nodes which have the label key “zone” present (with any value). whenUnsatisfiable: DoNotSchedule tells the scheduler to let the incoming Pod stay pending if it can’t satisfy the constraint.

If the scheduler placed this incoming Pod into “zoneA”, the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates maxSkew: 1. In this example, the incoming Pod can only be placed onto “zoneB”:

    +---------------+---------------+      +---------------+---------------+
    |     zoneA     |     zoneB     |      |     zoneA     |     zoneB     |
    +-------+-------+-------+-------+      +-------+-------+-------+-------+
    | node1 | node2 | node3 | node4 |  OR  | node1 | node2 | node3 | node4 |
    +-------+-------+-------+-------+      +-------+-------+-------+-------+
    |   P   |   P   |   P   |   P   |      |   P   |   P   |  P P  |       |
    +-------+-------+-------+-------+      +-------+-------+-------+-------+

You can tweak the Pod spec to meet various kinds of requirements:

  • Change maxSkew to a bigger value like “2” so that the incoming Pod can be placed onto “zoneA” as well.
  • Change topologyKey to “node” so as to distribute the Pods evenly across nodes instead of zones. In the above example, if maxSkew remains “1”, the incoming Pod can only be placed onto “node4”.
  • Change whenUnsatisfiable: DoNotSchedule to whenUnsatisfiable: ScheduleAnyway to ensure the incoming Pod is always schedulable (assuming other scheduling APIs are satisfied). However, it is preferred to be placed onto the topology domain which has fewer matching Pods. (Be aware that this preference is jointly normalized with other internal scheduling priorities such as resource usage ratio.) A sketch of this variant follows the list.
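As a sketch of the last tweak, this is one-constraint.yaml with only whenUnsatisfiable switched to ScheduleAnyway; every other field is unchanged from the example above.

    kind: Pod
    apiVersion: v1
    metadata:
      name: mypod
      labels:
        foo: bar
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: zone
        whenUnsatisfiable: ScheduleAnyway   # skew becomes a preference instead of a hard filter
        labelSelector:
          matchLabels:
            foo: bar
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1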

Example: Multiple TopologySpreadConstraints

This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled foo:bar are located in node1, node2 and node3 respectively (P represents Pod):

    +---------------+---------------+
    |     zoneA     |     zoneB     |
    +-------+-------+-------+-------+
    | node1 | node2 | node3 | node4 |
    +-------+-------+-------+-------+
    |   P   |   P   |   P   |       |
    +-------+-------+-------+-------+

You can use 2 TopologySpreadConstraints to control the Pods spreading on both zone and node:

pods/topology-spread-constraints/two-constraints.yaml

    kind: Pod
    apiVersion: v1
    metadata:
      name: mypod
      labels:
        foo: bar
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar
      - maxSkew: 1
        topologyKey: node
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1

In this case, to match the first constraint, the incoming Pod can only be placed onto “zoneB”; while in terms of the second constraint, the incoming Pod can only be placed onto “node4”. The results of the two constraints are ANDed, so the only viable option is to place the Pod onto “node4”.

Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:

    +---------------+-------+
    |     zoneA     | zoneB |
    +-------+-------+-------+
    | node1 | node2 | node3 |
    +-------+-------+-------+
    |  P P  |   P   |  P P  |
    +-------+-------+-------+

If you apply “two-constraints.yaml” to this cluster, you will notice “mypod” stays in the Pending state. This is because: to satisfy the first constraint, “mypod” can only be put into “zoneB”; while in terms of the second constraint, “mypod” can only be put onto “node2”. The joint result of “zoneB” and “node2” returns nothing.

To overcome this situation, you can either increase the maxSkew or modify one of the constraints to use whenUnsatisfiable: ScheduleAnyway.
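For example, a sketch of the second option: two-constraints.yaml with only the node-level constraint relaxed to ScheduleAnyway, while the zone-level constraint stays hard.

    kind: Pod
    apiVersion: v1
    metadata:
      name: mypod
      labels:
        foo: bar
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: zone
        whenUnsatisfiable: DoNotSchedule    # still a hard requirement across zones
        labelSelector:
          matchLabels:
            foo: bar
      - maxSkew: 1
        topologyKey: node
        whenUnsatisfiable: ScheduleAnyway   # node-level spreading becomes best-effort
        labelSelector:
          matchLabels:
            foo: bar
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1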

Conventions

There are some implicit conventions worth noting here:

  • Only the Pods in the same namespace as the incoming Pod can be matching candidates.

  • Nodes without the label named by topologySpreadConstraints[*].topologyKey will be bypassed. This implies that:

    • the Pods located on those nodes do not impact maxSkew calculation - in the above example, suppose “node1” does not have the label “zone”; then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into “zoneA”.
    • the incoming Pod has no chance to be scheduled onto such nodes - in the above example, suppose a “node5” carrying the label {zone-typo: zoneC} joins the cluster; it will be bypassed due to the absence of the label key “zone”.
  • Be aware of what will happen if the incoming Pod’s topologySpreadConstraints[].labelSelector doesn’t match its own labels. In the above example, if we remove the incoming Pod’s labels, it can still be placed onto “zoneB” since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - zoneA still has 2 Pods which hold the label {foo:bar}, and zoneB has 1 Pod which holds the label {foo:bar}. So if this is not what you expect, we recommend that the workload’s topologySpreadConstraints[].labelSelector match its own labels.

  • If the incoming Pod has spec.nodeSelector or spec.affinity.nodeAffinity defined, nodes not matching them will be bypassed.

Suppose you have a 5-node cluster ranging from zoneA to zoneC:

    +---------------+---------------+-------+
    |     zoneA     |     zoneB     | zoneC |
    +-------+-------+-------+-------+-------+
    | node1 | node2 | node3 | node4 | node5 |
    +-------+-------+-------+-------+-------+
    |   P   |   P   |   P   |       |       |
    +-------+-------+-------+-------+-------+

and you know that “zoneC” must be excluded. In this case, you can compose the yaml as below, so that “mypod” will be placed onto “zoneB” instead of “zoneC”. Similarly spec.nodeSelector is also respected.

pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml

    kind: Pod
    apiVersion: v1
    metadata:
      name: mypod
      labels:
        foo: bar
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: zone
                operator: NotIn
                values:
                - zoneC
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1

Comparison with PodAffinity/PodAntiAffinity

In Kubernetes, directives related to “Affinity” control how Pods are scheduled - more packed or more scattered.

  • For PodAffinity, you can try to pack any number of Pods into qualifying topology domain(s).
  • For PodAntiAffinity, only one Pod can be scheduled into a single topology domain (see the sketch after this list).
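For contrast, here is a minimal sketch of the PodAntiAffinity style of spreading, reusing the foo: bar label and the zone topology key from the earlier examples; it forbids co-location of matching Pods in a zone entirely rather than bounding the skew.

    kind: Pod
    apiVersion: v1
    metadata:
      name: mypod
      labels:
        foo: bar
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                foo: bar
            topologyKey: zone     # at most one matching Pod per zone
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1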

The “EvenPodsSpread” feature provides flexible options to distribute Pods evenly across different topology domains - to achieve high availability or cost-saving. This can also help with rolling updates of workloads and scaling out replicas smoothly. See Motivation for more details.

Known Limitations

As of 1.16, in which this feature is Alpha, there are some known limitations:

  • Scaling down a Deployment may result in imbalanced Pods distribution.
  • Pods matched on tainted nodes are respected. See Issue 80921.
