Controlling pod placement by using pod topology spread constraints

You can use pod topology spread constraints to control the placement of your pods across nodes, zones, regions, or other user-defined topology domains.

About pod topology spread constraints

By using a pod topology spread constraint, you provide fine-grained control over the distribution of pods across failure domains to help achieve high availability and more efficient resource utilization.

OKD administrators can label nodes to provide topology information, such as regions, zones, nodes, or other user-defined domains. After these labels are set on nodes, users can then define pod topology spread constraints to control the placement of pods across these topology domains.

You specify which pods to group together, which topology domains they are spread among, and the acceptable skew. Only pods within the same namespace are matched and grouped together when spreading due to a constraint.
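For example, suppose maxSkew is 1 and the topology domains are three zones: if zone A runs two matching pods and zones B and C run one each, a new matching pod can be placed only in zone B or zone C, because placing it in zone A would raise the difference between the most and least populated zones to 2.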

Configuring pod topology spread constraints

The following steps demonstrate how to configure pod topology spread constraints to distribute pods that match the specified labels based on their zone.

You can specify multiple pod topology spread constraints, but you must ensure that they do not conflict with each other. All pod topology spread constraints must be satisfied for a pod to be placed.
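For example, if one constraint can be satisfied only by scheduling the pod onto nodes that violate a second constraint, and both constraints use whenUnsatisfiable: DoNotSchedule, the pod remains unscheduled in the Pending state.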

Prerequisites

  • A user with the cluster-admin role has added the required labels to nodes, as shown in the example that follows.
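
    For example, a cluster administrator can apply the zone label used in this procedure with a command similar to the following; the node name and zone value here are illustrative:

      $ oc label node my-node-1 topology.kubernetes.io/zone=us-east-1a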

Procedure

  1. Create a Pod spec and specify a pod topology spread constraint:

    Example pod-spec.yaml file

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
      labels:
        region: us-east
    spec:
      topologySpreadConstraints:
      - maxSkew: 1 (1)
        topologyKey: topology.kubernetes.io/zone (2)
        whenUnsatisfiable: DoNotSchedule (3)
        labelSelector: (4)
          matchLabels:
            region: us-east (5)
        matchLabelKeys:
        - my-pod-label (6)
      containers:
      - image: "docker.io/ocpqe/hello-pod"
        name: hello-pod
    (1) The maximum difference in the number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0.
    (2) The key of a node label. Nodes with this key and an identical value are considered to be in the same topology.
    (3) How to handle a pod that does not satisfy the spread constraint. The default is DoNotSchedule, which tells the scheduler not to schedule the pod. Set to ScheduleAnyway to schedule the pod anyway; the scheduler still prioritizes placements that minimize the skew, but it does not block scheduling.
    (4) Pods that match this label selector are counted and recognized as a group when spreading to satisfy the constraint. Be sure to specify a label selector; otherwise, no pods can be matched.
    (5) Be sure that this Pod spec also sets its labels to match this label selector, so that the pod is counted as part of the group both now and when later pods are scheduled.
    (6) A list of pod label keys used to select the pods over which spreading is calculated (see the note after this list).
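
    For example, a common use of matchLabelKeys is to include the pod-template-hash label that a Deployment adds to its pods, so that spreading is calculated only among pods from the same ReplicaSet revision rather than across old and new revisions during a rollout.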
  2. Create the pod:

    $ oc create -f pod-spec.yaml
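
Verification

  • To confirm where the scheduler placed the pod, list pods with their node assignments. The NODE column shows the node that each pod was assigned to:

    $ oc get pods -o wide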

Example pod topology spread constraints

The following examples demonstrate pod topology spread constraint configurations.

Single pod topology spread constraint example

This example Pod spec defines one pod topology spread constraint. It matches pods labeled region: us-east, distributes them among zones, specifies a maximum skew of 1, and does not schedule the pod if it cannot meet these requirements.

  kind: Pod
  apiVersion: v1
  metadata:
    name: my-pod
    labels:
      region: us-east
  spec:
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          region: us-east
    containers:
    - image: "docker.io/ocpqe/hello-pod"
      name: hello-pod

Multiple pod topology spread constraints example

This example Pod spec defines two pod topology spread constraints. Both match pods labeled region: us-east, specify a maximum skew of 1, and do not schedule the pod if it cannot meet these requirements.

The first constraint distributes pods based on the user-defined label node, and the second constraint distributes pods based on the user-defined label rack. Both constraints must be met for the pod to be scheduled.

  kind: Pod
  apiVersion: v1
  metadata:
    name: my-pod-2
    labels:
      region: us-east
  spec:
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: node
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          region: us-east
    - maxSkew: 1
      topologyKey: rack
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          region: us-east
    containers:
    - image: "docker.io/ocpqe/hello-pod"
      name: hello-pod
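
For these constraints to be satisfiable, the nodes must carry the user-defined node and rack labels. For example, a cluster administrator could label a node with a command similar to the following; the node name and label values here are illustrative:

  $ oc label node my-node-1 node=node1 rack=rack1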
