Placing pods relative to other pods using affinity and anti-affinity rules

Affinity is a property of pods that causes them to prefer to be scheduled on the same node as other related pods. Anti-affinity is a property of pods that prevents them from being scheduled on the same node as certain other pods.

In OKD, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods.

Understanding pod affinity

Pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key/value labels on other pods.

  • Pod affinity can tell the scheduler to locate a new pod on the same node as other pods if the label selector on the new pod matches the label on the current pod.

  • Pod anti-affinity can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod.

For example, using affinity rules, you could spread or pack pods within a service or relative to pods in other services. Anti-affinity rules allow you to prevent pods of a particular service from scheduling on the same nodes as pods of another service that are known to interfere with the performance of the pods of the first service. Or, you could spread the pods of a service across nodes, availability zones, or availability sets to reduce correlated failures.
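As a sketch of the spreading case, the following pod spec (the app=web-store label, pod name, and image are illustrative, not from an existing workload) uses a preferred anti-affinity rule with a zone topology key so that replicas carrying the same label tend to land in different availability zones:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-store-1          # illustrative name
  labels:
    app: web-store           # illustrative label shared by all replicas
spec:
  affinity:
    podAntiAffinity:
      # Prefer zones that do not already run a pod with app=web-store.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - web-store
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: web-store
    image: docker.io/ocpqe/hello-pod
```

Because the rule is preferred rather than required, the scheduler still places the pod if every zone already runs a matching pod.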

A label selector might match pods from multiple deployments. Use unique combinations of labels when configuring anti-affinity rules to avoid unintentionally matching pods from other workloads.

There are two types of pod affinity rules: required and preferred.

Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement.

Depending on your pod priority and preemption settings, the scheduler might not be able to find an appropriate node for a pod without violating affinity requirements. If so, a pod might not be scheduled.

To prevent this situation, carefully configure pod affinity with equal-priority pods.

You configure pod affinity and anti-affinity through the Pod spec file. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule; the scheduler then attempts to meet the preferred rule.
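As a sketch of combining both rule types (the security=S1 and tier=cache labels are illustrative), a single podAffinity stanza can carry a required term and a preferred term side by side:

```yaml
spec:
  affinity:
    podAffinity:
      # Required: schedule only onto nodes that already run a pod labeled security=S1.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: kubernetes.io/hostname
      # Preferred: among those nodes, favor ones that also run a pod labeled tier=cache.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: tier
              operator: In
              values:
              - cache
          topologyKey: kubernetes.io/hostname
```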

The following example shows a Pod spec configured for pod affinity and anti-affinity.

In this example, the pod affinity rule indicates that the pod can schedule onto a node only if that node has at least one already-running pod with a label that has the key security and value S1. The pod anti-affinity rule says that the pod prefers not to schedule onto a node if that node is already running a pod with a label that has the key security and value S2.

Sample Pod config file with pod affinity

  apiVersion: v1
  kind: Pod
  metadata:
    name: with-pod-affinity
  spec:
    affinity:
      podAffinity: (1)
        requiredDuringSchedulingIgnoredDuringExecution: (2)
        - labelSelector:
            matchExpressions:
            - key: security (3)
              operator: In (4)
              values:
              - S1 (3)
          topologyKey: topology.kubernetes.io/zone
    containers:
    - name: with-pod-affinity
      image: docker.io/ocpqe/hello-pod
(1) Stanza to configure pod affinity.
(2) Defines a required rule.
(3) The key and value (label) that must be matched to apply the rule.
(4) The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In, NotIn, Exists, or DoesNotExist.
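For reference, a hedged fragment showing the Exists operator, which matches on the presence of a key alone, so no values list is given:

```yaml
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
    matchExpressions:
    - key: security   # matches any pod that has a security label, whatever its value
      operator: Exists
  topologyKey: kubernetes.io/hostname
```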

Sample Pod config file with pod anti-affinity

  apiVersion: v1
  kind: Pod
  metadata:
    name: with-pod-antiaffinity
  spec:
    affinity:
      podAntiAffinity: (1)
        preferredDuringSchedulingIgnoredDuringExecution: (2)
        - weight: 100 (3)
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: security (4)
                operator: In (5)
                values:
                - S2
            topologyKey: kubernetes.io/hostname
    containers:
    - name: with-pod-affinity
      image: docker.io/ocpqe/hello-pod
(1) Stanza to configure pod anti-affinity.
(2) Defines a preferred rule.
(3) Specifies a weight for a preferred rule. The node with the highest weight is preferred.
(4) Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label.
(5) The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In, NotIn, Exists, or DoesNotExist.

If labels on a node change at runtime such that the affinity rules on a pod are no longer met, the pod continues to run on the node.

Configuring a pod affinity rule

The following steps demonstrate a simple two-pod configuration that creates a pod with a label and a second pod that uses affinity so that it is scheduled with the first pod.

You cannot add an affinity directly to a scheduled pod.

Procedure

  1. Create a pod with a specific label in the pod spec:

    1. Create a YAML file with the following content:

      apiVersion: v1
      kind: Pod
      metadata:
        name: security-s1
        labels:
          security: S1
      spec:
        containers:
        - name: security-s1
          image: docker.io/ocpqe/hello-pod
    2. Create the pod.

      $ oc create -f <pod-spec>.yaml
  2. When creating other pods, configure the following parameters to add the affinity:

    1. Create a YAML file with the following content:

      apiVersion: v1
      kind: Pod
      metadata:
        name: security-s1-east
      #...
      spec:
        affinity: (1)
          podAffinity:
            requiredDuringSchedulingIgnoredDuringExecution: (2)
            - labelSelector:
                matchExpressions:
                - key: security (3)
                  values:
                  - S1
                  operator: In (4)
              topologyKey: topology.kubernetes.io/zone (5)
      #...
      (1) Adds a pod affinity.
      (2) Configures the requiredDuringSchedulingIgnoredDuringExecution parameter or the preferredDuringSchedulingIgnoredDuringExecution parameter.
      (3) Specifies the key and values that must be met. If you want the new pod to be scheduled with the other pod, use the same key and values parameters as the label on the first pod.
      (4) Specifies an operator. The operator can be In, NotIn, Exists, or DoesNotExist. For example, use the In operator to require that a pod with the specified label is running on the node.
      (5) Specifies a topologyKey, which is a prepopulated Kubernetes label that the system uses to denote a topology domain, such as a host name or zone.
    2. Create the pod.

      $ oc create -f <pod-spec>.yaml

Configuring a pod anti-affinity rule

The following steps demonstrate a simple two-pod configuration that creates a pod with a label and a second pod that uses a preferred anti-affinity rule to attempt to avoid being scheduled with the first pod.

You cannot add an affinity directly to a scheduled pod.

Procedure

  1. Create a pod with a specific label in the pod spec:

    1. Create a YAML file with the following content:

      apiVersion: v1
      kind: Pod
      metadata:
        name: security-s1
        labels:
          security: S1
      spec:
        containers:
        - name: security-s1
          image: docker.io/ocpqe/hello-pod
    2. Create the pod.

      $ oc create -f <pod-spec>.yaml
  2. When creating other pods, configure the following parameters:

    1. Create a YAML file with the following content:

      apiVersion: v1
      kind: Pod
      metadata:
        name: security-s2-east
      #...
      spec:
        affinity: (1)
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution: (2)
            - weight: 100 (3)
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                  - key: security (4)
                    values:
                    - S1
                    operator: In (5)
                topologyKey: kubernetes.io/hostname (6)
      #...
      (1) Adds a pod anti-affinity.
      (2) Configures the requiredDuringSchedulingIgnoredDuringExecution parameter or the preferredDuringSchedulingIgnoredDuringExecution parameter.
      (3) For a preferred rule, specifies a weight for the rule, 1-100. The node with the highest weight is preferred.
      (4) Specifies the key and values that must be met. If you do not want the new pod to be scheduled with the other pod, use the same key and values parameters as the label on the first pod.
      (5) Specifies an operator. The operator can be In, NotIn, Exists, or DoesNotExist. For example, use the In operator to require that a pod with the specified label is running on the node.
      (6) Specifies a topologyKey, which is a prepopulated Kubernetes label that the system uses to denote a topology domain, such as a host name or zone.
    2. Create the pod.

      $ oc create -f <pod-spec>.yaml

Sample pod affinity and anti-affinity rules

The following examples demonstrate pod affinity and pod anti-affinity.

Pod Affinity

The following example demonstrates pod affinity for pods with matching labels and label selectors.

  • The pod team4 has the label team:4.

    apiVersion: v1
    kind: Pod
    metadata:
      name: team4
      labels:
        team: "4"
    #...
    spec:
      containers:
      - name: ocp
        image: docker.io/ocpqe/hello-pod
    #...
  • The pod team4a has the label selector team:4 under podAffinity.

    apiVersion: v1
    kind: Pod
    metadata:
      name: team4a
    #...
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: team
                operator: In
                values:
                - "4"
            topologyKey: kubernetes.io/hostname
      containers:
      - name: pod-affinity
        image: docker.io/ocpqe/hello-pod
    #...
  • The team4a pod is scheduled on the same node as the team4 pod.

Pod Anti-affinity

The following example demonstrates pod anti-affinity for pods with matching labels and label selectors.

  • The pod pod-s1 has the label security:s1.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-s1
      labels:
        security: s1
    #...
    spec:
      containers:
      - name: ocp
        image: docker.io/ocpqe/hello-pod
    #...
  • The pod pod-s2 has the label selector security:s1 under podAntiAffinity.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-s2
    #...
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: security
                operator: In
                values:
                - s1
            topologyKey: kubernetes.io/hostname
      containers:
      - name: pod-antiaffinity
        image: docker.io/ocpqe/hello-pod
    #...
  • The pod pod-s2 cannot be scheduled on the same node as pod-s1.

Pod Affinity with no Matching Labels

The following example demonstrates pod affinity for pods without matching labels and label selectors.

  • The pod pod-s1 has the label security:s1.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-s1
      labels:
        security: s1
    #...
    spec:
      containers:
      - name: ocp
        image: docker.io/ocpqe/hello-pod
    #...
  • The pod pod-s2 has the label selector security:s2.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-s2
    #...
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: security
                operator: In
                values:
                - s2
            topologyKey: kubernetes.io/hostname
      containers:
      - name: pod-affinity
        image: docker.io/ocpqe/hello-pod
    #...
  • The pod pod-s2 is not scheduled unless there is a node with a pod that has the security:s2 label. If there is no other pod with that label, the new pod remains in a pending state:

    Example output

    NAME     READY   STATUS    RESTARTS   AGE   IP       NODE
    pod-s2   0/1     Pending   0          32s   <none>   <none>

Using pod affinity and anti-affinity to control where an Operator is installed

By default, when you install an Operator, OKD installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes.

The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes:

  • If an Operator requires a particular platform, such as amd64 or arm64

  • If an Operator requires a particular operating system, such as Linux or Windows

  • If you want Operators that work together scheduled on the same host or on hosts located on the same rack

  • If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues

You can control where an Operator pod is installed by adding a pod affinity or anti-affinity to the Operator’s Subscription object.

The following examples show how to use pod affinity to place an Operator pod on a node that runs specific pods, and how to use pod anti-affinity to prevent the Custom Metrics Autoscaler Operator from being installed on any node that has pods with a specific label:

Pod affinity example that places the Operator pod on one or more specific nodes

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openshift-custom-metrics-autoscaler-operator
    namespace: openshift-keda
  spec:
    name: my-package
    source: my-operators
    sourceNamespace: operator-registries
    config:
      affinity:
        podAffinity: (1)
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - test
            topologyKey: kubernetes.io/hostname
  #...
(1) A pod affinity that places the Operator’s pod on a node that has pods with the app=test label.

Pod anti-affinity example that prevents the Operator pod from one or more specific nodes

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openshift-custom-metrics-autoscaler-operator
    namespace: openshift-keda
  spec:
    name: my-package
    source: my-operators
    sourceNamespace: operator-registries
    config:
      affinity:
        podAntiAffinity: (1)
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: cpu
                operator: In
                values:
                - high
            topologyKey: kubernetes.io/hostname
  #...
(1) A pod anti-affinity that prevents the Operator’s pod from being scheduled on a node that has pods with the cpu=high label.

Procedure

To control the placement of an Operator pod, complete the following steps:

  1. Install the Operator as usual.

  2. If needed, ensure that your nodes are labeled to properly respond to the affinity.

  3. Edit the Operator Subscription object to add an affinity:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-custom-metrics-autoscaler-operator
      namespace: openshift-keda
    spec:
      name: my-package
      source: my-operators
      sourceNamespace: operator-registries
      config:
        affinity:
          podAntiAffinity: (1)
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                  - ip-10-0-185-229.ec2.internal
              topologyKey: topology.kubernetes.io/zone
    #...
    (1) Add a podAffinity or podAntiAffinity.

Verification

  • To ensure that the pod is deployed on the specific node, run the following command:

    $ oc get pods -o wide

    Example output

    NAME                                                  READY   STATUS    RESTARTS   AGE   IP            NODE                           NOMINATED NODE   READINESS GATES
    custom-metrics-autoscaler-operator-5dcc45d656-bhshg   1/1     Running   0          50s   10.131.0.20   ip-10-0-185-229.ec2.internal   <none>           <none>