Configuring the default scheduler to control pod placement

The default OKD pod scheduler is responsible for determining the placement of new pods onto nodes within the cluster. It reads data from the pod and tries to find a node that is a good fit based on configured policies. It is completely independent and exists as a standalone, pluggable solution. It does not modify the pod; it only creates a binding for the pod that ties the pod to the particular node.

Configuring a scheduler policy is deprecated and is planned for removal in a future release. For more information on the Technology Preview alternative, see Scheduling pods using a scheduler profile.

A selection of predicates and priorities defines the policy for the scheduler. See Modifying scheduler policy for a list of predicates and priorities.

Sample default scheduler object

  apiVersion: config.openshift.io/v1
  kind: Scheduler
  metadata:
    annotations:
      release.openshift.io/create-only: "true"
    creationTimestamp: 2019-05-20T15:39:01Z
    generation: 1
    name: cluster
    resourceVersion: "1491"
    selfLink: /apis/config.openshift.io/v1/schedulers/cluster
    uid: 6435dd99-7b15-11e9-bd48-0aec821b8e34
  spec:
    policy: (1)
      name: scheduler-policy
    defaultNodeSelector: type=user-node,region=east (2)
1You can specify the name of a custom scheduler policy file.
2Optional: Specify a default node selector to restrict pod placement to specific nodes. The default node selector is applied to the pods created in all namespaces. Pods can be scheduled on nodes with labels that match the default node selector and any existing pod node selectors. Namespaces having project-wide node selectors are not impacted even if this field is set.
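
For example, if you only want to set or change the default node selector on the existing cluster Scheduler object, one way to do it is with a merge patch such as the following (the label values are illustrative):

  $ oc patch Scheduler cluster --type=merge -p '{"spec":{"defaultNodeSelector":"type=user-node,region=east"}}'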

Understanding default scheduling

The existing generic scheduler is the default platform-provided scheduler engine that selects a node to host the pod in a three-step operation:

Filter the Nodes

The available nodes are filtered based on the constraints or requirements specified. This is done by running each node through the list of filter functions called predicates.

Prioritize the Filtered List of Nodes

This is achieved by passing each node through a series of priority functions that assign it a score between 0 and 10, with 0 indicating a bad fit and 10 indicating a good fit to host the pod. The scheduler configuration can also take in a simple weight (positive numeric value) for each priority function. The node score provided by each priority function is multiplied by the weight (the default weight for most priorities is 1) and then combined by adding the scores for each node provided by all the priorities. This weight attribute can be used by administrators to give higher importance to some priorities.

Select the Best Fit Node

The nodes are sorted based on their scores and the node with the highest score is selected to host the pod. If multiple nodes have the same high score, then one of them is selected at random.
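
For example, with made-up scores: if a node receives 8 from LeastRequestedPriority (weight 1), 6 from BalancedResourceAllocation (weight 1), and 10 from a priority that an administrator has weighted at 2, its combined score is 8*1 + 6*1 + 10*2 = 34, and that total is compared against the combined scores of the other filtered nodes.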

Understanding Scheduler Policy

The selection of predicates and priorities defines the policy for the scheduler.

The scheduler configuration file is a JSON file, named policy.cfg, that specifies the predicates and priorities for the scheduler to consider.

In the absence of the scheduler policy file, the default scheduler behavior is used.

The predicates and priorities defined in the scheduler configuration file completely override the default scheduler policy. If any of the default predicates and priorities are required, you must explicitly specify the functions in the policy configuration.

Sample scheduler config map

  apiVersion: v1
  data:
    policy.cfg: |
      {
        "kind" : "Policy",
        "apiVersion" : "v1",
        "predicates" : [
          {"name" : "MaxGCEPDVolumeCount"},
          {"name" : "GeneralPredicates"}, (1)
          {"name" : "MaxAzureDiskVolumeCount"},
          {"name" : "MaxCSIVolumeCountPred"},
          {"name" : "CheckVolumeBinding"},
          {"name" : "MaxEBSVolumeCount"},
          {"name" : "MatchInterPodAffinity"},
          {"name" : "CheckNodeUnschedulable"},
          {"name" : "NoDiskConflict"},
          {"name" : "NoVolumeZoneConflict"},
          {"name" : "PodToleratesNodeTaints"}
        ],
        "priorities" : [
          {"name" : "LeastRequestedPriority", "weight" : 1},
          {"name" : "BalancedResourceAllocation", "weight" : 1},
          {"name" : "ServiceSpreadingPriority", "weight" : 1},
          {"name" : "NodePreferAvoidPodsPriority", "weight" : 1},
          {"name" : "NodeAffinityPriority", "weight" : 1},
          {"name" : "TaintTolerationPriority", "weight" : 1},
          {"name" : "ImageLocalityPriority", "weight" : 1},
          {"name" : "SelectorSpreadPriority", "weight" : 1},
          {"name" : "InterPodAffinityPriority", "weight" : 1},
          {"name" : "EqualPriority", "weight" : 1}
        ]
      }
  kind: ConfigMap
  metadata:
    creationTimestamp: "2019-09-17T08:42:33Z"
    name: scheduler-policy
    namespace: openshift-config
    resourceVersion: "59500"
    selfLink: /api/v1/namespaces/openshift-config/configmaps/scheduler-policy
    uid: 17ee8865-d927-11e9-b213-02d1e1709840
1The GeneralPredicates predicate represents the PodFitsResources, HostName, PodFitsHostPorts, and MatchNodeSelector predicates. Because you are not allowed to configure the same predicate multiple times, the GeneralPredicates predicate cannot be used alongside any of the four represented predicates.
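
If you need to control the represented predicates individually, one option is to omit GeneralPredicates and list its component predicates instead; a sketch of such a predicates section:

  "predicates" : [
    {"name" : "PodFitsResources"},
    {"name" : "PodFitsHostPorts"},
    {"name" : "HostName"},
    {"name" : "MatchNodeSelector"}
  ]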

Creating a scheduler policy file

You can change the default scheduling behavior by creating a JSON file with the desired predicates and priorities. You then generate a config map from the JSON file and point the cluster Scheduler object to use the config map.

Procedure

To configure the scheduler policy:

  1. Create a JSON file named policy.cfg with the desired predicates and priorities.

    Sample scheduler JSON file

    {
      "kind" : "Policy",
      "apiVersion" : "v1",
      "predicates" : [ (1)
        {"name" : "MaxGCEPDVolumeCount"},
        {"name" : "GeneralPredicates"},
        {"name" : "MaxAzureDiskVolumeCount"},
        {"name" : "MaxCSIVolumeCountPred"},
        {"name" : "CheckVolumeBinding"},
        {"name" : "MaxEBSVolumeCount"},
        {"name" : "MatchInterPodAffinity"},
        {"name" : "CheckNodeUnschedulable"},
        {"name" : "NoDiskConflict"},
        {"name" : "NoVolumeZoneConflict"},
        {"name" : "PodToleratesNodeTaints"}
      ],
      "priorities" : [ (2)
        {"name" : "LeastRequestedPriority", "weight" : 1},
        {"name" : "BalancedResourceAllocation", "weight" : 1},
        {"name" : "ServiceSpreadingPriority", "weight" : 1},
        {"name" : "NodePreferAvoidPodsPriority", "weight" : 1},
        {"name" : "NodeAffinityPriority", "weight" : 1},
        {"name" : "TaintTolerationPriority", "weight" : 1},
        {"name" : "ImageLocalityPriority", "weight" : 1},
        {"name" : "SelectorSpreadPriority", "weight" : 1},
        {"name" : "InterPodAffinityPriority", "weight" : 1},
        {"name" : "EqualPriority", "weight" : 1}
      ]
    }
    1Add the predicates as needed.
    2Add the priorities as needed.
  2. Create a config map based on the scheduler JSON file:

    $ oc create configmap -n openshift-config --from-file=policy.cfg <configmap-name> (1)
    1Enter a name for the config map.

    For example:

    $ oc create configmap -n openshift-config --from-file=policy.cfg scheduler-policy

    Example output

    configmap/scheduler-policy created
  3. Edit the Scheduler Operator custom resource to add the config map:

    $ oc patch Scheduler cluster --type='merge' -p '{"spec":{"policy":{"name":"<configmap-name>"}}}' (1)
    1Specify the name of the config map.

    For example:

    $ oc patch Scheduler cluster --type='merge' -p '{"spec":{"policy":{"name":"scheduler-policy"}}}'

    After making the change to the Scheduler config resource, wait for the openshift-kube-apiserver pods to redeploy. This can take several minutes. Until the pods redeploy, the new scheduler policy does not take effect.
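
    One way to watch the rollout is to list the pods in that namespace and wait until they have all been re-created and report Running, for example:

    $ oc get pods -n openshift-kube-apiserver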

  4. Verify the scheduler policy is configured by viewing the log of a scheduler pod in the openshift-kube-scheduler namespace. The following command checks for the predicates and priorities that are being registered by the scheduler:

    $ oc logs <scheduler-pod> | grep predicates

    For example:

    $ oc logs openshift-kube-scheduler-ip-10-0-141-29.ec2.internal | grep predicates

    Example output

    Creating scheduler with fit predicates 'map[MaxGCEPDVolumeCount:{} MaxAzureDiskVolumeCount:{} CheckNodeUnschedulable:{} NoDiskConflict:{} NoVolumeZoneConflict:{} GeneralPredicates:{} MaxCSIVolumeCountPred:{} CheckVolumeBinding:{} MaxEBSVolumeCount:{} MatchInterPodAffinity:{} PodToleratesNodeTaints:{}]' and priority functions 'map[InterPodAffinityPriority:{} LeastRequestedPriority:{} ServiceSpreadingPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} EqualPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{}]'

Modifying scheduler policies

You change scheduling behavior by creating or editing your scheduler policy config map in the openshift-config project. Add and remove predicates and priorities to the config map to create a scheduler policy.

Procedure

To modify the current custom scheduling, use one of the following methods:

  • Edit the scheduler policy config map:

    $ oc edit configmap <configmap-name> -n openshift-config

    For example:

    $ oc edit configmap scheduler-policy -n openshift-config

    Example output

    apiVersion: v1
    data:
      policy.cfg: |
        {
          "kind" : "Policy",
          "apiVersion" : "v1",
          "predicates" : [ (1)
            {"name" : "MaxGCEPDVolumeCount"},
            {"name" : "GeneralPredicates"},
            {"name" : "MaxAzureDiskVolumeCount"},
            {"name" : "MaxCSIVolumeCountPred"},
            {"name" : "CheckVolumeBinding"},
            {"name" : "MaxEBSVolumeCount"},
            {"name" : "MatchInterPodAffinity"},
            {"name" : "CheckNodeUnschedulable"},
            {"name" : "NoDiskConflict"},
            {"name" : "NoVolumeZoneConflict"},
            {"name" : "PodToleratesNodeTaints"}
          ],
          "priorities" : [ (2)
            {"name" : "LeastRequestedPriority", "weight" : 1},
            {"name" : "BalancedResourceAllocation", "weight" : 1},
            {"name" : "ServiceSpreadingPriority", "weight" : 1},
            {"name" : "NodePreferAvoidPodsPriority", "weight" : 1},
            {"name" : "NodeAffinityPriority", "weight" : 1},
            {"name" : "TaintTolerationPriority", "weight" : 1},
            {"name" : "ImageLocalityPriority", "weight" : 1},
            {"name" : "SelectorSpreadPriority", "weight" : 1},
            {"name" : "InterPodAffinityPriority", "weight" : 1},
            {"name" : "EqualPriority", "weight" : 1}
          ]
        }
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-09-17T17:44:19Z"
      name: scheduler-policy
      namespace: openshift-config
      resourceVersion: "15370"
      selfLink: /api/v1/namespaces/openshift-config/configmaps/scheduler-policy
    1Add or remove predicates as needed.
    2Add, remove, or change the weight of priorities as needed.

    It can take a few minutes for the scheduler to restart the pods with the updated policy.

  • Change the predicates and priorities being used:

    1. Remove the scheduler policy config map:

      $ oc delete configmap -n openshift-config <name>

      For example:

      $ oc delete configmap -n openshift-config scheduler-policy
    2. Edit the policy.cfg file to add or remove predicates and priorities as needed.

      For example:

      $ vi policy.cfg

      Example output

      {
        "kind" : "Policy",
        "apiVersion" : "v1",
        "predicates" : [
          {"name" : "MaxGCEPDVolumeCount"},
          {"name" : "GeneralPredicates"},
          {"name" : "MaxAzureDiskVolumeCount"},
          {"name" : "MaxCSIVolumeCountPred"},
          {"name" : "CheckVolumeBinding"},
          {"name" : "MaxEBSVolumeCount"},
          {"name" : "MatchInterPodAffinity"},
          {"name" : "CheckNodeUnschedulable"},
          {"name" : "NoDiskConflict"},
          {"name" : "NoVolumeZoneConflict"},
          {"name" : "PodToleratesNodeTaints"}
        ],
        "priorities" : [
          {"name" : "LeastRequestedPriority", "weight" : 1},
          {"name" : "BalancedResourceAllocation", "weight" : 1},
          {"name" : "ServiceSpreadingPriority", "weight" : 1},
          {"name" : "NodePreferAvoidPodsPriority", "weight" : 1},
          {"name" : "NodeAffinityPriority", "weight" : 1},
          {"name" : "TaintTolerationPriority", "weight" : 1},
          {"name" : "ImageLocalityPriority", "weight" : 1},
          {"name" : "SelectorSpreadPriority", "weight" : 1},
          {"name" : "InterPodAffinityPriority", "weight" : 1},
          {"name" : "EqualPriority", "weight" : 1}
        ]
      }
    3. Re-create the scheduler policy config map based on the scheduler JSON file:

      $ oc create configmap -n openshift-config --from-file=policy.cfg <configmap-name> (1)
      1Enter a name for the config map.

      For example:

      $ oc create configmap -n openshift-config --from-file=policy.cfg scheduler-policy

      Example output

      configmap/scheduler-policy created

Understanding the scheduler predicates

Predicates are rules that filter out unqualified nodes.

There are several predicates provided by default in OKD. Some of these predicates can be customized by providing certain parameters. Multiple predicates can be combined to provide additional filtering of nodes.

Static Predicates

These predicates do not take any configuration parameters or inputs from the user. These are specified in the scheduler configuration using their exact name.

Default Predicates

The default scheduler policy includes the following predicates:

The NoVolumeZoneConflict predicate checks that the volumes a pod requests are available in the zone.

  {"name" : "NoVolumeZoneConflict"}

The MaxEBSVolumeCount predicate checks the maximum number of volumes that can be attached to an AWS instance.

  {"name" : "MaxEBSVolumeCount"}

The MaxAzureDiskVolumeCount predicate checks the maximum number of Azure Disk Volumes.

  {"name" : "MaxAzureDiskVolumeCount"}

The PodToleratesNodeTaints predicate checks if a pod can tolerate the node taints.

  {"name" : "PodToleratesNodeTaints"}

The CheckNodeUnschedulable predicate checks if a pod can be scheduled on a node with Unschedulable spec.

  {"name" : "CheckNodeUnschedulable"}

The CheckVolumeBinding predicate evaluates if a pod can fit based on the volumes it requests, for both bound and unbound PVCs.

  • For PVCs that are bound, the predicate checks that the corresponding PV’s node affinity is satisfied by the given node.

  • For PVCs that are unbound, the predicate searches for available PVs that can satisfy the PVC requirements and whose node affinity is satisfied by the given node.

The predicate returns true if all bound PVCs have compatible PVs with the node, and if all unbound PVCs can be matched with an available and node-compatible PV.

  {"name" : "CheckVolumeBinding"}

The NoDiskConflict predicate checks if the volume requested by a pod is available.

  {"name" : "NoDiskConflict"}

The MaxGCEPDVolumeCount predicate checks the maximum number of Google Compute Engine (GCE) Persistent Disks (PD).

  {"name" : "MaxGCEPDVolumeCount"}

The MaxCSIVolumeCountPred predicate determines how many Container Storage Interface (CSI) volumes should be attached to a node and whether that number exceeds a configured limit.

  {"name" : "MaxCSIVolumeCountPred"}

The MatchInterPodAffinity predicate checks if the pod affinity/anti-affinity rules permit the pod.

  {"name" : "MatchInterPodAffinity"}
Other Static Predicates

OKD also supports the following predicates:

The CheckNode-* predicates cannot be used if the Taint Nodes By Condition feature is enabled. The Taint Nodes By Condition feature is enabled by default.

The CheckNodeCondition predicate checks if a pod can be scheduled on a node reporting out of disk, network unavailable, or not ready conditions.

  {"name" : "CheckNodeCondition"}

The CheckNodeLabelPresence predicate checks if all of the specified labels exist on a node, regardless of their value.

  {"name" : "CheckNodeLabelPresence"}

The CheckServiceAffinity predicate checks that ServiceAffinity labels are homogeneous for pods that are scheduled on a node.

  {"name" : "CheckServiceAffinity"}

The PodToleratesNodeNoExecuteTaints predicate checks if a pod's tolerations can tolerate a node's NoExecute taints.

  {"name" : "PodToleratesNodeNoExecuteTaints"}

General Predicates

The following general predicates check whether non-critical predicates and essential predicates pass. Non-critical predicates are the predicates that only non-critical pods must pass and essential predicates are the predicates that all pods must pass.

The default scheduler policy includes the general predicates.

Non-critical general predicates

The PodFitsResources predicate determines a fit based on resource availability (CPU, memory, GPU, and so forth). The nodes can declare their resource capacities and then pods can specify what resources they require. Fit is based on requested, rather than used resources.

  {"name" : "PodFitsResources"}
Essential general predicates

The PodFitsHostPorts predicate determines if a node has free ports for the requested pod ports (absence of port conflicts).

  {"name" : "PodFitsHostPorts"}

The HostName predicate determines fit based on the presence of the Host parameter and a string match with the name of the host.

  {"name" : "HostName"}

The MatchNodeSelector predicate determines fit based on node selector (nodeSelector) queries defined in the pod.

  {"name" : "MatchNodeSelector"}

Understanding the scheduler priorities

Priorities are rules that rank nodes according to preferences.

A custom set of priorities can be specified to configure the scheduler. There are several priorities provided by default in OKD. Other priorities can be customized by providing certain parameters. Multiple priorities can be combined and different weights can be given to each to impact the prioritization.

Static Priorities

Static priorities do not take any configuration parameters from the user, except weight. A weight is required to be specified and cannot be 0 or negative.

These are specified in the scheduler policy config map in the openshift-config project.

Default Priorities

The default scheduler policy includes the following priorities. Each of the priority functions has a weight of 1, except NodePreferAvoidPodsPriority, which has a weight of 10000.

The NodeAffinityPriority priority prioritizes nodes according to node affinity scheduling preferences.

  {"name" : "NodeAffinityPriority", "weight" : 1}

The TaintTolerationPriority priority prioritizes nodes that have fewer intolerable taints for a pod. An intolerable taint is one with the PreferNoSchedule effect.

  {"name" : "TaintTolerationPriority", "weight" : 1}

The ImageLocalityPriority priority prioritizes nodes that already have the requested pod container images.

  {"name" : "ImageLocalityPriority", "weight" : 1}

The SelectorSpreadPriority priority looks for services, replication controllers (RC), replica sets (RS), and stateful sets that match the pod, and then finds existing pods that match those selectors. The scheduler favors nodes that have fewer existing matching pods, scheduling the pod onto the node with the smallest number of pods that match the same selectors as the pod being scheduled.

  {"name" : "SelectorSpreadPriority", "weight" : 1}

The InterPodAffinityPriority priority computes a sum by iterating through the elements of weightedPodAffinityTerm and adding weight to the sum if the corresponding PodAffinityTerm is satisfied for that node. The node(s) with the highest sum are the most preferred.

  {"name" : "InterPodAffinityPriority", "weight" : 1}

The LeastRequestedPriority priority favors nodes with fewer requested resources. It calculates the percentage of memory and CPU requested by pods scheduled on the node, and prioritizes nodes that have the highest available/remaining capacity.

  {"name" : "LeastRequestedPriority", "weight" : 1}
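
As a rough illustration of the idea (a simplified sketch, not the exact upstream formula): on a node with 4000 millicores of CPU and 8 GiB of memory where the scheduled pods request 1000 millicores and 2 GiB, about 75% of each resource remains free, so the node scores near 7.5 out of 10, while a more heavily loaded node scores lower.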

The BalancedResourceAllocation priority favors nodes with balanced resource usage rate. It calculates the difference between the consumed CPU and memory as a fraction of capacity, and prioritizes the nodes based on how close the two metrics are to each other. This should always be used together with LeastRequestedPriority.

  {"name" : "BalancedResourceAllocation", "weight" : 1}

The NodePreferAvoidPodsPriority priority ignores pods that are owned by a controller other than a replication controller.

  {"name" : "NodePreferAvoidPodsPriority", "weight" : 10000}
Other Static Priorities

OKD also supports the following priorities:

The EqualPriority priority gives an equal weight of 1 to all nodes, if no priority configurations are provided. We recommend using this priority only for testing environments.

  {"name" : "EqualPriority", "weight" : 1}

The MostRequestedPriority priority prioritizes nodes with the most requested resources. It calculates the percentage of memory and CPU requested by pods scheduled on the node, and prioritizes nodes based on the average of the fractions of requested resources to capacity, favoring the highest.

  {"name" : "MostRequestedPriority", "weight" : 1}

The ServiceSpreadingPriority priority spreads pods by minimizing the number of pods belonging to the same service onto the same machine.

  {"name" : "ServiceSpreadingPriority", "weight" : 1}

Configurable Priorities

You can configure these priorities in the scheduler policy config map, in the openshift-config namespace, to add labels to affect how the priorities work.

The type of the priority function is identified by the argument that it takes. Because these are configurable, multiple priorities of the same type (but with different configuration parameters) can be combined as long as their user-defined names are different.

For information on using these priorities, see Modifying Scheduler Policy.

The ServiceAntiAffinity priority takes a label and ensures a good spread of the pods belonging to the same service across the group of nodes based on the label values. It gives the same score to all nodes that have the same value for the specified label. It gives a higher score to nodes within a group with the least concentration of pods.

  {
    "kind": "Policy",
    "apiVersion": "v1",
    "priorities":[
      {
        "name":"<name>", (1)
        "weight" : 1, (2)
        "argument":{
          "serviceAntiAffinity":{
            "label": "<label>" (3)
          }
        }
      }
    ]
  }
1Specify a name for the priority.
2Specify a weight. Enter a non-zero positive value.
3Specify a label to match.

For example:

  {
    "kind": "Policy",
    "apiVersion": "v1",
    "priorities": [
      {
        "name":"RackSpread",
        "weight" : 1,
        "argument": {
          "serviceAntiAffinity": {
            "label": "rack"
          }
        }
      }
    ]
  }

In some situations, using the ServiceAntiAffinity parameter based on custom labels does not spread pods as expected. See this Red Hat Solution.
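
Because configurable priorities of the same type can be combined under different user-defined names, a single policy can, for example, spread a service across both racks and zones with two serviceAntiAffinity priorities (the label values are illustrative):

  "priorities" : [
    {"name" : "RackSpread", "weight" : 1, "argument" : {"serviceAntiAffinity" : {"label" : "rack"}}},
    {"name" : "ZoneSpread", "weight" : 2, "argument" : {"serviceAntiAffinity" : {"label" : "zone"}}}
  ]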

The labelPreference parameter gives priority based on the specified label. If the label is present on a node, that node is given priority. If presence is set to false, priority is given to nodes that do not have the label. If multiple priorities with the labelPreference parameter are set, all of the priorities must have the same weight.

  {
    "kind": "Policy",
    "apiVersion": "v1",
    "priorities":[
      {
        "name":"<name>", (1)
        "weight" : 1, (2)
        "argument":{
          "labelPreference":{
            "label": "<label>", (3)
            "presence": true (4)
          }
        }
      }
    ]
  }
1Specify a name for the priority.
2Specify a weight. Enter a non-zero positive value.
3Specify a label to match.
4Specify whether the label is required, either true or false.
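
For example, a hypothetical policy fragment that prefers nodes carrying the zone label might look like this:

  {
    "kind": "Policy",
    "apiVersion": "v1",
    "priorities": [
      {
        "name": "ZonePreferred",
        "weight": 1,
        "argument": {
          "labelPreference": {
            "label": "zone",
            "presence": true
          }
        }
      }
    ]
  }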

Sample Policy Configurations

The following configuration shows the general structure of a scheduler policy file, using one configurable predicate and one configurable priority:

  {
    "kind": "Policy",
    "apiVersion": "v1",
    "predicates": [
      {
        "name": "RegionZoneAffinity", (1)
        "argument": {
          "serviceAffinity": { (2)
            "labels": ["region", "zone"] (3)
          }
        }
      }
    ],
    "priorities": [
      {
        "name":"RackSpread", (4)
        "weight" : 1,
        "argument": {
          "serviceAntiAffinity": { (5)
            "label": "rack" (6)
          }
        }
      }
    ]
  }
1The name for the predicate.
2The type of predicate.
3The labels for the predicate.
4The name for the priority.
5The type of priority.
6The labels for the priority.

In all of the sample configurations below, the list of predicates and priority functions is truncated to include only the ones that pertain to the use case specified. In practice, a complete/meaningful scheduler policy should include most, if not all, of the default predicates and priorities listed above.

The following example defines three topological levels, region (affinity) → zone (affinity) → rack (anti-affinity):

  {
    "kind": "Policy",
    "apiVersion": "v1",
    "predicates": [
      {
        "name": "RegionZoneAffinity",
        "argument": {
          "serviceAffinity": {
            "labels": ["region", "zone"]
          }
        }
      }
    ],
    "priorities": [
      {
        "name":"RackSpread",
        "weight" : 1,
        "argument": {
          "serviceAntiAffinity": {
            "label": "rack"
          }
        }
      }
    ]
  }

The following example defines three topological levels, city (affinity) → building (anti-affinity) → room (anti-affinity):

  {
    "kind": "Policy",
    "apiVersion": "v1",
    "predicates": [
      {
        "name": "CityAffinity",
        "argument": {
          "serviceAffinity": {
            "labels": ["city"]
          }
        }
      }
    ],
    "priorities": [
      {
        "name":"BuildingSpread",
        "weight" : 1,
        "argument": {
          "serviceAntiAffinity": {
            "label": "building"
          }
        }
      },
      {
        "name":"RoomSpread",
        "weight" : 1,
        "argument": {
          "serviceAntiAffinity": {
            "label": "room"
          }
        }
      }
    ]
  }

The following example defines a policy to only use nodes with the ‘region’ label defined and prefer nodes with the ‘zone’ label defined:

  {
    "kind": "Policy",
    "apiVersion": "v1",
    "predicates": [
      {
        "name": "RequireRegion",
        "argument": {
          "labelsPresence": {
            "labels": ["region"],
            "presence": true
          }
        }
      }
    ],
    "priorities": [
      {
        "name":"ZonePreferred",
        "weight" : 1,
        "argument": {
          "labelPreference": {
            "label": "zone",
            "presence": true
          }
        }
      }
    ]
  }

The following example combines both static and configurable predicates and also priorities:

  {
    "kind": "Policy",
    "apiVersion": "v1",
    "predicates": [
      {
        "name": "RegionAffinity",
        "argument": {
          "serviceAffinity": {
            "labels": ["region"]
          }
        }
      },
      {
        "name": "RequireRegion",
        "argument": {
          "labelsPresence": {
            "labels": ["region"],
            "presence": true
          }
        }
      },
      {
        "name": "BuildingNodesAvoid",
        "argument": {
          "labelsPresence": {
            "labels": ["building"],
            "presence": false
          }
        }
      },
      {"name" : "PodFitsPorts"},
      {"name" : "MatchNodeSelector"}
    ],
    "priorities": [
      {
        "name": "ZoneSpread",
        "weight" : 2,
        "argument": {
          "serviceAntiAffinity":{
            "label": "zone"
          }
        }
      },
      {
        "name":"ZonePreferred",
        "weight" : 1,
        "argument": {
          "labelPreference":{
            "label": "zone",
            "presence": true
          }
        }
      },
      {"name" : "ServiceSpreadingPriority", "weight" : 1}
    ]
  }