Evicting pods using the descheduler

While the scheduler is used to determine the most suitable node to host a new pod, the descheduler can be used to evict a running pod so that the pod can be rescheduled onto a more suitable node.

The descheduler is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

About the descheduler

You can use the descheduler to evict pods based on specific strategies so that the pods can be rescheduled onto more appropriate nodes.

You can benefit from descheduling running pods in situations such as the following:

  • Nodes are underutilized or overutilized.

  • Pod and node affinity requirements, such as taints or labels, have changed and the original scheduling decisions are no longer appropriate for certain nodes.

  • Node failure requires pods to be moved.

  • New nodes are added to clusters.

  • Pods have been restarted too many times.

The descheduler does not schedule replacement of evicted pods. The scheduler automatically performs this task for the evicted pods.

When the descheduler decides to evict pods from a node, it employs the following general mechanism:

  • Critical pods with priorityClassName set to system-cluster-critical or system-node-critical are never evicted.

  • Static, mirrored, or stand-alone pods that are not part of a replication controller, replica set, deployment, or job are never evicted because these pods will not be recreated.

  • Pods associated with daemon sets are never evicted.

  • Pods with local storage are never evicted.

  • Best effort pods are evicted before burstable and guaranteed pods.

  • All types of pods with the descheduler.alpha.kubernetes.io/evict annotation are eligible for eviction. This annotation overrides the checks that prevent eviction, and the user can select which pods are evicted. Users should know how, and whether, the pod will be recreated. A minimal example of this annotation follows this list.

  • Pods subject to a pod disruption budget (PDB) are not evicted if descheduling would violate the PDB. The pods are evicted by using the eviction subresource, which honors the PDB.
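
For example, the following sketch shows the annotation set on a pod; the pod name, container name, and image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod                                 # placeholder name
      annotations:
        descheduler.alpha.kubernetes.io/evict: "true"   # marks this pod as evictable by the descheduler
    spec:
      containers:
      - name: app                                       # placeholder container
        image: registry.example.com/app:latest          # placeholder image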

Descheduler strategies

The following descheduler strategies are available:

Low node utilization

The LowNodeUtilization strategy finds nodes that are underutilized and evicts pods, if possible, from other nodes in the hope that recreation of evicted pods will be scheduled on these underutilized nodes.

The underutilization of nodes is determined by several configurable threshold parameters: CPU, memory, and number of pods. If a node’s usage is below the configured thresholds for all of these parameters, the node is considered underutilized.

You can also set a target threshold for CPU, memory, and number of pods. If a node’s usage is above the configured target thresholds for any of the parameters, then the node’s pods might be considered for eviction.

Additionally, you can use the NumberOfNodes parameter to set the strategy to activate only when the number of underutilized nodes is above the configured value. This can be helpful in large clusters where a few nodes might be underutilized frequently or for a short period of time.
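
For example, a LowNodeUtilization entry in the KubeDescheduler custom resource might look like the following sketch. The parameter names match those shown in "Configuring descheduler strategies"; the values are illustrative:

    strategies:
    - name: "LowNodeUtilization"
      params:
      - name: "CPUThreshold"          # underutilization threshold for CPU
        value: "10"
      - name: "MemoryThreshold"       # underutilization threshold for memory
        value: "20"
      - name: "PodsThreshold"         # underutilization threshold for number of pods
        value: "30"
      - name: "CPUTargetThreshold"    # nodes above this CPU usage may have pods evicted
        value: "50"
      - name: "NumberOfNodes"         # activate only when more than 3 nodes are underutilized
        value: "3"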

Duplicate pods

The RemoveDuplicates strategy ensures that only one pod associated with a replica set, replication controller, deployment, or job is running on the same node. If there are more, those duplicate pods are evicted for better spreading of pods in a cluster.

This situation could occur after a node failure, when a pod is moved to another node, leading to more than one pod associated with a replica set, replication controller, deployment, or job on that node. After the failed node is ready again, this strategy evicts the duplicate pod.

This strategy has an optional parameter, ExcludeOwnerKinds, that allows you to specify a list of Kind types. If a pod has any of these types listed as an OwnerRef, that pod is not considered for eviction.
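
For example, the following sketch (using the format shown in "Configuring descheduler strategies") excludes pods owned by a ReplicaSet from this strategy:

    strategies:
    - name: "RemoveDuplicates"
      params:
      - name: "ExcludeOwnerKinds"     # pods with a ReplicaSet OwnerRef are not considered
        value: "ReplicaSet"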

Violation of inter-pod anti-affinity

The RemovePodsViolatingInterPodAntiAffinity strategy ensures that pods violating inter-pod anti-affinity are removed from nodes.

This situation could occur when anti-affinity rules are created for pods that are already running on the same node.

Violation of node affinity

The RemovePodsViolatingNodeAffinity strategy ensures that pods violating node affinity are removed from nodes.

This situation could occur if a node no longer satisfies a pod’s affinity rule. If another node is available that satisfies the affinity rule, then the pod is evicted.

Violation of node taints

The RemovePodsViolatingNodeTaints strategy ensures that pods violating NoSchedule taints on nodes are removed.

This situation could occur if a pod is set to tolerate a taint key=value:NoSchedule and is running on a tainted node. If the node’s taint is updated or removed, the pod’s tolerations no longer match and the pod is evicted.
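
For example, such a pod might declare a standard Kubernetes toleration like the following; the key and value are placeholders. If the node’s taint changes so that this toleration no longer matches, the strategy evicts the pod:

    spec:
      tolerations:
      - key: "key"              # placeholder taint key
        operator: "Equal"
        value: "value"          # placeholder taint value
        effect: "NoSchedule"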

Too many restarts

The RemovePodsHavingTooManyRestarts strategy ensures that pods that have been restarted too many times are removed from nodes.

This situation could occur if a pod is scheduled on a node that is unable to start it. For example, if the node is having network issues and is unable to mount a networked persistent volume, then the pod should be evicted so that it can be scheduled on another node. Another example is if the pod is crashlooping.

This strategy has two configurable parameters: PodRestartThreshold and IncludingInitContainers. If a pod has been restarted more times than the configured PodRestartThreshold value, the pod is evicted. You can use the IncludingInitContainers parameter to specify whether restarts of init containers should be counted toward the PodRestartThreshold value.
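
For example (using the format shown in "Configuring descheduler strategies"):

    strategies:
    - name: "RemovePodsHavingTooManyRestarts"
      params:
      - name: "PodRestartThreshold"       # evict pods restarted more than 10 times
        value: "10"
      - name: "IncludingInitContainers"   # do not count init container restarts
        value: "false"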

Pod lifetime

The PodLifeTime strategy evicts pods that are too old.

After a pod reaches the age, in seconds, set by the MaxPodLifeTimeSeconds parameter, it is evicted.
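
For example, the following sketch (using the format shown in "Configuring descheduler strategies") evicts pods older than 24 hours:

    strategies:
    - name: "PodLifeTime"
      params:
      - name: "MaxPodLifeTimeSeconds"     # 86400 seconds = 24 hours
        value: "86400"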

Installing the descheduler

The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub. After the Kube Descheduler Operator is installed, you can then configure the eviction strategies.

Prerequisites

  • Cluster administrator privileges.

  • Access to the OKD web console.

  • Ensure that you have downloaded the pull secret from the Red Hat OpenShift Cluster Manager site as shown in Obtaining the installation program in the installation documentation for your platform.

    If you have the pull secret, add the redhat-operators catalog to the OperatorHub custom resource (CR) as shown in Configuring OKD to use Red Hat Operators.

Procedure

  1. Log in to the OKD web console.

  2. Create the required namespace for the Kube Descheduler Operator.

    1. Navigate to Administration → Namespaces and click Create Namespace.

    2. Enter openshift-kube-descheduler-operator in the Name field and click Create.

  3. Install the Kube Descheduler Operator.

    1. Navigate to Operators → OperatorHub.

    2. Type Kube Descheduler Operator into the filter box.

    3. Select the Kube Descheduler Operator and click Install.

    4. On the Install Operator page, select A specific namespace on the cluster. Select openshift-kube-descheduler-operator from the drop-down menu.

    5. Adjust the values for the Update Channel and Approval Strategy to the desired values.

    6. Click Install.

  4. Create a descheduler instance.

    1. From the Operators → Installed Operators page, click the Kube Descheduler Operator.

    2. Select the Kube Descheduler tab and click Create KubeDescheduler.

    3. Edit the settings as necessary and click Create.

You can now configure the strategies for the descheduler. There are no strategies enabled by default.
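
To verify from the CLI that the descheduler instance exists, you can query the resource that the Operator manages; the resource name cluster matches the examples in the following sections:

    $ oc get kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator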

Configuring descheduler strategies

You can configure which strategies the descheduler uses to evict pods.

Prerequisites

  • Cluster administrator privileges.

Procedure

  1. Edit the KubeDescheduler object:

    $ oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator

  2. Specify one or more strategies in the spec.strategies section.

    apiVersion: operator.openshift.io/v1beta1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      deschedulingIntervalSeconds: 3600
      strategies:
      - name: "LowNodeUtilization" (1)
        params:
        - name: "CPUThreshold"
          value: "10"
        - name: "MemoryThreshold"
          value: "20"
        - name: "PodsThreshold"
          value: "30"
        - name: "MemoryTargetThreshold"
          value: "40"
        - name: "CPUTargetThreshold"
          value: "50"
        - name: "PodsTargetThreshold"
          value: "60"
        - name: "NumberOfNodes"
          value: "3"
      - name: "RemoveDuplicates" (2)
        params:
        - name: "ExcludeOwnerKinds"
          value: "ReplicaSet"
      - name: "RemovePodsHavingTooManyRestarts" (3)
        params:
        - name: "PodRestartThreshold"
          value: "10"
        - name: "IncludingInitContainers"
          value: "false"
      - name: "RemovePodsViolatingInterPodAntiAffinity" (4)
      - name: "PodLifeTime" (5)
        params:
        - name: "MaxPodLifeTimeSeconds"
          value: "86400"

    (1) The LowNodeUtilization strategy provides additional parameters, such as CPUThreshold and MemoryThreshold, that you can optionally configure.
    (2) The RemoveDuplicates strategy provides an optional parameter, ExcludeOwnerKinds.
    (3) The RemovePodsHavingTooManyRestarts strategy requires the PodRestartThreshold parameter to be set. It also provides the optional IncludingInitContainers parameter.
    (4) The RemovePodsViolatingInterPodAntiAffinity, RemovePodsViolatingNodeAffinity, and RemovePodsViolatingNodeTaints strategies do not have any additional parameters to configure.
    (5) The PodLifeTime strategy requires the MaxPodLifeTimeSeconds parameter to be set.

    You can enable multiple strategies; the order in which the strategies are specified is not important.

  3. Save the file to apply the changes.

Filtering pods by namespace

You can configure whether or not pods are considered for eviction based on their namespace. Only the following descheduler strategies support namespace filtering:

  • PodLifeTime

  • RemovePodsHavingTooManyRestarts

  • RemovePodsViolatingInterPodAntiAffinity

  • RemovePodsViolatingNodeAffinity

  • RemovePodsViolatingNodeTaints

You can use the IncludeNamespaces parameter to specify the namespaces that a descheduler strategy should run on, or you can use the ExcludeNamespaces parameter to specify the namespaces that a descheduler strategy should not run on.

Prerequisites

  • Cluster administrator privileges.

Procedure

  1. Edit the KubeDescheduler object:

    $ oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator

  2. Add either the IncludeNamespaces or ExcludeNamespaces parameter to one or more strategies:

    apiVersion: operator.openshift.io/v1beta1
    kind: KubeDescheduler
    metadata:
    ...
    spec:
      deschedulingIntervalSeconds: 3600
      strategies:
      - name: "RemovePodsHavingTooManyRestarts"
        params:
        - name: "PodRestartThreshold"
          value: "10"
        - name: "IncludingInitContainers"
          value: "false"
        - name: "IncludeNamespaces" (1)
          value: "my-project" (2)
      - name: "PodLifeTime"
        params:
        - name: "MaxPodLifeTimeSeconds"
          value: "86400"
        - name: "ExcludeNamespaces" (1)
          value: "my-other-project" (2)

    (1) You cannot specify both IncludeNamespaces and ExcludeNamespaces for the same strategy.
    (2) Separate multiple namespaces with commas.

  3. Save the file to apply the changes.

Filtering pods by priority

You can configure descheduler strategies to consider pods for eviction only if their priority is lower than a specified priority level. Pods with a priority higher than the specified threshold are not considered for eviction.

You can use the ThresholdPriority parameter to set a numerical priority threshold, or you can use the ThresholdPriorityClassName parameter to specify a certain priority class name.

Prerequisites

  • Cluster administrator privileges.

Procedure

  1. Edit the KubeDescheduler object:

    $ oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator

  2. Add either the ThresholdPriority or ThresholdPriorityClassName parameter to one or more strategies:

    apiVersion: operator.openshift.io/v1beta1
    kind: KubeDescheduler
    metadata:
    ...
    spec:
      deschedulingIntervalSeconds: 3600
      strategies:
      - name: "RemovePodsHavingTooManyRestarts"
        params:
        - name: "PodRestartThreshold"
          value: "10"
        - name: "IncludingInitContainers"
          value: "false"
        - name: "ThresholdPriority" (1)
          value: "10000"
      - name: "PodLifeTime"
        params:
        - name: "MaxPodLifeTimeSeconds"
          value: "86400"
        - name: "ThresholdPriorityClassName" (1)
          value: "my-priority-class-name" (2)

    (1) You cannot specify both ThresholdPriority and ThresholdPriorityClassName for the same strategy.
    (2) The numerical priority value associated with this priority class name is used as the threshold. The priority class must already exist or the descheduler will throw an error.

  3. Save the file to apply the changes.
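
For reference, a priority class such as the one named above is a standard Kubernetes resource and could be defined as follows; the name and value are illustrative:

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: my-priority-class-name
    value: 10000          # this numerical value is used as the eviction threshold
    globalDefault: false
    description: "Priority class referenced by ThresholdPriorityClassName."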

Configuring additional descheduler settings

You can configure additional settings for the descheduler, such as how frequently it runs.

Prerequisites

  • Cluster administrator privileges.

Procedure

  1. Edit the KubeDescheduler object:

    $ oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator

  2. Configure additional settings as necessary:

    apiVersion: operator.openshift.io/v1beta1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      deschedulingIntervalSeconds: 3600 (1)
      flags:
      - --dry-run (2)
      image: quay.io/openshift/origin-descheduler:4.6 (3)
    ...

    (1) Set the number of seconds between descheduler runs. A value of 0 in this field runs the descheduler once and exits.
    (2) Set one or more flags to append to the descheduler pod. The flags must be in a format ready to pass to the binary.
    (3) Set the descheduler container image to deploy.

  3. Save the file to apply the changes.
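
To confirm the settings took effect, you can list the pods in the Operator’s namespace and inspect the descheduler pod logs; the pod name varies, so look it up first:

    $ oc get pods -n openshift-kube-descheduler-operator
    $ oc logs <descheduler-pod-name> -n openshift-kube-descheduler-operator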

Uninstalling the descheduler

You can remove the descheduler from your cluster by removing the descheduler instance and uninstalling the Kube Descheduler Operator. This procedure also cleans up the KubeDescheduler CRD and openshift-kube-descheduler-operator namespace.

Prerequisites

  • Cluster administrator privileges.

  • Access to the OKD web console.

Procedure

  1. Log in to the OKD web console.

  2. Delete the descheduler instance.

    1. From the Operators → Installed Operators page, click Kube Descheduler Operator.

    2. Select the Kube Descheduler tab.

    3. Click the Options menu next to the cluster entry and select Delete KubeDescheduler.

    4. In the confirmation dialog, click Delete.

  3. Uninstall the Kube Descheduler Operator.

    1. Navigate to Operators → Installed Operators.

    2. Click the Options menu next to the Kube Descheduler Operator entry and select Uninstall Operator.

    3. In the confirmation dialog, click Uninstall.

  4. Delete the openshift-kube-descheduler-operator namespace.

    1. Navigate to Administration → Namespaces.

    2. Enter openshift-kube-descheduler-operator into the filter box.

    3. Click the Options menu next to the openshift-kube-descheduler-operator entry and select Delete Namespace.

    4. In the confirmation dialog, enter openshift-kube-descheduler-operator and click Delete.

  5. Delete the KubeDescheduler CRD.

    1. Navigate to Administration → Custom Resource Definitions.

    2. Enter KubeDescheduler into the filter box.

    3. Click the Options menu next to the KubeDescheduler entry and select Delete CustomResourceDefinition.

    4. In the confirmation dialog, click Delete.
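
Alternatively, much of this cleanup can be done from the CLI. The following sketch uses the resource, namespace, and CRD names from this procedure; note that it does not remove the Operator subscription itself, which you can uninstall from the web console as described above:

    $ oc delete kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator
    $ oc delete namespace openshift-kube-descheduler-operator
    $ oc delete crd kubedeschedulers.operator.openshift.io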