Freeing node resources using garbage collection

As an administrator, you can use OKD to ensure that your nodes are running efficiently by freeing up resources through garbage collection.

The OKD node performs two types of garbage collection:

  • Container garbage collection: Removes terminated containers.

  • Image garbage collection: Removes images not referenced by any running pods.

Understanding how terminated containers are removed through garbage collection

Container garbage collection can be performed using eviction thresholds.

When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers are removed as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it removes containers, and their logs are no longer accessible using oc logs.

  • eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period.

  • eviction-hard - A hard eviction threshold has no grace period; if the threshold is observed, OKD takes immediate action.
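Both threshold types are set in a KubeletConfig custom resource, as shown in the sample configuration later in this section. A minimal sketch, assuming an illustrative object name and selector label:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-eviction-example   # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods  # illustrative label
  kubeletConfig:
    evictionSoft:
      memory.available: "500Mi"   # soft threshold ...
    evictionSoftGracePeriod:
      memory.available: "1m30s"   # ... must persist this long before eviction
    evictionHard:
      memory.available: "200Mi"   # hard threshold: immediate action
```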

The following table lists the eviction thresholds:

Table 1. Variables for configuring container garbage collection

Node condition   Eviction signal      Description
MemoryPressure   memory.available     The available memory on the node.
DiskPressure     nodefs.available     The available disk space or inodes on the
                 nodefs.inodesFree    node root file system, nodefs, or image
                 imagefs.available    file system, imagefs.
                 imagefs.inodesFree

If a node is oscillating above and below a soft eviction threshold without exceeding its associated grace period, the corresponding node condition would constantly toggle between true and false. As a consequence, the scheduler could make poor scheduling decisions.

To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OKD must wait before transitioning out of a pressure condition. OKD does not clear the pressure condition back to false until the condition has not been met for the specified period.
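For example, the following KubeletConfig fragment (the object name and selector label are illustrative) holds a pressure condition for five minutes before clearing it:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-eviction-transition  # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods    # illustrative label
  kubeletConfig:
    # Wait 5 minutes before transitioning out of a pressure condition,
    # preventing the condition from rapidly toggling true/false.
    evictionPressureTransitionPeriod: 5m
```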

Understanding how images are removed through garbage collection

Image garbage collection relies on disk usage as reported by cAdvisor on the node to decide which images to remove from the node.

The policy for image garbage collection is based on two conditions:

  • The percent of disk usage, expressed as an integer, that triggers image garbage collection. The default is 85.

  • The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free disk space. The default is 80.
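As a worked example (a sketch for illustration only, not part of OKD), the defaults on a 100 GiB image file system trigger garbage collection at 85 GiB used and delete images until usage falls to 80 GiB:

```python
def image_gc_bounds(capacity_gib: float,
                    high_threshold_pct: int = 85,
                    low_threshold_pct: int = 80) -> tuple[float, float]:
    """Return (trigger_gib, target_gib): garbage collection starts once
    usage exceeds trigger_gib and deletes images until usage falls to
    target_gib."""
    trigger = capacity_gib * high_threshold_pct / 100
    target = capacity_gib * low_threshold_pct / 100
    return trigger, target

trigger, target = image_gc_bounds(100)
print(trigger, target)  # 85.0 80.0 -> garbage collection frees at least 5 GiB
```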

For image garbage collection, you can modify any of the following variables using a custom resource.

Table 2. Variables for configuring image garbage collection
Setting                       Description

imageMinimumGCAge             The minimum age for an unused image before the image is
                              removed by garbage collection. The default is 2m.

imageGCHighThresholdPercent   The percent of disk usage, expressed as an integer, that
                              triggers image garbage collection. The default is 85.

imageGCLowThresholdPercent    The percent of disk usage, expressed as an integer, to
                              which image garbage collection attempts to free disk
                              space. The default is 80.

Two lists of images are retrieved in each garbage collector run:

  1. A list of images currently running in at least one pod.

  2. A list of images available on a host.

As new containers run, new images appear. All images are marked with a time stamp. If an image is running (the first list above) or was newly detected (the second list above), it is marked with the current time. The remaining images keep the time stamps from previous runs. All images are then sorted by time stamp.

Once the collection starts, the oldest images get deleted first until the stopping criterion is met.
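The selection logic described above can be sketched as follows. This is a simplified illustration, not the actual kubelet implementation, and all names are hypothetical:

```python
import time

def collect_images(images: dict, in_use: set, newly_seen: set,
                   usage: float, target: float) -> list:
    """Delete the oldest unused images until usage drops to target.

    images: image name -> {'timestamp': float, 'size': float}
    usage/target: current disk usage and the low-threshold goal.
    Returns the names of the deleted images, oldest first.
    """
    now = time.time()
    # Refresh time stamps for images that are running or newly detected;
    # everything else keeps its stamp from the previous run.
    for name in in_use | newly_seen:
        if name in images:
            images[name]['timestamp'] = now
    deleted = []
    # Oldest images are deleted first until the stopping criterion is met.
    for name, info in sorted(images.items(), key=lambda kv: kv[1]['timestamp']):
        if usage <= target:
            break
        if name in in_use:  # never delete an image a pod is running
            continue
        usage -= info['size']
        deleted.append(name)
    for name in deleted:
        del images[name]
    return deleted
```

Images in use by a pod are never deleted, so a node where every image is in use can remain above the target threshold after a collection run.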

Configuring garbage collection for containers and images

As an administrator, you can configure how OKD performs garbage collection by creating a KubeletConfig object for each machine config pool.

OKD supports only one KubeletConfig object for each machine config pool.

You can configure any combination of the following:

  • Soft eviction for containers

  • Hard eviction for containers

  • Eviction for images

Prerequisites

  1. Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps:

    1. View the machine config pool:

      $ oc describe machineconfigpool <name>

      For example:

      $ oc describe machineconfigpool worker

      Example output

      Name:       worker
      Namespace:
      Labels:     custom-kubelet=small-pods (1)
      1 If a label has been added, it appears under Labels.
    2. If the label is not present, add a key/value pair:

      $ oc label machineconfigpool worker custom-kubelet=small-pods

      You can alternatively apply the following YAML to add the label:

      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfigPool
      metadata:
        labels:
          custom-kubelet: small-pods
        name: worker

Procedure

  1. Create a custom resource (CR) for your configuration change.

    If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are on the same file system, the settings with the highest values trigger evictions, because those thresholds are met first.

    Sample configuration for a container garbage collection CR:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: worker-kubeconfig (1)
    spec:
      machineConfigPoolSelector:
        matchLabels:
          custom-kubelet: small-pods (2)
      kubeletConfig:
        evictionSoft: (3)
          memory.available: "500Mi" (4)
          nodefs.available: "10%"
          nodefs.inodesFree: "5%"
          imagefs.available: "15%"
          imagefs.inodesFree: "10%"
        evictionSoftGracePeriod: (5)
          memory.available: "1m30s"
          nodefs.available: "1m30s"
          nodefs.inodesFree: "1m30s"
          imagefs.available: "1m30s"
          imagefs.inodesFree: "1m30s"
        evictionHard:
          memory.available: "200Mi"
          nodefs.available: "5%"
          nodefs.inodesFree: "4%"
          imagefs.available: "10%"
          imagefs.inodesFree: "5%"
        evictionPressureTransitionPeriod: 0s (6)
        imageMinimumGCAge: 5m (7)
        imageGCHighThresholdPercent: 80 (8)
        imageGCLowThresholdPercent: 75 (9)
    1 Name for the object.
    2 Selector label.
    3 Type of eviction: evictionSoft or evictionHard.
    4 Eviction thresholds based on a specific eviction trigger signal.
    5 Grace periods for the soft eviction. This parameter does not apply to eviction-hard.
    6 The duration to wait before transitioning out of an eviction pressure condition.
    7 The minimum age for an unused image before the image is removed by garbage collection.
    8 The percent of disk usage, expressed as an integer, that triggers image garbage collection.
    9 The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free disk space.
  2. Create the object:

    $ oc create -f <file-name>.yaml

    For example:

    $ oc create -f gc-container.yaml

    Example output

    kubeletconfig.machineconfiguration.openshift.io/gc-container created
  3. Verify that garbage collection is active. The machine config pool you specified in the custom resource appears with UPDATING as true until the change is fully implemented:

    1. $ oc get machineconfigpool

    Example output

    NAME     CONFIG                                   UPDATED   UPDATING
    master   rendered-master-546383f80705bd5aeaba93   True      False
    worker   rendered-worker-b4c51bb33ccaae6fc4a6a5   False     True