Configuring the Linux cgroup version on your nodes

By default, OKD uses Linux control group version 2 (cgroup v2) in your cluster.

cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information, and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2.

You can change between cgroup v1 and cgroup v2, as needed. Enabling cgroup v1 in OKD disables all cgroup v2 controllers and hierarchies in your cluster.

  • If you run third-party monitoring and security agents that depend on the cgroup file system, update the agents to a version that supports cgroup v2.

  • If you have configured cgroup v2 and run cAdvisor as a stand-alone daemon set for monitoring pods and containers, update cAdvisor to v0.43.0 or later.

  • If you deploy Java applications, use versions that fully support cgroup v2, such as the following:

    • OpenJDK / HotSpot: jdk8u372, 11.0.16, 15 and later

    • Node.js 20.3.0 or later

    • IBM Semeru Runtimes: jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later

    • IBM SDK Java Technology Edition Version (IBM Java): 8.0.7.15 and later
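As a quick sanity check (a sketch, not part of the official compatibility list), recent JDKs on Linux can report which cgroup provider they detect; on a cgroup v2 node the "Operating System Metrics" section should name cgroupv2 as the provider:

```shell
# Run inside the Java workload's container to print system settings,
# including the container/cgroup metrics the JVM detected.
java -XshowSettings:system -version
```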

Configuring the Linux cgroup version

You can enable Linux control group version 1 (cgroup v1) or Linux control group version 2 (cgroup v2) by editing the node.config object. The default is cgroup v2.

Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles.

Prerequisites

  • You have a running OKD cluster that uses version 4.12 or later.

  • You are logged in to the cluster as a user with administrative privileges.

Procedure

  1. Enable cgroup v1 or cgroup v2 on nodes:

    1. Edit the node.config object:

      $ oc edit nodes.config/cluster

    2. Edit the spec.cgroupMode parameter:

      Example node.config object

      apiVersion: config.openshift.io/v1
      kind: Node
      metadata:
        annotations:
          include.release.openshift.io/ibm-cloud-managed: "true"
          include.release.openshift.io/self-managed-high-availability: "true"
          include.release.openshift.io/single-node-developer: "true"
          release.openshift.io/create-only: "true"
        creationTimestamp: "2022-07-08T16:02:51Z"
        generation: 1
        name: cluster
        ownerReferences:
        - apiVersion: config.openshift.io/v1
          kind: ClusterVersion
          name: version
          uid: 36282574-bf9f-409e-a6cd-3032939293eb
        resourceVersion: "1865"
        uid: 0c0f7a4c-4307-4187-b591-6155695ac85b
      spec:
        cgroupMode: "v1" (1)
      ...
      1 Specify v1 to enable cgroup v1, or v2 to enable cgroup v2.
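      If you script the change rather than edit the object interactively, the same field can be set with a single merge patch. This is a sketch of an equivalent command, not an additional required step:

      ```shell
      # Set spec.cgroupMode directly on the cluster-scoped node.config object.
      # Use "v2" instead of "v1" to switch back to cgroup v2.
      oc patch nodes.config cluster --type merge --patch '{"spec":{"cgroupMode":"v1"}}'
      ```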

Verification

  1. Check the machine configs to verify that new machine configs were added:

    $ oc get mc

    Example output

    NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
    00-master                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    00-worker                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    01-master-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    01-master-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    01-worker-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    01-worker-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    97-master-generated-kubelet                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    99-worker-generated-kubelet                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    99-master-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    99-master-ssh                                                                                 3.2.0             40m
    99-worker-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    99-worker-ssh                                                                                 3.2.0             40m
    rendered-master-23d4317815a5f854bd3553d689cfe2e9   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             10s (1)
    rendered-master-23e785de7587df95a4b517e0647e5ab7   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    rendered-worker-5d596d9293ca3ea80c896a1191735bb1   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             10s
    1 New machine configs are created, as expected.
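    The rollout can also be followed at the machine config pool level. As a sketch (not part of the documented procedure), watching the pools shows UPDATING as True while nodes drain and reboot, and UPDATED as True once the new rendered config is applied:

    ```shell
    # Watch the master and worker pools until UPDATED returns to True.
    oc get mcp --watch
    ```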
  2. Check that the new kernelArguments were added to the new machine configs:

    $ oc describe mc <name>

    Example output for cgroup v2

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 05-worker-kernelarg-selinuxpermissive
    spec:
      kernelArguments:
      - systemd.unified_cgroup_hierarchy=1 (1)
      - cgroup_no_v1="all" (2)
      - psi=1 (3)
    1 Enables cgroup v2 in systemd.
    2 Disables cgroup v1.
    3 Enables the Linux Pressure Stall Information (PSI) feature.

    Example output for cgroup v1

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 05-worker-kernelarg-selinuxpermissive
    spec:
      kernelArguments:
      - systemd.unified_cgroup_hierarchy=0 (1)
      - systemd.legacy_systemd_cgroup_controller=1 (2)
    1 Disables cgroup v2.
    2 Enables cgroup v1 in systemd.
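    If you prefer to confirm the arguments directly on a node, one hedged sketch is to start a debug session and read the kernel command line the node actually booted with (substitute your own value for the <node_name> placeholder):

    ```shell
    # Print the booted kernel command line of a node; the cgroup-related
    # arguments listed above should appear in the output after the reboot.
    oc debug node/<node_name> -- chroot /host cat /proc/cmdline
    ```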
  3. Check the nodes to verify that scheduling is disabled, which indicates that the change is being applied:

    $ oc get nodes

    Example output

    NAME                                       STATUS                     ROLES    AGE   VERSION
    ci-ln-fm1qnwt-72292-99kt6-master-0         Ready,SchedulingDisabled   master   58m   v1.28.5
    ci-ln-fm1qnwt-72292-99kt6-master-1         Ready                      master   58m   v1.28.5
    ci-ln-fm1qnwt-72292-99kt6-master-2         Ready                      master   58m   v1.28.5
    ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4   Ready,SchedulingDisabled   worker   48m   v1.28.5
    ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd   Ready                      worker   48m   v1.28.5
    ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv   Ready                      worker   48m   v1.28.5
  4. After a node returns to the Ready state, start a debug session for that node:

    $ oc debug node/<node_name>

  5. Set /host as the root directory within the debug shell:

    sh-4.4# chroot /host
  6. Check the type of the file system mounted on /sys/fs/cgroup on your nodes; cgroup2fs indicates cgroup v2 and tmpfs indicates cgroup v1:

    $ stat -c %T -f /sys/fs/cgroup

    Example output for cgroup v2

    cgroup2fs

    Example output for cgroup v1

    tmpfs
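    The two possible outputs map directly to the cgroup version in use. The mapping can be sketched in shell; the fstype value is hard-coded here for illustration, but on a node it would come from the stat command above:

    ```shell
    # Map the filesystem type of /sys/fs/cgroup to the cgroup version in use.
    # On a node: fstype="$(stat -c %T -f /sys/fs/cgroup)"
    fstype="cgroup2fs"   # illustrative value

    case "$fstype" in
      cgroup2fs) echo "cgroup v2" ;;
      tmpfs)     echo "cgroup v1" ;;
      *)         echo "unknown: $fstype" ;;
    esac
    ```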

Additional resources