Using CPU Manager

CPU Manager manages groups of CPUs and constrains workloads to specific CPUs.

CPU Manager is useful for workloads that have some of these attributes:

  • Require as much CPU time as possible.

  • Are sensitive to processor cache misses.

  • Are low-latency network applications.

  • Coordinate with other processes and benefit from sharing a single processor cache.

Setting up CPU Manager

Procedure

  1. Optional: Label a node:

    # oc label node perf-node.example.com cpumanager=true
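
    To confirm that the label was applied, you can list the nodes that carry it (a quick optional check, not part of the original procedure):

    # oc get nodes -l cpumanager=true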
  2. Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled:

    # oc edit machineconfigpool worker
  3. Add a label to the worker machine config pool:

    metadata:
      creationTimestamp: 2020-xx-xxx
      generation: 3
      labels:
        custom-kubelet: cpumanager-enabled
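
    Alternatively, you can apply the same label with a single command instead of editing the pool (equivalent to the edit shown above):

    # oc label machineconfigpool worker custom-kubelet=cpumanager-enabled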
  4. Create a KubeletConfig custom resource (CR), cpumanager-kubeletconfig.yaml. Refer to the label created in the previous step so that the correct nodes are updated with the new kubelet configuration. See the machineConfigPoolSelector section:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: cpumanager-enabled
    spec:
      machineConfigPoolSelector:
        matchLabels:
          custom-kubelet: cpumanager-enabled
      kubeletConfig:
        cpuManagerPolicy: static (1)
        cpuManagerReconcilePeriod: 5s (2)
    (1) Specify a policy:
    • none. This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically.

    • static. This policy allows pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node. If you specify static, you must use a lowercase s.

    (2) Optional: Specify the CPU Manager reconcile frequency. The default is 5s.
  5. Create the dynamic kubelet config:

    # oc create -f cpumanager-kubeletconfig.yaml

    This adds the CPU Manager feature to the kubelet configuration, and the Machine Config Operator (MCO) reboots the node if needed. Enabling CPU Manager does not require a reboot.
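
    If you want to follow the change being rolled out, one option (a suggestion, not part of the original procedure) is to watch the worker machine config pool until it reports that it is updated:

    # oc get machineconfigpool worker --watch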

  6. Check for the merged kubelet config:

    # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7

    Example output

    1. "ownerReferences": [
    2. {
    3. "apiVersion": "machineconfiguration.openshift.io/v1",
    4. "kind": "KubeletConfig",
    5. "name": "cpumanager-enabled",
    6. "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878"
    7. }
    8. ]
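
    The name of the generated machine config in the command above is environment-specific. If you do not know it, you can list the machine configs and filter for the kubelet entry (a simple convenience, not part of the original procedure):

    # oc get machineconfig | grep kubelet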
  7. Check the worker for the updated kubelet.conf:

    # oc debug node/perf-node.example.com
    sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager

    Example output

    cpuManagerPolicy: static (1)
    cpuManagerReconcilePeriod: 5s (1)
    (1) These settings were defined when you created the KubeletConfig CR.
  8. Create a pod that requests one or more cores. Both limits and requests must have their CPU value set to a whole integer; that integer is the number of cores dedicated to this pod:

    # cat cpumanager-pod.yaml

    Example output

    apiVersion: v1
    kind: Pod
    metadata:
      generateName: cpumanager-
    spec:
      containers:
      - name: cpumanager
        image: gcr.io/google_containers/pause-amd64:3.0
        resources:
          requests:
            cpu: 1
            memory: "1G"
          limits:
            cpu: 1
            memory: "1G"
      nodeSelector:
        cpumanager: "true"
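
    The same pattern extends to multiple dedicated cores: keep requests and limits equal and set both to a larger whole integer. The variant below is an illustrative sketch, not part of the original example (the generateName is arbitrary); it asks for two exclusive cores and therefore only fits on a node with at least two whole allocatable cores, so it would not schedule on the two-core example node used here:

    apiVersion: v1
    kind: Pod
    metadata:
      generateName: cpumanager-2core-
    spec:
      containers:
      - name: cpumanager
        image: gcr.io/google_containers/pause-amd64:3.0
        resources:
          requests:
            cpu: 2
            memory: "1G"
          limits:
            cpu: 2
            memory: "1G"
      nodeSelector:
        cpumanager: "true"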
  9. Create the pod:

    # oc create -f cpumanager-pod.yaml
  10. Verify that the pod is scheduled to the node that you labeled:

    # oc describe pod cpumanager

    Example output

    Name:               cpumanager-6cqz7
    Namespace:          default
    Priority:           0
    PriorityClassName:  <none>
    Node:               perf-node.example.com/xxx.xx.xx.xxx
    ...
    Limits:
      cpu:     1
      memory:  1G
    Requests:
      cpu:     1
      memory:  1G
    ...
    QoS Class:       Guaranteed
    Node-Selectors:  cpumanager=true
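
    A shorter way to check just the node assignment (output not shown; offered as an optional check) is the wide output format:

    # oc get pod -o wide | grep cpumanager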
  11. Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process:

    # ├─init.scope
    │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17
    └─kubepods.slice
      ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice
      │ ├─crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope
      │ └─32706 /pause
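
    The tree above is the kind of listing produced by a cgroup viewer; from the node debug shell you could reproduce it with, for example, systemd-cgls. The exact command is not part of the original output, so treat this as one possible way to obtain it:

    sh-4.2# systemd-cgls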

    Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice. Pods of other QoS tiers end up in child cgroups of kubepods:

    # cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope
    # for i in `ls cpuset.cpus tasks` ; do echo -n "$i "; cat $i ; done

    Example output

    cpuset.cpus 1
    tasks 32706
  12. Check the allowed CPU list for the task:

    # grep ^Cpus_allowed_list /proc/32706/status

    Example output

    Cpus_allowed_list:    1
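
    Another way to read the same affinity, assuming the taskset utility is available in the node debug shell, is:

    sh-4.2# taskset -cp 32706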
  13. Verify that another pod (in this case, the pod in the best-effort QoS tier) on the system cannot run on the core allocated for the Guaranteed pod:

    # cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus
    0
    # oc describe node perf-node.example.com

    Example output

    ...
    Capacity:
      attachable-volumes-aws-ebs:  39
      cpu:                         2
      ephemeral-storage:           124768236Ki
      hugepages-1Gi:               0
      hugepages-2Mi:               0
      memory:                      8162900Ki
      pods:                        250
    Allocatable:
      attachable-volumes-aws-ebs:  39
      cpu:                         1500m
      ephemeral-storage:           124768236Ki
      hugepages-1Gi:               0
      hugepages-2Mi:               0
      memory:                      7548500Ki
      pods:                        250
    -------                        ----               ------------  ----------  ---------------  -------------  ---
    default                        cpumanager-6cqz7   1 (66%)       1 (66%)     1G (12%)         1G (12%)       29m
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource   Requests      Limits
      --------   --------      ------
      cpu        1440m (96%)   1 (66%)

    This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount: 2000m capacity minus 500m reserved leaves the 1500m of Allocatable CPU shown above. A whole core is equivalent to 1000 millicores, so you can run only one of the CPU Manager pods; after it takes its core, only 500m remain, which is less than the whole core a second pod needs. If you try to schedule a second pod, the system accepts it, but it is never scheduled:

    NAME               READY   STATUS    RESTARTS   AGE
    cpumanager-6cqz7   1/1     Running   0          33m
    cpumanager-7qc2t   0/1     Pending   0          11s
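
    To confirm why the second pod stays Pending, you can inspect its events (the pod name is the one from the example output above); the scheduler events typically show that no node has enough CPU available:

    # oc describe pod cpumanager-7qc2t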