Using the Node Tuning Operator

Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the TuneD daemon.

About the Node Tuning Operator

The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning for specific user needs.

The Operator manages the containerized TuneD daemon for OKD as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node.
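To see the TuneD pods and the node each one runs on, you can list the pods in the Operator's namespace with wide output (the Operator pod itself also appears in the listing):

$ oc get pods -n openshift-cluster-node-tuning-operator -o wide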

Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal.

The Node Tuning Operator is part of a standard OKD installation in version 4.1 and later.

Accessing an example Node Tuning Operator specification

Use this procedure to access an example Node Tuning Operator specification.

Procedure

  1. Run the following command:

     $ oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator

The default CR is meant for delivering standard node-level tuning for the OKD platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OKD nodes based on node or pod labels and profile priorities.
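To review the Tuned CRs currently present in the cluster, both the default CR and any custom CRs you have created, list them in the Operator's namespace:

$ oc get tuned -n openshift-cluster-node-tuning-operator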

While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is strongly discouraged, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, the functionality is enabled at that time. The pod label functionality might be deprecated in future versions of the Node Tuning Operator.

Default profiles set on a cluster

The following are the default profiles set on a cluster.

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: default
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: "openshift"
    data: |
      [main]
      summary=Optimize systems running OpenShift (parent profile)
      include=${f:virt_check:virtual-guest:throughput-performance}
      [selinux]
      avc_cache_threshold=8192
      [net]
      nf_conntrack_hashsize=131072
      [sysctl]
      net.ipv4.ip_forward=1
      kernel.pid_max=>4194304
      net.netfilter.nf_conntrack_max=1048576
      net.ipv4.conf.all.arp_announce=2
      net.ipv4.neigh.default.gc_thresh1=8192
      net.ipv4.neigh.default.gc_thresh2=32768
      net.ipv4.neigh.default.gc_thresh3=65536
      net.ipv6.neigh.default.gc_thresh1=8192
      net.ipv6.neigh.default.gc_thresh2=32768
      net.ipv6.neigh.default.gc_thresh3=65536
      vm.max_map_count=262144
      [sysfs]
      /sys/module/nvme_core/parameters/io_timeout=4294967295
      /sys/module/nvme_core/parameters/max_retries=10
  - name: "openshift-control-plane"
    data: |
      [main]
      summary=Optimize systems running OpenShift control plane
      include=openshift
      [sysctl]
      # ktune sysctl settings, maximizing i/o throughput
      #
      # Minimal preemption granularity for CPU-bound tasks:
      # (default: 1 msec * (1 + ilog(ncpus)), units: nanoseconds)
      kernel.sched_min_granularity_ns=10000000
      # The total time the scheduler will consider a migrated process
      # "cache hot" and thus less likely to be re-migrated
      # (system default is 500000, i.e. 0.5 ms)
      kernel.sched_migration_cost_ns=5000000
      # SCHED_OTHER wake-up granularity.
      #
      # Preemption granularity when tasks wake up. Lower the value to
      # improve wake-up latency and throughput for latency critical tasks.
      kernel.sched_wakeup_granularity_ns=4000000
  - name: "openshift-node"
    data: |
      [main]
      summary=Optimize systems running OpenShift nodes
      include=openshift
      [sysctl]
      net.ipv4.tcp_fastopen=3
      fs.inotify.max_user_watches=65536
      fs.inotify.max_user_instances=8192
  recommend:
  - profile: "openshift-control-plane"
    priority: 30
    match:
    - label: "node-role.kubernetes.io/master"
    - label: "node-role.kubernetes.io/infra"
  - profile: "openshift-node"
    priority: 40

Verifying that the TuneD profiles are applied

Starting with OKD 4.8, you no longer need to check the TuneD pod logs to find out which TuneD profiles are applied on cluster nodes. Instead, list the per-node Profile objects created by the Operator:

$ oc get profile -n openshift-cluster-node-tuning-operator

Example output

NAME       TUNED                     APPLIED   DEGRADED   AGE
master-0   openshift-control-plane   True      False      6h33m
master-1   openshift-control-plane   True      False      6h33m
master-2   openshift-control-plane   True      False      6h33m
worker-a   openshift-node            True      False      6h28m
worker-b   openshift-node            True      False      6h28m

  • NAME: Name of the Profile object. There is one Profile object per node and their names match the node names.

  • TUNED: Name of the desired TuneD profile to apply.

  • APPLIED: True if the TuneD daemon applied the desired profile (True/False/Unknown).

  • DEGRADED: True if any errors were reported during application of the TuneD profile (True/False/Unknown).

  • AGE: Time elapsed since the creation of the Profile object.
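If a profile reports DEGRADED as True, inspecting the corresponding Profile object can help pinpoint the error; for example, using the worker-a node from the example output above:

$ oc get profile worker-a -n openshift-cluster-node-tuning-operator -o yaml

The object's status and the TuneD pod logs on that node are the usual places to look for details.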

Custom tuning specification

The custom resource (CR) for the Operator has two major sections. The first section, profile:, is a list of TuneD profiles and their names. The second, recommend:, defines the profile selection logic.

Multiple custom tuning specifications can co-exist as multiple CRs in the Operator’s namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated.
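For illustration, a minimal custom tuning specification combining both sections might look like the following sketch; the CR name, profile name, node label, and sysctl value here are hypothetical examples, not shipped defaults:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: example-custom-tuning                  # hypothetical CR name
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:                                     # first major section: TuneD profiles and their names
  - name: example-profile
    data: |
      [main]
      summary=Hypothetical custom profile
      include=openshift-node
      [sysctl]
      vm.swappiness=10
  recommend:                                   # second major section: profile selection logic
  - match:
    - label: example.com/needs-example-tuning  # hypothetical node label
    priority: 20
    profile: example-profile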

Management state

The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows:

  • Managed: the Operator will update its operands as configuration resources are updated

  • Unmanaged: the Operator will ignore changes to the configuration resources

  • Removed: the Operator will remove its operands and resources the Operator provisioned
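For example, to stop the Operator from updating its operands, you could set the state to Unmanaged in the default Tuned CR; a minimal sketch showing only the relevant field:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: default
  namespace: openshift-cluster-node-tuning-operator
spec:
  managementState: Unmanaged   # Managed (the default behavior), Unmanaged, or Removed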

Profile data

The profile: section lists TuneD profiles and their names.

profile:
- name: tuned_profile_1
  data: |
    # TuneD profile specification
    [main]
    summary=Description of tuned_profile_1 profile
    [sysctl]
    net.ipv4.ip_forward=1
    # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD
# ...
- name: tuned_profile_n
  data: |
    # TuneD profile specification
    [main]
    summary=Description of tuned_profile_n profile
    # tuned_profile_n profile settings

Recommended profiles

The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items that recommend profiles based on selection criteria.

recommend:
<recommend-item-1>
# ...
<recommend-item-n>

The individual items of the list are defined as follows:

- machineConfigLabels: (1)
    <mcLabels> (2)
  match: (3)
    <match> (4)
  priority: <priority> (5)
  profile: <tuned_profile_name> (6)

1. Optional.
2. A dictionary of key/value MachineConfig labels. The keys must be unique.
3. If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set.
4. An optional list.
5. Profile ordering priority. Lower numbers mean higher priority (0 is the highest priority).
6. A TuneD profile to apply on a match. For example, tuned_profile_1.

<match> is an optional list recursively defined as follows:

- label: <label_name> (1)
  value: <label_value> (2)
  type: <label_type> (3)
    <match> (4)

1. Node or pod label name.
2. Optional node or pod label value. If omitted, the presence of <label_name> is enough to match.
3. Optional object type (node or pod). If omitted, node is assumed.
4. An optional <match> list.

If <match> is not omitted, all nested <match> sections must also evaluate to true. Otherwise, false is assumed and the profile with the respective <match> section is not applied or recommended. Therefore, the nesting (child <match> sections) works as a logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true. Therefore, the list acts as a logical OR operator.
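To make the AND/OR semantics concrete, the following sketch uses hypothetical labels and a hypothetical profile name purely for illustration:

- match:                              # items in this top-level list are ORed
  - label: example.com/label-a        # matches if the node has label-a ...
    match:                            # ... AND (nested list) a pod with label-b runs on that node
    - label: example.com/label-b
      type: pod
  - label: example.com/label-c        # OR the node simply has label-c
  priority: 15
  profile: example-profile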

If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name>. This involves finding all machine config pools with a machine config selector that matches <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned to the found machine config pools. To target nodes that have both master and worker roles, you must use the master role.

The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true, the machineConfigLabels item is not considered.
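As an illustration of this short-circuit behavior (again with hypothetical labels and names), the machineConfigLabels entry below is only consulted when the match list evaluates to false:

- match:                                                    # evaluated first
  - label: example.com/some-node-label                      # if this matches, machineConfigLabels is ignored
  machineConfigLabels:                                      # considered only when match evaluates to false
    machineconfiguration.openshift.io/role: "worker-custom"
  priority: 20
  profile: example-profile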

When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool.

Example: node or pod label based matching

- match:
  - label: tuned.openshift.io/elasticsearch
    match:
    - label: node-role.kubernetes.io/master
    - label: node-role.kubernetes.io/infra
    type: pod
  priority: 10
  profile: openshift-control-plane-es
- match:
  - label: node-role.kubernetes.io/master
  - label: node-role.kubernetes.io/infra
  priority: 20
  profile: openshift-control-plane
- priority: 30
  profile: openshift-node

The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority (10) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false. If there is such a pod with the label, in order for the <match> section to evaluate to true, the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra.

If the labels for the profile with priority 10 matched, the openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile (openshift-control-plane) is considered. This profile is applied if the containerized TuneD pod runs on a node with the node-role.kubernetes.io/master or node-role.kubernetes.io/infra label.

Finally, the profile openshift-node has the lowest priority of 30. It lacks the <match> section and, therefore, always matches. It acts as a catch-all that sets the openshift-node profile if no other profile with a higher priority matches on a given node.

Decision workflow

Example: machine config pool based matching

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-node-custom
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Custom OpenShift node profile with an additional kernel parameter
      include=openshift-node
      [bootloader]
      cmdline_openshift_node_custom=+skew_tick=1
    name: openshift-node-custom
  recommend:
  - machineConfigLabels:
      machineconfiguration.openshift.io/role: "worker-custom"
    priority: 20
    profile: openshift-node-custom

To minimize node reboots, label the target nodes with a label that the machine config pool's node selector will match, then create the Tuned CR above, and finally create the custom machine config pool itself.
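A sketch of such a custom machine config pool, assuming the worker-custom role from the example above and a hypothetical node-role.kubernetes.io/worker-custom node label; adjust the names to your environment:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-custom
spec:
  machineConfigSelector:                       # must match the labels on the generated machine config
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values: [worker, worker-custom]
  nodeSelector:                                # selects the nodes you labeled beforehand
    matchLabels:
      node-role.kubernetes.io/worker-custom: ""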

Custom tuning examples

Using TuneD profiles from the default CR

The following CR applies custom node-level tuning for OKD nodes with label tuned.openshift.io/ingress-node-label set to any value.

Example: custom tuning using the openshift-control-plane TuneD profile

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: ingress
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=A custom OpenShift ingress profile
      include=openshift-control-plane
      [sysctl]
      net.ipv4.ip_local_port_range="1024 65535"
      net.ipv4.tcp_tw_reuse=1
    name: openshift-ingress
  recommend:
  - match:
    - label: tuned.openshift.io/ingress-node-label
    priority: 10
    profile: openshift-ingress

Custom profile writers are strongly encouraged to include the default TuneD daemon profiles shipped within the default Tuned CR. The example above uses the default openshift-control-plane profile to accomplish this.

Using built-in TuneD profiles

After a successful rollout of the NTO-managed daemon set, all TuneD operands manage the same version of the TuneD daemon. To list the built-in TuneD profiles supported by the daemon, query any TuneD pod in the following way:

$ oc exec $tuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/ -name tuned.conf -printf '%h\n' | sed 's|^.*/||'

You can use the profile names retrieved by this command in your custom tuning specification.
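The command above assumes that $tuned_pod holds the name of one of the TuneD pods; one way to set it, assuming the pod names start with tuned-, is:

$ tuned_pod=$(oc get pods -n openshift-cluster-node-tuning-operator -o name | grep tuned- | head -n 1)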

Example: using built-in hpc-compute TuneD profile

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-node-hpc-compute
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Custom OpenShift node profile for HPC compute workloads
      include=openshift-node,hpc-compute
    name: openshift-node-hpc-compute
  recommend:
  - match:
    - label: tuned.openshift.io/openshift-node-hpc-compute
    priority: 20
    profile: openshift-node-hpc-compute

In addition to the built-in hpc-compute profile, the example above includes the openshift-node TuneD daemon profile shipped within the default Tuned CR to use OpenShift-specific tuning for compute nodes.

Supported TuneD daemon plug-ins

Excluding the [main] section, the following TuneD plug-ins are supported when using custom profiles defined in the profile: section of the Tuned CR:

  • audio

  • cpu

  • disk

  • eeepc_she

  • modules

  • mounts

  • net

  • scheduler

  • scsi_host

  • selinux

  • sysctl

  • sysfs

  • usb

  • video

  • vm

Some of these plug-ins provide dynamic tuning functionality that is not supported. The following TuneD plug-ins are currently not supported:

  • bootloader

  • script

  • systemd

See Available TuneD Plug-ins and Getting Started with TuneD for more information.