Creating a performance profile

Learn about the Performance Profile Creator (PPC) and how you can use it to create a performance profile.

Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles.

About the Performance Profile Creator

The Performance Profile Creator (PPC) is a command-line tool, delivered with the Node Tuning Operator, used to create the performance profile. The tool consumes must-gather data from the cluster and several user-supplied profile arguments. The PPC generates a performance profile that is appropriate for your hardware and topology.

You can run the tool by one of the following methods:

  • Invoking podman

  • Calling a wrapper script

Gathering data about your cluster using the must-gather command

The Performance Profile Creator (PPC) tool requires must-gather data. As a cluster administrator, run the must-gather command to capture information about your cluster.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • The OpenShift CLI (oc) installed.

Procedure

  1. Optional: Verify that a matching machine config pool exists with a label:

    $ oc describe mcp/worker-rt

    Example output

    Name:         worker-rt
    Namespace:
    Labels:       machineconfiguration.openshift.io/role=worker-rt
  2. If a matching label does not exist, add a label for a machine config pool (MCP) that matches the MCP name:

    $ oc label mcp <mcp_name> <mcp_name>=""
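    For example, assuming a custom machine config pool named worker-cnf (the pool name is illustrative; substitute your own), the command and an optional follow-up check might look like this:

    $ oc label mcp worker-cnf worker-cnf=""
    $ oc get mcp worker-cnf --show-labels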
  3. Navigate to the directory where you want to store the must-gather data.

  4. Collect cluster information by running the following command:

    $ oc adm must-gather
  5. Optional: Create a compressed file from the must-gather directory:

    $ tar cvaf must-gather.tar.gz must-gather/

    Compressed output is required if you are running the Performance Profile Creator wrapper script.
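    For example, the collection and compression steps can be combined as follows. This is a sketch; the --dest-dir value and archive name are arbitrary choices:

    $ oc adm must-gather --dest-dir=must-gather    # collect cluster data into ./must-gather
    $ tar cvaf must-gather.tar.gz must-gather/     # compress it for the wrapper script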

Running the Performance Profile Creator using podman

As a cluster administrator, you can run podman and the Performance Profile Creator to create a performance profile.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • A cluster installed on bare-metal hardware.

  • A node with podman and OpenShift CLI (oc) installed.

  • Access to the Node Tuning Operator image.

Procedure

  1. Check the machine config pool:

    $ oc get mcp

    Example output

    NAME         CONFIG                                                 UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master       rendered-master-acd1358917e9f98cbdb599aea622d78b       True      False      False      3              3                   3                     0                      22h
    worker-cnf   rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826   False     True       False      2              1                   1                     0                      22h
  2. Use Podman to authenticate to registry.redhat.io:

    $ podman login registry.redhat.io
    Username: <username>
    Password: <password>
  3. Optional: Display help for the PPC tool:

    $ podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.14 -h

    Example output

    A tool that automates creation of Performance Profiles

    Usage:
      performance-profile-creator [flags]

    Flags:
          --disable-ht                        Disable Hyperthreading
      -h, --help                              help for performance-profile-creator
          --info string                       Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log")
          --mcp-name string                   MCP name corresponding to the target machines (required)
          --must-gather-dir-path string       Must gather directory path (default "must-gather")
          --offlined-cpu-count int            Number of offlined CPUs
          --per-pod-power-management          Enable Per Pod Power Management
          --power-consumption-mode string     The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default "default")
          --profile-name string               Name of the performance profile to be created (default "performance")
          --reserved-cpu-count int            Number of reserved CPUs (required)
          --rt-kernel                         Enable Real Time Kernel (required)
          --split-reserved-cpus-across-numa   Split the Reserved CPUs across NUMA nodes
          --topology-manager-policy string    Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted")
          --user-level-networking             Run with User level Networking(DPDK) enabled
  4. Run the Performance Profile Creator tool in discovery mode:

    Discovery mode inspects your cluster using the output from must-gather. The output produced includes information on:

    • The NUMA cell partitioning with the allocated CPU ids

    • Whether hyperthreading is enabled

    Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool.

    $ podman run --entrypoint performance-profile-creator -v <path_to_must-gather>/must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.14 --info log --must-gather-dir-path /must-gather

    This command uses the Performance Profile Creator as a new entry point to podman. It maps the must-gather data from the host into the container image and invokes the required user-supplied profile arguments to produce the my-performance-profile.yaml file.

    The -v option can be the path to either:

    • The must-gather output directory

    • An existing directory containing the must-gather decompressed tarball

    The --info option requires a value that specifies the output format. Possible values are log and json. The json format is reserved for debugging.
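    For example, if you previously decompressed the must-gather tarball into a local directory, you can mount that directory instead of the original output directory. The /tmp/must-gather-extracted path below is an arbitrary example:

    $ podman run --entrypoint performance-profile-creator -v /tmp/must-gather-extracted:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.14 --info log --must-gather-dir-path /must-gather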

  5. Run podman:

    $ podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.14 --mcp-name=worker-cnf --reserved-cpu-count=4 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=6 > my-performance-profile.yaml

    The Performance Profile Creator arguments are shown in the Performance Profile Creator arguments table. The following arguments are required:

    • reserved-cpu-count

    • mcp-name

    • rt-kernel

    The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp. For single-node OpenShift, use --mcp-name=master.
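    For example, on single-node OpenShift the same invocation might look like the following. The CPU counts are illustrative only; derive the real values from discovery mode:

    $ podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.14 --mcp-name=master --reserved-cpu-count=4 --rt-kernel=true --must-gather-dir-path /must-gather > my-performance-profile.yaml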

  6. Review the created YAML file:

    $ cat my-performance-profile.yaml

    Example output

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: performance
    spec:
      cpu:
        isolated: 2-39,48-79
        offlined: 42-47
        reserved: 0-1,40-41
      machineConfigPoolSelector:
        machineconfiguration.openshift.io/role: worker-cnf
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""
      numa:
        topologyPolicy: restricted
      realTimeKernel:
        enabled: true
      workloadHints:
        highPowerConsumption: true
        realTime: true
  7. Apply the generated profile:

    $ oc apply -f my-performance-profile.yaml
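    After applying the profile, you can optionally confirm that it exists and watch the machine config pool roll out. The pool name worker-cnf is an example:

    $ oc get performanceprofile
    $ oc wait mcp/worker-cnf --for=condition=Updated --timeout=30m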

How to run podman to create a performance profile

The following example illustrates how to run podman to create a performance profile with 20 reserved CPUs that are to be split across the NUMA nodes.

Node hardware configuration:

  • 80 CPUs

  • Hyperthreading enabled

  • Two NUMA nodes

  • Even-numbered CPUs run on NUMA node 0 and odd-numbered CPUs run on NUMA node 1

Run podman to create the performance profile:

  $ podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.14 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml

The created profile is described in the following YAML:

  apiVersion: performance.openshift.io/v2
  kind: PerformanceProfile
  metadata:
    name: performance
  spec:
    cpu:
      isolated: 10-39,50-79
      reserved: 0-9,40-49
    nodeSelector:
      node-role.kubernetes.io/worker-cnf: ""
    numa:
      topologyPolicy: restricted
    realTimeKernel:
      enabled: true

In this case, 10 CPUs are reserved on NUMA node 0 and 10 are reserved on NUMA node 1.
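To check how CPUs map to NUMA nodes on a target node before choosing values such as reserved-cpu-count, you can inspect the node directly. The following is a quick, optional check and assumes a node named worker-1:

  $ oc debug node/worker-1 -- chroot /host lscpu -e=CPU,NODE,CORE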

Running the Performance Profile Creator wrapper script

The performance profile wrapper script simplifies the running of the Performance Profile Creator (PPC) tool. It hides the complexities associated with running podman and specifying the mapping directories, and it enables the creation of the performance profile.

Prerequisites

  • Access to the Node Tuning Operator image.

  • Access to the must-gather tarball.

Procedure

  1. Create a file on your local machine named, for example, run-perf-profile-creator.sh:

    $ vi run-perf-profile-creator.sh
  2. Paste the following code into the file:

    #!/bin/bash

    readonly CONTAINER_RUNTIME=${CONTAINER_RUNTIME:-podman}
    readonly CURRENT_SCRIPT=$(basename "$0")
    readonly CMD="${CONTAINER_RUNTIME} run --entrypoint performance-profile-creator"
    readonly IMG_EXISTS_CMD="${CONTAINER_RUNTIME} image exists"
    readonly IMG_PULL_CMD="${CONTAINER_RUNTIME} image pull"
    readonly MUST_GATHER_VOL="/must-gather"

    NTO_IMG="registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.14"
    MG_TARBALL=""
    DATA_DIR=""

    usage() {
      print "Wrapper usage:"
      print "  ${CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]"
      print ""
      print "Options:"
      print "   -h                 help for ${CURRENT_SCRIPT}"
      print "   -p                 Node Tuning Operator image"
      print "   -t                 path to a must-gather tarball"
      ${IMG_EXISTS_CMD} "${NTO_IMG}" && ${CMD} "${NTO_IMG}" -h
    }

    function cleanup {
      [ -d "${DATA_DIR}" ] && rm -rf "${DATA_DIR}"
    }
    trap cleanup EXIT

    exit_error() {
      print "error: $*"
      usage
      exit 1
    }

    print() {
      echo "$*" >&2
    }

    check_requirements() {
      ${IMG_EXISTS_CMD} "${NTO_IMG}" || ${IMG_PULL_CMD} "${NTO_IMG}" || \
          exit_error "Node Tuning Operator image not found"

      [ -n "${MG_TARBALL}" ] || exit_error "Must-gather tarball file path is mandatory"
      [ -f "${MG_TARBALL}" ] || exit_error "Must-gather tarball file not found"

      DATA_DIR=$(mktemp -d -t "${CURRENT_SCRIPT}XXXX") || exit_error "Cannot create the data directory"
      tar -zxf "${MG_TARBALL}" --directory "${DATA_DIR}" || exit_error "Cannot decompress the must-gather tarball"
      chmod a+rx "${DATA_DIR}"

      return 0
    }

    main() {
      while getopts ':hp:t:' OPT; do
        case "${OPT}" in
          h)
            usage
            exit 0
            ;;
          p)
            NTO_IMG="${OPTARG}"
            ;;
          t)
            MG_TARBALL="${OPTARG}"
            ;;
          ?)
            exit_error "invalid argument: ${OPTARG}"
            ;;
        esac
      done

      shift $((OPTIND - 1))

      check_requirements || exit 1

      ${CMD} -v "${DATA_DIR}:${MUST_GATHER_VOL}:z" "${NTO_IMG}" "$@" --must-gather-dir-path "${MUST_GATHER_VOL}"
      echo "" 1>&2
    }

    main "$@"
  3. Add execute permissions for everyone on this script:

    $ chmod a+x run-perf-profile-creator.sh
  4. Optional: Display the run-perf-profile-creator.sh command usage:

    $ ./run-perf-profile-creator.sh -h

    Expected output

    Wrapper usage:
      run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags]

    Options:
       -h                 help for run-perf-profile-creator.sh
       -p                 Node Tuning Operator image (1)
       -t                 path to a must-gather tarball (2)

    A tool that automates creation of Performance Profiles

    Usage:
      performance-profile-creator [flags]

    Flags:
          --disable-ht                        Disable Hyperthreading
      -h, --help                              help for performance-profile-creator
          --info string                       Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log")
          --mcp-name string                   MCP name corresponding to the target machines (required)
          --must-gather-dir-path string       Must gather directory path (default "must-gather")
          --offlined-cpu-count int            Number of offlined CPUs
          --per-pod-power-management          Enable Per Pod Power Management
          --power-consumption-mode string     The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default "default")
          --profile-name string               Name of the performance profile to be created (default "performance")
          --reserved-cpu-count int            Number of reserved CPUs (required)
          --rt-kernel                         Enable Real Time Kernel (required)
          --split-reserved-cpus-across-numa   Split the Reserved CPUs across NUMA nodes
          --topology-manager-policy string    Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted")
          --user-level-networking             Run with User level Networking(DPDK) enabled

    There are two types of arguments:

    • Wrapper arguments, namely -h, -p, and -t

    • PPC arguments

    (1) Optional: Specify the Node Tuning Operator image. If not set, the default upstream image is used: registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.14.
    (2) -t is a required wrapper script argument and specifies the path to a must-gather tarball.
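    For example, to run the wrapper with an explicitly specified image rather than the default, you might invoke it as follows (the tarball path is illustrative):

    $ ./run-perf-profile-creator.sh -p registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.14 -t /must-gather/must-gather.tar.gz -- --info=log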
  5. Run the performance profile creator tool in discovery mode:

    Discovery mode inspects your cluster using the output from must-gather. The output produced includes information on:

    • The NUMA cell partitioning with the allocated CPU IDs

    • Whether hyperthreading is enabled

    Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool.

    $ ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=log

    The --info option requires a value that specifies the output format. Possible values are log and json. The json format is reserved for debugging.
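    For example, to produce the machine-readable variant for debugging, you can request json output instead:

    $ ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=json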

  6. Check the machine config pool:

    $ oc get mcp

    Example output

    NAME         CONFIG                                                 UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master       rendered-master-acd1358917e9f98cbdb599aea622d78b       True      False      False      3              3                   3                     0                      22h
    worker-cnf   rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826   False     True       False      2              1                   1                     0                      22h
  7. Create a performance profile:

    $ ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true > my-performance-profile.yaml

    The Performance Profile Creator arguments are shown in the Performance Profile Creator arguments table. The following arguments are required:

    • reserved-cpu-count

    • mcp-name

    • rt-kernel

    The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp. For single-node OpenShift, use --mcp-name=master.
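    A fuller invocation might also set NUMA and power options, as in the following sketch; the flag values are illustrative and should come from your discovery-mode results:

    $ ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=4 --rt-kernel=true --split-reserved-cpus-across-numa=true --power-consumption-mode=low-latency > my-performance-profile.yaml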

  8. Review the created YAML file:

    $ cat my-performance-profile.yaml

    Example output

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: performance
    spec:
      cpu:
        isolated: 1-39,41-79
        reserved: 0,40
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""
      numa:
        topologyPolicy: restricted
      realTimeKernel:
        enabled: false
  9. Apply the generated profile:

    Install the Node Tuning Operator before applying the profile.

    $ oc apply -f my-performance-profile.yaml
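    If you are unsure whether the Node Tuning Operator is available, a quick check such as the following can help; on OpenShift it is managed as the node-tuning cluster operator:

    $ oc get clusteroperator node-tuning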

Performance Profile Creator arguments

Table 1. Performance Profile Creator arguments
Argument | Description

disable-ht

Disable hyperthreading.

Possible values: true or false.

Default: false.

If this argument is set to true, you should not disable hyperthreading in the BIOS. Disabling hyperthreading is accomplished with a kernel command line argument.

info

This argument captures cluster information and is used in discovery mode only. Discovery mode also requires the must-gather-dir-path argument. If any other arguments are set, they are ignored.

Possible values:

  • log

  • json

    These options define the output format, with the json format being reserved for debugging.

Default: log.

mcp-name

MCP name, for example worker-cnf, corresponding to the target machines. This parameter is required.

must-gather-dir-path

Must gather directory path. This parameter is required.

When you run the tool with the wrapper script, must-gather is supplied by the script itself and you must not specify it.

offlined-cpu-count

Number of offlined CPUs.

This must be a natural number greater than 0. If not enough logical processors are offlined, then error messages are logged. The messages are:

  Error: failed to compute the reserved and isolated CPUs: please ensure that reserved-cpu-count plus offlined-cpu-count should be in the range [0,1]

  Error: failed to compute the reserved and isolated CPUs: please specify the offlined CPU count in the range [0,1]

power-consumption-mode

The power consumption mode.

Possible values:

  • default: CPU partitioning with power management enabled and basic low latency.

  • low-latency: Enhanced measures to improve latency figures.

  • ultra-low-latency: Priority given to optimal latency, at the expense of power management.

Default: default.

per-pod-power-management

Enable per pod power management. You cannot use this argument if you configured ultra-low-latency as the power consumption mode.

Possible values: true or false.

Default: false.

profile-name

Name of the performance profile to create. Default: performance.

reserved-cpu-count

Number of reserved CPUs. This parameter is required.

This must be a natural number. A value of 0 is not allowed.

rt-kernel

Enable real-time kernel. This parameter is required.

Possible values: true or false.

split-reserved-cpus-across-numa

Split the reserved CPUs across NUMA nodes.

Possible values: true or false.

Default: false.

topology-manager-policy

Kubelet Topology Manager policy of the performance profile to be created.

Possible values:

  • single-numa-node

  • best-effort

  • restricted

Default: restricted.

user-level-networking

Run with user level networking (DPDK) enabled.

Possible values: true or false.

Default: false.
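
The following sketch shows how several of these arguments combine in a single podman invocation; all values are illustrative:

  $ podman run --entrypoint performance-profile-creator -v <path_to_must-gather>/must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.14 --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true --disable-ht=true --topology-manager-policy=single-numa-node --power-consumption-mode=low-latency --must-gather-dir-path /must-gather > my-performance-profile.yaml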

Reference performance profiles

A performance profile template for clusters that use OVS-DPDK on OpenStack

To maximize machine performance in a cluster that uses Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on OpenStack, you can use a performance profile.

You can use the following performance profile template to create a profile for your deployment.

A performance profile template for clusters that use OVS-DPDK

  apiVersion: performance.openshift.io/v2
  kind: PerformanceProfile
  metadata:
    name: cnf-performanceprofile
  spec:
    additionalKernelArgs:
      - nmi_watchdog=0
      - audit=0
      - mce=off
      - processor.max_cstate=1
      - idle=poll
      - intel_idle.max_cstate=0
      - default_hugepagesz=1GB
      - hugepagesz=1G
      - intel_iommu=on
    cpu:
      isolated: <CPU_ISOLATED>
      reserved: <CPU_RESERVED>
    hugepages:
      defaultHugepagesSize: 1G
      pages:
        - count: <HUGEPAGES_COUNT>
          node: 0
          size: 1G
    nodeSelector:
      node-role.kubernetes.io/worker: ''
    realTimeKernel:
      enabled: false
    globallyDisableIrqLoadBalancing: true

Insert values that are appropriate for your configuration for the CPU_ISOLATED, CPU_RESERVED, and HUGEPAGES_COUNT keys.
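
One way to fill in the placeholders is with a simple substitution before applying the profile. The file names and the substituted values below are examples only, not recommendations for your hardware:

  $ sed -e 's/<CPU_ISOLATED>/2-19,22-39/' \
        -e 's/<CPU_RESERVED>/0-1,20-21/' \
        -e 's/<HUGEPAGES_COUNT>/32/' \
        ovs-dpdk-performance-profile-template.yaml > ovs-dpdk-performance-profile.yaml
  $ oc apply -f ovs-dpdk-performance-profile.yaml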

To learn how to create and use performance profiles, see the “Creating a performance profile” page in the “Scalability and performance” section of the OKD documentation.

Additional resources