Performance Addon Operator for low latency nodes

Understanding low latency

The emergence of Edge computing in the area of Telco / 5G plays a key role in reducing latency and congestion problems and improving application performance.

Simply put, latency is the time it takes for data (packets) to travel from the sender to the receiver and back again after processing by the receiver. Maintaining a network architecture with the lowest possible latency is key to meeting the network performance requirements of 5G. Compared to 4G technology, with an average latency of 50 ms, 5G is targeted to reach latency of 1 ms or less. This reduction in latency boosts wireless throughput by a factor of 10.

Many of the deployed applications in the Telco space require low latency that can only tolerate zero packet loss. Tuning for zero packet loss helps mitigate the inherent issues that degrade network performance. For more information, see Tuning for Zero Packet Loss in Red Hat OpenStack Platform (RHOSP).

The Edge computing initiative also comes into play for reducing latency rates. Think of it as literally being on the edge of the cloud and closer to the user. This greatly reduces the distance between the user and distant data centers, resulting in reduced application response times and performance latency.

Administrators must be able to manage their many Edge sites and local services in a centralized way so that all of the deployments can run at the lowest possible management cost. They also need an easy way to deploy and configure certain nodes of their cluster for real-time low latency and high-performance purposes. Low latency nodes are useful for applications such as Cloud-native Network Functions (CNF) and Data Plane Development Kit (DPDK).

OKD currently provides mechanisms to tune software on an OKD cluster for real-time running and low latency (around <20 microseconds reaction time). This includes tuning the kernel and OKD settings, installing a kernel, and reconfiguring the machine. However, this method requires setting up four different Operators and performing many configurations that, when done manually, are complex and prone to mistakes.

OKD provides a Performance Addon Operator to implement automatic tuning to achieve low latency performance for OpenShift applications. The cluster administrator uses a performance profile configuration, which makes it easier to make these changes in a more reliable way. The administrator can specify whether to update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolate CPUs for application containers to run the workloads.

About hyperthreading for low latency and real-time applications

Hyperthreading is an Intel processor technology that allows a physical CPU processor core to function as two logical cores, executing two independent threads simultaneously. Hyperthreading allows for better system throughput for certain workload types where parallel processing is beneficial. The default OKD configuration expects hyperthreading to be enabled.

For telecommunications applications, it is important to design your application infrastructure to minimize latency as much as possible. Hyperthreading can slow performance times and negatively affect throughput for compute intensive workloads that require low latency. Disabling hyperthreading ensures predictable performance and can decrease processing times for these workloads.

Hyperthreading implementation and configuration differs depending on the hardware you are running OKD on. Consult the relevant host hardware tuning information for more details of the hyperthreading implementation specific to that hardware. Disabling hyperthreading can increase the cost per core of the cluster.
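
To check whether hyperthreading (SMT) is currently active on a host, you can query the kernel's SMT control interface. This is a minimal sketch; the sysfs path shown is standard on recent kernels, and the node name is a placeholder:

  $ oc debug node/<node_name> -- chroot /host cat /sys/devices/system/cpu/smt/active

A value of 1 indicates that SMT is active; 0 indicates that it is disabled.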

Installing the Performance Addon Operator

The Performance Addon Operator provides the ability to enable advanced node performance tunings on a set of nodes. As a cluster administrator, you can install the Performance Addon Operator by using the OKD CLI or the web console.

Installing the Operator using the CLI

As a cluster administrator, you can install the Operator using the CLI.

Prerequisites

  • A cluster installed on bare-metal hardware.

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a namespace for the Performance Addon Operator by completing the following actions:

    1. Create the following Namespace Custom Resource (CR) that defines the openshift-performance-addon-operator namespace, and then save the YAML in the pao-namespace.yaml file:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-performance-addon-operator
        annotations:
          workload.openshift.io/allowed: management
    2. Create the namespace by running the following command:

      $ oc create -f pao-namespace.yaml
  2. Install the Performance Addon Operator in the namespace you created in the previous step by creating the following objects:

    1. Create the following OperatorGroup CR and save the YAML in the pao-operatorgroup.yaml file:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: openshift-performance-addon-operator
        namespace: openshift-performance-addon-operator
    2. Create the OperatorGroup CR by running the following command:

      $ oc create -f pao-operatorgroup.yaml
    3. Run the following command to get the channel value required for the next step.

      $ oc get packagemanifest performance-addon-operator -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'

      Example output

      4.10
    4. Create the following Subscription CR and save the YAML in the pao-sub.yaml file:

      Example Subscription

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: openshift-performance-addon-operator-subscription
        namespace: openshift-performance-addon-operator
      spec:
        channel: "<channel>" (1)
        name: performance-addon-operator
        source: redhat-operators (2)
        sourceNamespace: openshift-marketplace
      (1) Specify the value that you obtained in the previous step for the .status.defaultChannel parameter.
      (2) You must specify the redhat-operators value.
    5. Create the Subscription object by running the following command:

      $ oc create -f pao-sub.yaml
    6. Change to the openshift-performance-addon-operator project:

      $ oc project openshift-performance-addon-operator
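
As an optional verification after the CLI installation, you can list the ClusterServiceVersion (CSV) in the Operator namespace. This is a hedged check; the exact CSV name varies with the installed version:

  $ oc get csv -n openshift-performance-addon-operator

The installation is complete when the PHASE column for the performance-addon-operator CSV shows Succeeded.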

Installing the Performance Addon Operator using the web console

As a cluster administrator, you can install the Performance Addon Operator using the web console.

You must create the Namespace CR and OperatorGroup CR as mentioned in the previous section.

Procedure

  1. Install the Performance Addon Operator using the OKD web console:

    1. In the OKD web console, click Operators → OperatorHub.

    2. Choose Performance Addon Operator from the list of available Operators, and then click Install.

    3. On the Install Operator page, select All namespaces on the cluster. Then, click Install.

  2. Optional: Verify that the performance-addon-operator installed successfully:

    1. Switch to the Operators → Installed Operators page.

    2. Ensure that Performance Addon Operator is listed in the openshift-performance-addon-operator project with a Status of InstallSucceeded.

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not appear as installed, to troubleshoot further:

      • Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.

      • Go to the Workloads → Pods page and check the logs for pods in the performance-addon-operator project.

Upgrading Performance Addon Operator

You can manually upgrade to the next minor version of Performance Addon Operator and monitor the status of an update by using the web console.

About upgrading Performance Addon Operator

  • You can upgrade to the next minor version of Performance Addon Operator by using the OKD web console to change the channel of your Operator subscription.

  • You can enable automatic z-stream updates during Performance Addon Operator installation.

  • Updates are delivered via the Marketplace Operator, which is deployed during OKD installation. The Marketplace Operator makes external Operators available to your cluster.

  • The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes.

How Performance Addon Operator upgrades affect your cluster

  • Neither the low latency tuning nor huge pages are affected.

  • Updating the Operator should not cause any unexpected reboots.

Upgrading Performance Addon Operator to the next minor version

You can manually upgrade Performance Addon Operator to the next minor version by using the OKD web console to change the channel of your Operator subscription.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

Procedure

  1. Access the web console and navigate to Operators → Installed Operators.

  2. Click Performance Addon Operator to open the Operator details page.

  3. Click the Subscription tab to open the Subscription details page.

  4. In the Update channel pane, click the pencil icon on the right side of the version number to open the Change Subscription update channel window.

  5. Select the next minor version. For example, if you want to upgrade to Performance Addon Operator 4.10, select 4.10.

  6. Click Save.

  7. Check the status of the upgrade by navigating to Operators → Installed Operators. You can also check the status by running the following oc command:

    $ oc get csv -n openshift-performance-addon-operator

Upgrading Performance Addon Operator when previously installed to a specific namespace

If you previously installed the Performance Addon Operator to a specific namespace on the cluster, for example openshift-performance-addon-operator, modify the OperatorGroup object to remove the targetNamespaces entry before upgrading.

Prerequisites

  • Install the OKD CLI (oc).

  • Log in to the OpenShift cluster as a user with cluster-admin privileges.

Procedure

  1. Edit the Performance Addon Operator OperatorGroup CR and remove the spec element that contains the targetNamespaces entry by running the following command:

    $ oc patch operatorgroup -n openshift-performance-addon-operator openshift-performance-addon-operator --type json -p '[{ "op": "remove", "path": "/spec" }]'
  2. Wait until the Operator Lifecycle Manager (OLM) processes the change.

  3. Verify that the OperatorGroup CR change has been successfully applied. Check that the OperatorGroup CR spec element has been removed:

    $ oc describe -n openshift-performance-addon-operator og openshift-performance-addon-operator
  4. Proceed with the Performance Addon Operator upgrade.

Monitoring upgrade status

The best way to monitor Performance Addon Operator upgrade status is to watch the ClusterServiceVersion (CSV) PHASE. You can also monitor the CSV conditions in the web console or by running the oc get csv command.

The PHASE and conditions values are approximations that are based on available information.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • Install the OpenShift CLI (oc).

Procedure

  1. Run the following command:

    $ oc get csv
  2. Review the output, checking the PHASE field. For example:

    VERSION   REPLACES                             PHASE
    4.10.0    performance-addon-operator.v4.10.0   Installing
    4.8.0                                          Replacing
  3. Run get csv again to verify the output:

    # oc get csv

    Example output

    NAME                                 DISPLAY                      VERSION   REPLACES                            PHASE
    performance-addon-operator.v4.10.0   Performance Addon Operator   4.10.0    performance-addon-operator.v4.8.0   Succeeded

Provisioning real-time and low latency workloads

Many industries and organizations need extremely high performance computing and might require low and predictable latency, especially in the financial and telecommunications industries. For these industries, with their unique requirements, OKD provides a Performance Addon Operator to implement automatic tuning to achieve low latency performance and consistent response time for OKD applications.

The cluster administrator can use this performance profile configuration to make these changes in a more reliable way. The administrator can specify whether to update the kernel to kernel-rt (real-time), reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolate CPUs for application containers to run the workloads.

The usage of execution probes in conjunction with applications that require guaranteed CPUs can cause latency spikes. It is recommended to use other probes, such as a properly configured set of network probes, as an alternative.
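
A hedged illustration of this alternative: instead of an exec probe, a liveness probe can be expressed as a TCP socket check so that no extra process is spawned on the guaranteed CPUs. The pod name, container name, and port below are placeholders, not part of the original example:

  apiVersion: v1
  kind: Pod
  metadata:
    name: probe-example
  spec:
    containers:
    - name: app
      image: <image-pull-spec>
      ports:
      - containerPort: 8080
      livenessProbe:
        tcpSocket:
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 30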

Known limitations for real-time

In most deployments, kernel-rt is supported only on worker nodes when you use a standard cluster with three control plane nodes and three worker nodes. Exceptions exist for compact and single-node OKD deployments. For installations on a single node, kernel-rt is supported on the single control plane node.

To fully utilize the real-time mode, the containers must run with elevated privileges. See Set capabilities for a Container for information on granting privileges.

OKD restricts the allowed capabilities, so you might need to create a SecurityContext as well.

This procedure is fully supported with bare metal installations using Fedora CoreOS (FCOS) systems.

Establish the right performance expectations: the real-time kernel is not a panacea. Its objective is consistent, low-latency determinism offering predictable response times. There is some additional kernel overhead associated with the real-time kernel, due primarily to handling hardware interrupts in separately scheduled threads. The increased overhead results in some degradation in overall throughput for some workloads. The exact amount of degradation is very workload dependent, ranging from 0% to 30%. However, it is the cost of determinism.

Provisioning a worker with real-time capabilities

  1. Install Performance Addon Operator to the cluster.

  2. Optional: Add a node to the OKD cluster. See Setting BIOS parameters.

  3. Add the label worker-rt to the worker nodes that require the real-time capability by using the oc command.
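
    For example, assuming a worker named <node_name> (a placeholder), a command such as the following applies the node role label that the worker-rt machine config pool in the next step selects on:

      $ oc label node <node_name> node-role.kubernetes.io/worker-rt=""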

  4. Create a new machine config pool for real-time nodes:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: worker-rt
      labels:
        machineconfiguration.openshift.io/role: worker-rt
    spec:
      machineConfigSelector:
        matchExpressions:
          - {
              key: machineconfiguration.openshift.io/role,
              operator: In,
              values: [worker, worker-rt],
            }
      paused: false
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/worker-rt: ""

    Note that a machine config pool worker-rt is created for a group of nodes that have the worker-rt label.

  5. Add the node to the proper machine config pool by using node role labels.

    You must decide which nodes are configured with real-time workloads. You could configure all of the nodes in the cluster, or a subset of the nodes. The Performance Addon Operator expects all of the targeted nodes to be part of a dedicated machine config pool. If you use all of the nodes, you must point the Performance Addon Operator to the worker node role label. If you use a subset, you must group the nodes into a new machine config pool.

  6. Create the PerformanceProfile with the proper set of housekeeping cores and realTimeKernel: enabled: true.

  7. You must set machineConfigPoolSelector in PerformanceProfile:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: example-performanceprofile
    spec:
      ...
      realTimeKernel:
        enabled: true
      nodeSelector:
        node-role.kubernetes.io/worker-rt: ""
      machineConfigPoolSelector:
        machineconfiguration.openshift.io/role: worker-rt
  8. Verify that a matching machine config pool exists with a label:

    $ oc describe mcp/worker-rt

    Example output

    Name:         worker-rt
    Namespace:
    Labels:       machineconfiguration.openshift.io/role=worker-rt
  9. OKD will start configuring the nodes, which might involve multiple reboots. Wait for the nodes to settle. This can take a long time depending on the specific hardware you use, but 20 minutes per node is expected.

  10. Verify everything is working as expected.

Verifying the real-time kernel installation

Use this command to verify that the real-time kernel is installed:

  $ oc get node -o wide

Note the worker with the role worker-rt that contains the string 4.18.0-305.30.1.rt7.102.el8_4.x86_64 cri-o://1.23.0-99.rhaos4.10.gitc3131de.el8:

  NAME                      STATUS   ROLES              AGE     VERSION   INTERNAL-IP
  EXTERNAL-IP               OS-IMAGE                    KERNEL-VERSION
  CONTAINER-RUNTIME
  rt-worker-0.example.com   Ready    worker,worker-rt   5d17h   v1.23.0
  128.66.135.107   <none>   Red Hat Enterprise Linux CoreOS 46.82.202008252340-0 (Ootpa)
  4.18.0-305.30.1.rt7.102.el8_4.x86_64   cri-o://1.23.0-99.rhaos4.10.gitc3131de.el8
  [...]

Creating a workload that works in real-time

Use the following procedures to prepare a workload that uses real-time capabilities.

Procedure

  1. Create a pod with a QoS class of Guaranteed.

  2. Optional: Disable CPU load balancing for DPDK.

  3. Assign a proper node selector.

When writing your applications, follow the general recommendations described in Application tuning and deployment.

Creating a pod with a QoS class of Guaranteed

Keep the following in mind when you create a pod that is given a QoS class of Guaranteed:

  • Every container in the pod must have a memory limit and a memory request, and they must be the same.

  • Every container in the pod must have a CPU limit and a CPU request, and they must be the same.

The following example shows the configuration file for a pod that has one container. The container has a memory limit and a memory request, both equal to 200 MiB. The container has a CPU limit and a CPU request, both equal to 1 CPU.

  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-demo
    namespace: qos-example
  spec:
    containers:
    - name: qos-demo-ctr
      image: <image-pull-spec>
      resources:
        limits:
          memory: "200Mi"
          cpu: "1"
        requests:
          memory: "200Mi"
          cpu: "1"
  1. Create the pod:

    $ oc apply -f qos-pod.yaml --namespace=qos-example
  2. View detailed information about the pod:

    $ oc get pod qos-demo --namespace=qos-example --output=yaml

    Example output

    spec:
      containers:
        ...
    status:
      qosClass: Guaranteed

    If a container specifies its own memory limit, but does not specify a memory request, OKD automatically assigns a memory request that matches the limit. Similarly, if a container specifies its own CPU limit, but does not specify a CPU request, OKD automatically assigns a CPU request that matches the limit.

Optional: Disabling CPU load balancing for DPDK

Functionality to disable or enable CPU load balancing is implemented on the CRI-O level. CRI-O disables or enables CPU load balancing only when the following requirements are met.

  • The pod must use the performance-<profile-name> runtime class. You can get the proper name by looking at the status of the performance profile, as shown here:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    ...
    status:
      ...
      runtimeClass: performance-manual
  • The pod must have the cpu-load-balancing.crio.io: true annotation.

The Performance Addon Operator is responsible for creating the high-performance runtime handler config snippet on the relevant nodes and for creating the high-performance runtime class in the cluster. The high-performance runtime handler has the same content as the default runtime handler, except that it enables the CPU load balancing configuration functionality.

To disable the CPU load balancing for the pod, the Pod specification must include the following fields:

  apiVersion: v1
  kind: Pod
  metadata:
    ...
    annotations:
      ...
      cpu-load-balancing.crio.io: "disable"
      ...
    ...
  spec:
    ...
    runtimeClassName: performance-<profile_name>
    ...

Only disable CPU load balancing when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the cluster.
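
As a hedged way to confirm that the static CPU manager policy is in effect on a tuned node before relying on this annotation, you can inspect the kubelet configuration on the node. The node name is a placeholder, and the configuration file path shown is typical for OKD nodes but is an assumption here:

  $ oc debug node/<node_name> -- chroot /host grep cpuManagerPolicy /etc/kubernetes/kubelet.conf

The output should report cpuManagerPolicy: static for nodes that are tuned by a performance profile.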

Assigning a proper node selector

The preferred way to assign a pod to nodes is to use the same node selector the performance profile used, as shown here:

  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4. name: example
  5. spec:
  6. # ...
  7. nodeSelector:
  8. node-role.kubernetes.io/worker-rt: ""

For more information, see Placing pods on specific nodes using node selectors.

Scheduling a workload onto a worker with real-time capabilities

Use label selectors that match the nodes attached to the machine config pool that was configured for low latency by the Performance Addon Operator. For more information, see Assigning pods to nodes.

Managing device interrupt processing for guaranteed pod isolated CPUs

The Performance Addon Operator can manage host CPUs by dividing them into reserved CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolated CPUs for application containers to run the workloads. This allows you to set CPUs for low latency workloads as isolated.

Device interrupts are load balanced between all isolated and reserved CPUs to avoid CPUs being overloaded, with the exception of CPUs where there is a guaranteed pod running. Guaranteed pod CPUs are prevented from processing device interrupts when the relevant annotations are set for the pod.

In the performance profile, globallyDisableIrqLoadBalancing is used to manage whether device interrupts are processed or not. For certain workloads the reserved CPUs are not always sufficient for dealing with device interrupts, and for this reason, device interrupts are not globally disabled on the isolated CPUs. By default, Performance Addon Operator does not disable device interrupts on isolated CPUs.

To achieve low latency for workloads, some (but not all) pods require the CPUs they are running on to not process device interrupts. A pod annotation, irq-load-balancing.crio.io, is used to define whether device interrupts are processed or not. When configured, CRI-O disables device interrupts only as long as the pod is running.

Disabling CPU CFS quota

To reduce CPU throttling for individual guaranteed pods, create a pod specification with the annotation cpu-quota.crio.io: "disable". This annotation disables the CPU completely fair scheduler (CFS) quota at the pod run time. The following pod specification contains this annotation:

  apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      cpu-quota.crio.io: "disable"
  spec:
    runtimeClassName: performance-<profile_name>
    ...

Only disable CPU CFS quota when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU CFS quota can affect the performance of other containers in the cluster.

Disabling global device interrupts handling in Performance Addon Operator

To configure Performance Addon Operator to disable global device interrupts for the isolated CPU set, set the globallyDisableIrqLoadBalancing field in the performance profile to true. When true, conflicting pod annotations are ignored. When false, IRQ loads are balanced across all CPUs.

A performance profile snippet illustrates this setting:

  apiVersion: performance.openshift.io/v2
  kind: PerformanceProfile
  metadata:
    name: manual
  spec:
    globallyDisableIrqLoadBalancing: true
  ...

Disabling interrupt processing for individual pods

To disable interrupt processing for individual pods, ensure that globallyDisableIrqLoadBalancing is set to false in the performance profile. Then, in the pod specification, set the irq-load-balancing.crio.io pod annotation to disable. The following pod specification contains this annotation:

  apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      irq-load-balancing.crio.io: "disable"
  spec:
    runtimeClassName: performance-<profile_name>
    ...

Upgrading the performance profile to use device interrupt processing

When you upgrade the Performance Addon Operator performance profile custom resource definition (CRD) from v1 or v1alpha1 to v2, globallyDisableIrqLoadBalancing is set to true on existing profiles.

When globallyDisableIrqLoadBalancing is set to true, device interrupts are processed across all CPUs as long as they don’t belong to a guaranteed pod.

Supported API Versions

The Performance Addon Operator supports v2, v1, and v1alpha1 for the performance profile apiVersion field. The v1 and v1alpha1 APIs are identical. The v2 API includes an optional boolean field globallyDisableIrqLoadBalancing with a default value of false.
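
If you want to confirm which API versions the cluster currently serves for the performance profile, a hedged check is to inspect the CustomResourceDefinition directly:

  $ oc get crd performanceprofiles.performance.openshift.io -o jsonpath='{.spec.versions[*].name}'

The output lists the served versions, for example v1alpha1 v1 v2.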

Upgrading Performance Addon Operator API from v1alpha1 to v1

When upgrading Performance Addon Operator API version from v1alpha1 to v1, the v1alpha1 performance profiles are converted on-the-fly using a “None” Conversion strategy and served to the Performance Addon Operator with API version v1.

Upgrading Performance Addon Operator API from v1alpha1 or v1 to v2

When upgrading from an older Performance Addon Operator API version, the existing v1 and v1alpha1 performance profiles are converted using a conversion webhook that injects the globallyDisableIrqLoadBalancing field with a value of true.

Configuring a node for IRQ dynamic load balancing

To configure a cluster node to handle IRQ dynamic load balancing, do the following:

  1. Log in to the OKD cluster as a user with cluster-admin privileges.

  2. Set the performance profile apiVersion to use performance.openshift.io/v2.

  3. Remove the globallyDisableIrqLoadBalancing field or set it to false.

  4. Set the appropriate isolated and reserved CPUs. The following snippet illustrates a profile that reserves 2 CPUs. IRQ load-balancing is enabled for pods running on the isolated CPU set:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: dynamic-irq-profile
    spec:
      cpu:
        isolated: 2-5
        reserved: 0-1
    ...

    When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs.

  5. Create the pod that uses exclusive CPUs, and set irq-load-balancing.crio.io and cpu-quota.crio.io annotations to disable. For example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dynamic-irq-pod
      annotations:
        irq-load-balancing.crio.io: "disable"
        cpu-quota.crio.io: "disable"
    spec:
      containers:
      - name: dynamic-irq-pod
        image: "registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10"
        command: ["sleep", "10h"]
        resources:
          requests:
            cpu: 2
            memory: "200M"
          limits:
            cpu: 2
            memory: "200M"
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""
      runtimeClassName: performance-dynamic-irq-profile
    ...
  6. Enter the pod runtimeClassName in the form performance-<profile_name>, where <profile_name> is the name from the PerformanceProfile YAML, in this example, performance-dynamic-irq-profile.

  7. Set the node selector to target a cnf-worker.

  8. Ensure the pod is running correctly. Status should be running, and the correct cnf-worker node should be set:

    $ oc get pod -o wide

    Expected output

    NAME              READY   STATUS    RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
    dynamic-irq-pod   1/1     Running   0          5h33m   <ip-address>   <node-name>   <none>           <none>
  9. Get the CPUs that the pod configured for IRQ dynamic load balancing runs on:

    $ oc exec -it dynamic-irq-pod -- /bin/bash -c "grep Cpus_allowed_list /proc/self/status | awk '{print $2}'"

    Expected output

    Cpus_allowed_list: 2-3
  10. Ensure the node configuration is applied correctly. SSH into the node to verify the configuration.

    $ oc debug node/<node-name>

    Expected output

    Starting pod/<node-name>-debug ...
    To use host binaries, run `chroot /host`
    Pod IP: <ip-address>
    If you don't see a command prompt, try pressing enter.
    sh-4.4#
  11. Verify that you can use the node file system:

    sh-4.4# chroot /host

    Expected output

    sh-4.4#
  12. Ensure the default system CPU affinity mask does not include the dynamic-irq-pod CPUs, for example, CPUs 2 and 3.

    $ cat /proc/irq/default_smp_affinity

    Example output

    33

    The hexadecimal mask 33 corresponds to binary 110011, that is, CPUs 0, 1, 4, and 5; CPUs 2 and 3 are not included.
  13. Ensure the system IRQs are not configured to run on the dynamic-irq-pod CPUs:

    find /proc/irq/ -name smp_affinity_list -exec sh -c 'i="$1"; mask=$(cat $i); file=$(echo $i); echo $file: $mask' _ {} \;

    Example output

    /proc/irq/0/smp_affinity_list: 0-5
    /proc/irq/1/smp_affinity_list: 5
    /proc/irq/2/smp_affinity_list: 0-5
    /proc/irq/3/smp_affinity_list: 0-5
    /proc/irq/4/smp_affinity_list: 0
    /proc/irq/5/smp_affinity_list: 0-5
    /proc/irq/6/smp_affinity_list: 0-5
    /proc/irq/7/smp_affinity_list: 0-5
    /proc/irq/8/smp_affinity_list: 4
    /proc/irq/9/smp_affinity_list: 4
    /proc/irq/10/smp_affinity_list: 0-5
    /proc/irq/11/smp_affinity_list: 0
    /proc/irq/12/smp_affinity_list: 1
    /proc/irq/13/smp_affinity_list: 0-5
    /proc/irq/14/smp_affinity_list: 1
    /proc/irq/15/smp_affinity_list: 0
    /proc/irq/24/smp_affinity_list: 1
    /proc/irq/25/smp_affinity_list: 1
    /proc/irq/26/smp_affinity_list: 1
    /proc/irq/27/smp_affinity_list: 5
    /proc/irq/28/smp_affinity_list: 1
    /proc/irq/29/smp_affinity_list: 0
    /proc/irq/30/smp_affinity_list: 0-5

Some IRQ controllers do not support IRQ re-balancing and will always expose all online CPUs as the IRQ mask. These IRQ controllers effectively run on CPU 0. For more information on the host configuration, SSH into the host and run the following, replacing <irq-num> with the IRQ number that you want to query:

  $ cat /proc/irq/<irq-num>/effective_affinity

Configuring hyperthreading for a cluster

To configure hyperthreading for an OKD cluster, set the CPU threads in the performance profile to the same cores that are configured for the reserved or isolated CPU pools.

If you configure a performance profile, and subsequently change the hyperthreading configuration for the host, ensure that you update the CPU isolated and reserved fields in the PerformanceProfile YAML to match the new configuration.

Disabling a previously enabled host hyperthreading configuration can cause the CPU core IDs listed in the PerformanceProfile YAML to be incorrect. This incorrect configuration can cause the node to become unavailable because the listed CPUs can no longer be found.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • Install the OpenShift CLI (oc).

Procedure

  1. Ascertain which threads are running on what CPUs for the host you want to configure.

    You can view which threads are running on the host CPUs by logging in to the cluster and running the following command:

    $ lscpu --all --extended

    Example output

    CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ
    0   0    0      0    0:0:0:0       yes    4800.0000 400.0000
    1   0    0      1    1:1:1:0       yes    4800.0000 400.0000
    2   0    0      2    2:2:2:0       yes    4800.0000 400.0000
    3   0    0      3    3:3:3:0       yes    4800.0000 400.0000
    4   0    0      0    0:0:0:0       yes    4800.0000 400.0000
    5   0    0      1    1:1:1:0       yes    4800.0000 400.0000
    6   0    0      2    2:2:2:0       yes    4800.0000 400.0000
    7   0    0      3    3:3:3:0       yes    4800.0000 400.0000

    In this example, there are eight logical CPU cores running on four physical CPU cores. CPU0 and CPU4 are running on physical Core0, CPU1 and CPU5 are running on physical Core 1, and so on.

    Alternatively, to view the threads that are set for a particular physical CPU core (cpu0 in the example below), open a command prompt and run the following:

    $ cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

    Example output

    0-4
  2. Apply the isolated and reserved CPUs in the PerformanceProfile YAML. For example, you can set logical cores CPU0 and CPU4 as isolated, and logical cores CPU1 to CPU3 and CPU5 to CPU7 as reserved. When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs.

    ...
    cpu:
      isolated: 0,4
      reserved: 1-3,5-7
    ...

    The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node.

Hyperthreading is enabled by default on most Intel processors. If you enable hyperthreading, all threads processed by a particular core must be isolated or processed on the same core.
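
To help map logical CPUs to physical cores when building the isolated and reserved lists, a small loop over sysfs prints each CPU's sibling threads. This is a hedged sketch using standard sysfs paths; run it on the host you are configuring:

  $ for cpu in /sys/devices/system/cpu/cpu[0-9]*; do echo "$(basename $cpu): $(cat $cpu/topology/thread_siblings_list)"; done

Sibling threads that appear in the same list belong to the same physical core and must be placed in the same pool, either isolated or reserved.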

Disabling hyperthreading for low latency applications

When configuring clusters for low latency processing, consider whether you want to disable hyperthreading before you deploy the cluster. To disable hyperthreading, do the following:

  1. Create a performance profile that is appropriate for your hardware and topology.

  2. Set nosmt as an additional kernel argument. The following example performance profile illustrates this setting:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: example-performanceprofile
    spec:
      additionalKernelArgs:
        - nmi_watchdog=0
        - audit=0
        - mce=off
        - processor.max_cstate=1
        - idle=poll
        - intel_idle.max_cstate=0
        - nosmt
      cpu:
        isolated: 2-3
        reserved: 0-1
      hugepages:
        defaultHugepagesSize: 1G
        pages:
          - count: 2
            node: 0
            size: 1G
      nodeSelector:
        node-role.kubernetes.io/performance: ''
      realTimeKernel:
        enabled: true

    When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs.

Tuning nodes for low latency with the performance profile

The performance profile lets you control latency tuning aspects of nodes that belong to a certain machine config pool. After you specify your settings, the PerformanceProfile object is compiled into multiple objects that perform the actual node level tuning:

  • A MachineConfig file that manipulates the nodes.

  • A KubeletConfig file that configures the Topology Manager, the CPU Manager, and the OKD nodes.

  • The Tuned profile that configures the Node Tuning Operator.

You can use a performance profile to specify whether to update the kernel to kernel-rt, to allocate huge pages, and to partition the CPUs for performing housekeeping duties or running workloads.

You can manually create the PerformanceProfile object or use the Performance Profile Creator (PPC) to generate a performance profile. See the additional resources below for more information on the PPC.
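
As a hedged way to see the objects that are generated from a profile after it is applied, you can list the derived MachineConfig, KubeletConfig, and Tuned resources. The Operator typically includes the profile name in these object names, although the exact naming can vary by release:

  $ oc get machineconfig,kubeletconfig,tuned.tuned.openshift.io -A | grep performance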

Sample performance profile

  apiVersion: performance.openshift.io/v2
  kind: PerformanceProfile
  metadata:
    name: performance
  spec:
    cpu:
      isolated: "5-15" (1)
      reserved: "0-4" (2)
    hugepages:
      defaultHugepagesSize: "1G"
      pages:
        - size: "1G"
          count: 16
          node: 0
    realTimeKernel:
      enabled: true (3)
    numa: (4)
      topologyPolicy: "best-effort"
    nodeSelector:
      node-role.kubernetes.io/worker-cnf: "" (5)
(1) Use this field to isolate specific CPUs to use with application containers for workloads.
(2) Use this field to reserve specific CPUs to use with infra containers for housekeeping.
(3) Use this field to install the real-time kernel on the node. Valid values are true or false. Setting the true value installs the real-time kernel.
(4) Use this field to configure the topology manager policy. Valid values are none (default), best-effort, restricted, and single-numa-node. For more information, see Topology Manager Policies.
(5) Use this field to specify a node selector to apply the performance profile to specific nodes.

Additional resources

Configuring huge pages

Nodes must pre-allocate huge pages used in an OKD cluster. Use the Performance Addon Operator to allocate huge pages on a specific node.

OKD provides a method for creating and allocating huge pages. Performance Addon Operator provides an easier method for doing this using the performance profile.

For example, in the hugepages pages section of the performance profile, you can specify multiple blocks of size, count, and, optionally, node:

  hugepages:
    defaultHugepagesSize: "1G"
    pages:
      - size: "1G"
        count: 4
        node: 0 (1)
(1) node is the NUMA node in which the huge pages are allocated. If you omit node, the pages are evenly spread across all NUMA nodes.

Wait for the relevant machine config pool status that indicates the update is finished.
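
As a hedged way to wait for the rollout, you can watch the machine config pool that the profile targets until it reports the Updated condition. The pool name worker-cnf below is a placeholder for whichever pool your profile selects:

  $ oc wait mcp/worker-cnf --for=condition=Updated --timeout=30m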

These are the only configuration steps you need to do to allocate huge pages.

Verification

  • To verify the configuration, see the /proc/meminfo file on the node:

    $ oc debug node/ip-10-0-141-105.ec2.internal
    # grep -i huge /proc/meminfo

    Example output

    AnonHugePages:    ###### ##
    ShmemHugePages:        0 kB
    HugePages_Total:       2
    HugePages_Free:        2
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       #### ##
    Hugetlb:            #### ##
  • Use oc describe to report the new size:

    $ oc describe node worker-0.ocp4poc.example.com | grep -i huge

    Example output

    hugepages-1g=true
    hugepages-###: ###
    hugepages-###: ###

Allocating multiple huge page sizes

You can request huge pages with different sizes under the same container. This allows you to define more complicated pods consisting of containers with different huge page size needs.

For example, you can define sizes 1G and 2M and the Performance Addon Operator will configure both sizes on the node, as shown here:

  spec:
    hugepages:
      defaultHugepagesSize: 1G
      pages:
        - count: 1024
          node: 0
          size: 2M
        - count: 4
          node: 1
          size: 1G
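
To confirm that both sizes were allocated on the intended NUMA nodes, a hedged check is to read the per-node hugepage counters in sysfs on the tuned node. The node name is a placeholder:

  $ oc debug node/<node_name> -- chroot /host cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages

The first value reports the 2M pages on NUMA node 0 and the second reports the 1G pages on NUMA node 1.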

Restricting CPUs for infra and application containers

Generic housekeeping and workload tasks use CPUs in a way that may impact latency-sensitive processes. By default, the container runtime uses all online CPUs to run all containers together, which can result in context switches and spikes in latency. Partitioning the CPUs prevents noisy processes from interfering with latency-sensitive processes by separating them from each other. The following table describes how processes run on a CPU after you have tuned the node using the Performance Addon Operator:

Table 1. Process CPU assignments

Process type                      Details
Burstable and best-effort pods    Runs on any CPU except where low latency workload is running
Infrastructure pods               Runs on any CPU except where low latency workload is running
Interrupts                        Redirects to reserved CPUs (optional in OKD 4.10 and later)
Kernel processes                  Pins to reserved CPUs
Latency-sensitive workload pods   Pins to a specific set of exclusive CPUs from the isolated pool
OS processes/systemd services     Pins to reserved CPUs

The exact partitioning pattern to use depends on many factors like hardware, workload characteristics and the expected system load. Some sample use cases are as follows:

  • If the latency-sensitive workload uses specific hardware, such as a network interface controller (NIC), ensure that the CPUs in the isolated pool are as close as possible to this hardware. At a minimum, you should place the workload in the same Non-Uniform Memory Access (NUMA) node.

  • The reserved pool is used for handling all interrupts. If you depend on system networking, allocate a sufficiently sized reserved pool to handle all the incoming packet interrupts. In 4.10 and later versions, workloads can optionally be labeled as sensitive.

The decision regarding which specific CPUs should be used for reserved and isolated partitions requires detailed analysis and measurements. Factors like NUMA affinity of devices and memory play a role. The selection also depends on the workload architecture and the specific use case.

The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node.

To ensure that housekeeping tasks and workloads do not interfere with each other, specify two groups of CPUs in the spec section of the performance profile.

  • isolated - Specifies the CPUs for the application container workloads. These CPUs have the lowest latency. Processes in this group have no interruptions and can, for example, reach much higher DPDK zero packet loss bandwidth.

  • reserved - Specifies the CPUs for the cluster and operating system housekeeping duties. Threads in the reserved group are often busy. Do not run latency-sensitive applications in the reserved group. Latency-sensitive applications run in the isolated group.

Procedure

  1. Create a performance profile appropriate for the environment’s hardware and topology.

  2. Add the reserved and isolated parameters with the CPUs you want reserved and isolated for the infra and application containers:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: infra-cpus
    spec:
      cpu:
        reserved: "0-4,9" (1)
        isolated: "5-8" (2)
      nodeSelector: (3)
        node-role.kubernetes.io/worker: ""
    (1) Specify which CPUs are for infra containers to perform cluster and operating system housekeeping duties.
    (2) Specify which CPUs are for application containers to run workloads.
    (3) Optional: Specify a node selector to apply the performance profile to specific nodes.

Reducing NIC queues using the Performance Addon Operator

The Performance Addon Operator allows you to adjust the network interface controller (NIC) queue count for each network device by configuring the performance profile. Device network queues allow the distribution of packets among different physical queues, and each queue gets a separate thread for packet processing.

In real-time or low latency systems, all the unnecessary interrupt request lines (IRQs) pinned to the isolated CPUs must be moved to reserved or housekeeping CPUs.

In deployments with applications that require system or OKD networking, or in mixed deployments with Data Plane Development Kit (DPDK) workloads, multiple queues are needed to achieve good throughput, and the number of NIC queues should be adjusted or left unchanged. For example, to achieve low latency, the number of NIC queues for DPDK-based workloads should be reduced to just the number of reserved or housekeeping CPUs.

Too many queues are created by default for each CPU, and these do not fit into the interrupt tables of the housekeeping CPUs when tuning for low latency. Reducing the number of queues makes proper tuning possible: a smaller number of queues means a smaller number of interrupts, which then fit in the IRQ table.

Adjusting the NIC queues with the performance profile

The performance profile lets you adjust the queue count for each network device.

Supported network devices:

  • Non-virtual network devices

  • Network devices that support multiple queues (channels)

Unsupported network devices:

  • Pure software network interfaces

  • Block devices

  • Intel DPDK virtual functions

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • Install the OpenShift CLI (oc).

Procedure

  1. Log in to the OKD cluster running the Performance Addon Operator as a user with cluster-admin privileges.

  2. Create and apply a performance profile appropriate for your hardware and topology. For guidance on creating a profile, see the “Creating a performance profile” section.

  3. Edit this created performance profile:

    $ oc edit -f <your_profile_name>.yaml
  4. Populate the spec field with the net object. The object list can contain two fields:

    • userLevelNetworking is a required field specified as a boolean flag. If userLevelNetworking is true, the queue count is set to the reserved CPU count for all supported devices. The default is false.

    • devices is an optional field specifying a list of devices that will have the queues set to the reserved CPU count. If the device list is empty, the configuration applies to all network devices. The configuration is as follows:

      • interfaceName: This field specifies the interface name, and it supports shell-style wildcards, which can be positive or negative.

        • Example wildcard syntax is as follows: <string> .*

        • Negative rules are prefixed with an exclamation mark. To apply the net queue changes to all devices other than the excluded list, use !<device>, for example, !eno1.

      • vendorID: The network device vendor ID represented as a 16-bit hexadecimal number with a 0x prefix.

      • deviceID: The network device ID (model) represented as a 16-bit hexadecimal number with a 0x prefix.

        When a deviceID is specified, the vendorID must also be defined. A device that matches all of the device identifiers specified in a device entry (interfaceName, vendorID, or a pair of vendorID plus deviceID) qualifies as a network device. This network device then has its net queues count set to the reserved CPU count.

        When two or more devices are specified, the net queues count is set to any net device that matches one of them.

  1. Set the queue count to the reserved CPU count for all devices by using this example performance profile:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: manual
    spec:
      cpu:
        isolated: 3-51,54-103
        reserved: 0-2,52-54
      net:
        userLevelNetworking: true
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""
  2. Set the queue count to the reserved CPU count for all devices matching any of the defined device identifiers by using this example performance profile:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: manual
    spec:
      cpu:
        isolated: 3-51,54-103
        reserved: 0-2,52-54
      net:
        userLevelNetworking: true
        devices:
          - interfaceName: eth0
          - interfaceName: eth1
          - vendorID: 0x1af4
          - deviceID: 0x1000
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""
  3. Set the queue count to the reserved CPU count for all devices starting with the interface name eth by using this example performance profile:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: manual
    spec:
      cpu:
        isolated: 3-51,54-103
        reserved: 0-2,52-54
      net:
        userLevelNetworking: true
        devices:
          - interfaceName: "eth*"
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""
  4. Set the queue count to the reserved CPU count for all devices with an interface named anything other than eno1 by using this example performance profile:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: manual
    spec:
      cpu:
        isolated: 3-51,54-103
        reserved: 0-2,52-54
      net:
        userLevelNetworking: true
        devices:
          - interfaceName: "!eno1"
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""
  5. Set the queue count to the reserved CPU count for all devices that have an interface name eth0, vendorID of 0x1af4, and deviceID of 0x1000 by using this example performance profile:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: manual
    spec:
      cpu:
        isolated: 3-51,54-103
        reserved: 0-2,52-54
      net:
        userLevelNetworking: true
        devices:
          - interfaceName: eth0
          - vendorID: 0x1af4
          - deviceID: 0x1000
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""
  6. Apply the updated performance profile:

    $ oc apply -f <your_profile_name>.yaml

Verifying the queue status

In this section, a number of examples illustrate different performance profiles and how to verify the changes are applied.

Example 1

In this example, the net queue count is set to the reserved CPU count (2) for all supported devices.

The relevant section from the performance profile is:

  apiVersion: performance.openshift.io/v2
  kind: PerformanceProfile
  metadata:
    name: performance
  spec:
    cpu:
      reserved: 0-1 # total = 2
      isolated: 2-8
    net:
      userLevelNetworking: true
  # ...
  • Display the status of the queues associated with a device using the following command:

    Run this command on the node where the performance profile was applied.

    $ ethtool -l <device>
  • Verify the queue status before the profile is applied:

    $ ethtool -l ens4

    Example output

    Channel parameters for ens4:
    Pre-set maximums:
    RX:         0
    TX:         0
    Other:      0
    Combined:   4
    Current hardware settings:
    RX:         0
    TX:         0
    Other:      0
    Combined:   4
  • Verify the queue status after the profile is applied:

    $ ethtool -l ens4

    Example output

    Channel parameters for ens4:
    Pre-set maximums:
    RX:         0
    TX:         0
    Other:      0
    Combined:   4
    Current hardware settings:
    RX:         0
    TX:         0
    Other:      0
    Combined:   2 (1)
(1) The combined channel shows that the total count of reserved CPUs for all supported devices is 2. This matches what is configured in the performance profile.

Example 2

In this example, the net queue count is set to the reserved CPU count (2) for all supported network devices with a specific vendorID.

The relevant section from the performance profile is:

  apiVersion: performance.openshift.io/v2
  kind: PerformanceProfile
  metadata:
    name: performance
  spec:
    cpu:
      reserved: 0-1 # total = 2
      isolated: 2-8
    net:
      userLevelNetworking: true
      devices:
        - vendorID: 0x1af4
  # ...
  • Display the status of the queues associated with a device using the following command:

    Run this command on the node where the performance profile was applied.

    $ ethtool -l <device>
  • Verify the queue status after the profile is applied:

    $ ethtool -l ens4

    Example output

    Channel parameters for ens4:
    Pre-set maximums:
    RX:         0
    TX:         0
    Other:      0
    Combined:   4
    Current hardware settings:
    RX:         0
    TX:         0
    Other:      0
    Combined:   2 (1)
(1) The total count of reserved CPUs for all supported devices with vendorID=0x1af4 is 2. For example, if there is another network device ens2 with vendorID=0x1af4, it will also have total net queues of 2. This matches what is configured in the performance profile.

Example 3

In this example, the net queue count is set to the reserved CPU count (2) for all supported network devices that match any of the defined device identifiers.

The command udevadm info provides a detailed report on a device. In this example the devices are:

  # udevadm info -p /sys/class/net/ens4
  ...
  E: ID_MODEL_ID=0x1000
  E: ID_VENDOR_ID=0x1af4
  E: INTERFACE=ens4
  ...

  # udevadm info -p /sys/class/net/eth0
  ...
  E: ID_MODEL_ID=0x1002
  E: ID_VENDOR_ID=0x1001
  E: INTERFACE=eth0
  ...
  • Set the net queues to 2 for a device with interfaceName equal to eth0 and any devices that have a vendorID=0x1af4 with the following performance profile:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: performance
    spec:
      cpu:
        reserved: 0-1 # total = 2
        isolated: 2-8
      net:
        userLevelNetworking: true
        devices:
          - interfaceName: eth0
          - vendorID: 0x1af4
    ...
  • Verify the queue status after the profile is applied:

    $ ethtool -l ens4

    Example output

    Channel parameters for ens4:
    Pre-set maximums:
    RX:         0
    TX:         0
    Other:      0
    Combined:   4
    Current hardware settings:
    RX:         0
    TX:         0
    Other:      0
    Combined:   2 (1)
    (1) The total count of reserved CPUs for all supported devices with vendorID=0x1af4 is set to 2. For example, if there is another network device ens2 with vendorID=0x1af4, it will also have the total net queues set to 2. Similarly, a device with interfaceName equal to eth0 will have total net queues set to 2.

Logging associated with adjusting NIC queues

Log messages detailing the assigned devices are recorded in the respective Tuned daemon logs. The following messages might be recorded to the /var/log/tuned/tuned.log file:

  • An INFO message is recorded detailing the successfully assigned devices:

    INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3
  • A WARNING message is recorded if none of the devices can be assigned:

    WARNING tuned.plugins.base: instance net_test: no matching devices available
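
To look for these messages on a tuned node, a hedged approach is to read the Tuned daemon log through a debug pod; the node name is a placeholder and the log path matches the one referenced above:

  $ oc debug node/<node_name> -- chroot /host grep -E 'assigning devices|no matching devices' /var/log/tuned/tuned.log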

Performing end-to-end tests for platform verification

The Cloud-native Network Functions (CNF) tests image is a containerized test suite that validates features required to run CNF payloads. You can use this image to validate a CNF-enabled OpenShift cluster where all the components required for running CNF workloads are installed.

The tests run by the image are split into three different phases:

  • Simple cluster validation

  • Setup

  • End to end tests

The validation phase checks that all the features required to be tested are deployed correctly on the cluster.

Validations include:

  • Targeting a machine config pool to which the machines to be tested belong

  • Enabling SCTP on the nodes

  • Enabling xt_u32 kernel module via machine config

  • Having the Performance Addon Operator installed

  • Having the SR-IOV Operator installed

  • Having the PTP Operator installed

  • Enabling the container-mount-namespace mode via machine config

  • Using OVN-kubernetes as the cluster network provider

Latency tests, which are part of the cnf-tests container, also require the same validations. For more information about running a latency test, see the Running the latency tests section.

The tests need to perform an environment configuration every time they are executed. This involves items such as creating SR-IOV node policies, performance profiles, or PTP profiles. Allowing the tests to configure an already configured cluster might affect the functionality of the cluster. Also, changes to configuration items such as SR-IOV node policy might result in the environment being temporarily unavailable until the configuration change is processed.

Prerequisites

  • The test entrypoint is /usr/bin/test-run.sh. It runs both a setup test set and the real conformance test suite. The minimum requirement is to provide it with a kubeconfig file and its related $KUBECONFIG environment variable, mounted through a volume.

  • The tests assume that a given feature is already available on the cluster in the form of an Operator, flags enabled on the cluster, or machine configs.

  • Some tests require a pre-existing machine config pool to append their changes to. This must be created on the cluster before running the tests.

    The default worker pool is worker-cnf and can be created with the following manifest:

    1. apiVersion: machineconfiguration.openshift.io/v1
    2. kind: MachineConfigPool
    3. metadata:
    4.   name: worker-cnf
    5.   labels:
    6.     machineconfiguration.openshift.io/role: worker-cnf
    7. spec:
    8.   machineConfigSelector:
    9.     matchExpressions:
    10.       - {
    11.           key: machineconfiguration.openshift.io/role,
    12.           operator: In,
    13.           values: [worker-cnf, worker],
    14.         }
    15.   paused: false
    16.   nodeSelector:
    17.     matchLabels:
    18.       node-role.kubernetes.io/worker-cnf: ""

    You can use the ROLE_WORKER_CNF variable to override the worker pool name:

    1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig -e
    2. ROLE_WORKER_CNF=custom-worker-pool registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh

    Currently, not all tests run selectively on the nodes belonging to the pool.
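
    For example, a minimal sketch of creating the worker-cnf pool from the manifest above (saved locally as worker-cnf.yaml, a hypothetical file name) and labeling a node into it:

    1. $ oc apply -f worker-cnf.yaml
    2. $ oc label node <node_name> node-role.kubernetes.io/worker-cnf=""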

Dry run

Use this command to run in dry-run mode. This is useful for checking what is in the test suite and provides output for all of the tests the image would run.

  1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.dryRun -ginkgo.v

Disconnected mode

The CNF tests image supports running tests in a disconnected cluster, meaning a cluster that cannot reach external registries. This is done in two steps:

  1. Performing the mirroring.

  2. Instructing the tests to consume the images from a custom registry.

Mirroring the images to a custom registry accessible from the cluster

A mirror executable is shipped in the image to provide the input required by oc to mirror the images needed to run the tests to a local registry.

Run this command from an intermediate machine that has access both to the cluster and to registry.redhat.io over the internet:

  1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/mirror -registry my.local.registry:5000/ | oc image mirror -f -

Then, follow the instructions in the following section about overriding the registry used to fetch the images.

Instructing the tests to consume the images from a custom registry

This is done by setting the IMAGE_REGISTRY environment variable:

  1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY="my.local.registry:5000/" -e CNF_TESTS_IMAGE="custom-cnf-tests-image:latest" registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh

Mirroring to the cluster internal registry

OKD provides a built-in container image registry, which runs as a standard workload on the cluster.

Procedure

  1. Gain external access to the registry by exposing it with a route:

    1. $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
  2. Fetch the registry endpoint:

    1. REGISTRY=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
  3. Create a namespace for exposing the images:

    1. $ oc create ns cnftests
  4. Make the cnftests image stream available to all the namespaces used for the tests. This is required to allow the test namespaces to fetch the images from the cnftests image stream.

    1. $ oc policy add-role-to-user system:image-puller system:serviceaccount:sctptest:default --namespace=cnftests
    1. $ oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests
    1. $ oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests
    1. $ oc policy add-role-to-user system:image-puller system:serviceaccount:dpdk-testing:default --namespace=cnftests
    1. $ oc policy add-role-to-user system:image-puller system:serviceaccount:sriov-conformance-testing:default --namespace=cnftests
    1. $ oc policy add-role-to-user system:image-puller system:serviceaccount:xt-u32-testing:default --namespace=cnftests
    1. $ oc policy add-role-to-user system:image-puller system:serviceaccount:vrf-testing:default --namespace=cnftests
    1. $ oc policy add-role-to-user system:image-puller system:serviceaccount:gatekeeper-testing:default --namespace=cnftests
    1. $ oc policy add-role-to-user system:image-puller system:serviceaccount:ovs-qos-testing:default --namespace=cnftests
  5. Retrieve the docker secret name and auth token:

    1. SECRET=$(oc -n cnftests get secret | grep builder-docker | awk '{print $1}')
    2. TOKEN=$(oc -n cnftests get secret $SECRET -o jsonpath="{.data['\.dockercfg']}" | base64 --decode | jq '.["image-registry.openshift-image-registry.svc:5000"].auth')
  6. Write a dockerauth.json similar to this:

    1. echo "{\"auths\": { \"$REGISTRY\": { \"auth\": $TOKEN } }}" > dockerauth.json
  7. Do the mirroring:

    1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/mirror -registry $REGISTRY/cnftests | oc image mirror --insecure=true -a=$(pwd)/dockerauth.json -f -
  8. Run the tests:

    1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh

Mirroring a different set of images

Procedure

  1. The mirror command tries to mirror the upstream images by default. This can be overridden by passing a file with the following format to the image:

    1. [
    2.     {
    3.         "registry": "public.registry.io:5000",
    4.         "image": "imageforcnftests:4.10"
    5.     },
    6.     {
    7.         "registry": "public.registry.io:5000",
    8.         "image": "imagefordpdk:4.10"
    9.     }
    10. ]
  2. Pass the file to the mirror command, for example by saving it locally as images.json. In the following command, the local path is mounted at /kubeconfig inside the container so that it can be passed to the mirror command.

    1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/mirror --registry "my.local.registry:5000/" --images "/kubeconfig/images.json" | oc image mirror -f -

Running in a single node cluster

Running the tests on a single-node cluster imposes the following limitations:

  • Longer timeouts for certain tests, including SR-IOV and SCTP tests

  • Tests requiring master and worker nodes are skipped

Longer timeouts apply to the SR-IOV and SCTP tests because any reconfiguration that requires a node reboot restarts the entire environment, including the OpenShift control plane, and therefore takes longer to complete. All PTP tests that require both a master and a worker node are skipped. No additional configuration is needed because the tests check the number of nodes at startup and adjust their behavior accordingly.

PTP tests can run in Discovery mode. The tests look for a PTP master configured outside of the cluster.

For more information, see the Discovery mode section.

To enable Discovery mode, the tests must be instructed by setting the DISCOVERY_MODE environment variable as follows:

  1. $ docker run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e
  2. DISCOVERY_MODE=true registry.redhat.io/openshift-kni/cnf-tests /usr/bin/test-run.sh

Required parameters

  • ROLE_WORKER_CNF=master - Required because master is the only machine pool to which the node will belong.

  • XT_U32TEST_HAS_NON_CNF_WORKERS=false - Required to instruct the xt_u32 negative test to skip because there are only nodes where the module is loaded.

  • SCTPTEST_HAS_NON_CNF_WORKERS=false - Required to instruct the SCTP negative test to skip because there are only nodes where the module is loaded.
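
For example, a single-node invocation that sets all three parameters might look like the following (a sketch that follows the invocation pattern used elsewhere in this document):

  1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig -e ROLE_WORKER_CNF=master -e XT_U32TEST_HAS_NON_CNF_WORKERS=false -e SCTPTEST_HAS_NON_CNF_WORKERS=false registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh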

Impact of tests on the cluster

Depending on the feature, running the test suite could cause different impacts on the cluster. In general, only the SCTP tests do not change the cluster configuration. All of the other features have various impacts on the configuration.

SCTP

SCTP tests just run different pods on different nodes to check connectivity. The impacts on the cluster are related to running simple pods on two nodes.

XT_U32

XT_U32 tests run pods on different nodes to check iptables rules that utilize xt_u32. The impacts on the cluster are related to running simple pods on two nodes.

SR-IOV

SR-IOV tests require changes in the SR-IOV network configuration, where the tests create and destroy different types of configuration.

This might have an impact if existing SR-IOV network configurations are already installed on the cluster, because there may be conflicts depending on the priority of such configurations.

At the same time, the result of the tests might be affected by existing configurations.

PTP

PTP tests apply a PTP configuration to a set of nodes of the cluster. As with SR-IOV, this might conflict with any existing PTP configuration already in place, with unpredictable results.

Performance

Performance tests apply a performance profile to the cluster. This changes the node configuration by reserving CPUs, allocating memory huge pages, and setting the kernel to the realtime kernel. If an existing profile named performance is already available on the cluster, the tests do not deploy it.
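
Before running the suite, you can check whether a profile with that name already exists (a quick check using the profile name the tests look for):

  1. $ oc get performanceprofiles performance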

DPDK

DPDK relies on both the performance and SR-IOV features, so the test suite configures both a performance profile and SR-IOV networks. The impacts are therefore the same as those described for the SR-IOV and performance tests.

Container-mount-namespace

The validation test for container-mount-namespace mode only checks that the appropriate MachineConfig objects are present and active, and has no additional impact on the node.

Cleaning up

After running the test suite, all the dangling resources are cleaned up.

Override test image parameters

Depending on the requirements, the tests can use different images. There are two images used by the tests that can be changed using the following environment variables:

  • CNF_TESTS_IMAGE

  • DPDK_TESTS_IMAGE

For example, to change the CNF_TESTS_IMAGE with a custom registry run the following command:

  1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig -e CNF_TESTS_IMAGE="custom-cnf-tests-image:latest" registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh

Ginkgo parameters

The Ginkgo BDD (Behavior-Driven Development) framework serves as the base for the test suite. This means that it accepts parameters for filtering or skipping tests.

You can use the -ginkgo.focus parameter to filter a set of tests:

  1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.focus="performance|sctp"

You can run only the latency test using the -ginkgo.focus parameter.

To run only the latency test, you must provide the -ginkgo.focus parameter and the PERF_TEST_PROFILE environment variable that has the name of the PerformanceProfile that needs to be tested. For example:

  1. $ docker run --rm -v $KUBECONFIG:/kubeconfig -e KUBECONFIG=/kubeconfig -e LATENCY_TEST_RUN=true -e LATENCY_TEST_RUNTIME=600 -e OSLAT_MAXIMUM_LATENCY=20 -e PERF_TEST_PROFILE=<performance_profile_name> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.focus="\[performance\]\[config\]|\[performance\]\ Latency\ Test"

There is a particular test that requires both SR-IOV and SCTP. Given the selective nature of the focus parameter, this test is triggered by placing only the sriov matcher. If the tests are executed against a cluster where SR-IOV is installed but SCTP is not, adding the -ginkgo.skip=SCTP parameter causes the tests to skip SCTP testing.
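
For example, the following sketch focuses on the SR-IOV suite while skipping the SCTP-dependent specs:

  1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.focus="sriov" -ginkgo.skip="SCTP"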

Available features

The set of available features to filter are:

  • performance

  • sriov

  • ptp

  • sctp

  • xt_u32

  • dpdk

  • container-mount-namespace

Discovery mode

Discovery mode allows you to validate the functionality of a cluster without altering its configuration. Existing environment configurations are used for the tests. The tests attempt to find the configuration items needed and use those items to execute the tests. If resources needed to run a specific test are not found, the test is skipped, providing an appropriate message to the user. After the tests are finished, no cleanup of the pre-configured configuration items is done, and the test environment can be immediately used for another test run.

Some configuration items are still created by the tests. These are specific items needed for a test to run; for example, an SR-IOV network. These configuration items are created in custom namespaces and are cleaned up after the tests are executed.

An additional bonus is a reduction in test run times. As the configuration items are already there, no time is needed for environment configuration and stabilization.

To enable discovery mode, the tests must be instructed by setting the DISCOVERY_MODE environment variable as follows:

  1. $ docker run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e
  2. DISCOVERY_MODE=true registry.redhat.io/openshift-kni/cnf-tests /usr/bin/test-run.sh

Required environment configuration prerequisites

SR-IOV tests

Most SR-IOV tests require the following resources:

  • SriovNetworkNodePolicy.

  • At least one node with the resource specified by the SriovNetworkNodePolicy being allocatable; a resource count of at least 5 is considered sufficient.

Some tests have additional requirements:

  • An unused device on the node with available policy resource, with link state DOWN and not a bridge slave.

  • A SriovNetworkNodePolicy with an MTU value of 9000.
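
To confirm that a node exposes the allocatable resource referenced by a SriovNetworkNodePolicy, you can inspect the node's allocatable resources (a sketch; <node_name> is a placeholder):

  1. $ oc get node <node_name> -o jsonpath='{.status.allocatable}'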

DPDK tests

The DPDK related tests require:

  • A performance profile.

  • A SR-IOV policy.

  • A node with resources available for the SR-IOV policy that also matches the PerformanceProfile node selector.

PTP tests

  • A slave PtpConfig (ptp4lOpts="-s", phc2sysOpts="-a -r").

  • A node with a label matching the slave PtpConfig.

SCTP tests

  • SriovNetworkNodePolicy.

  • A node matching both the SriovNetworkNodePolicy and a MachineConfig that enables SCTP.

XT_U32 tests

  • A node with a machine config that enables XT_U32.

Performance Operator tests

Various tests have different requirements. Some of them are:

  • A performance profile.

  • A performance profile having profile.Spec.CPU.Isolated = 1.

  • A performance profile having profile.Spec.RealTimeKernel.Enabled == true.

  • A node with no huge pages usage.
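
A quick way to check two of these requirements against an existing profile (a sketch; <profile_name> is a placeholder, and the field paths follow the PerformanceProfile examples in this document):

  1. $ oc get performanceprofiles <profile_name> -o jsonpath='{.spec.cpu.isolated}{"\n"}{.spec.realTimeKernel.enabled}{"\n"}'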

Container-mount-namespace tests

  • A node with a machine config that enables container-mount-namespace mode.

Limiting the nodes used during tests

The nodes on which the tests are executed can be limited by specifying a NODES_SELECTOR environment variable. Any resources created by the test are then limited to the specified nodes.

  1. $ docker run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e
  2. NODES_SELECTOR=node-role.kubernetes.io/worker-cnf registry.redhat.io/openshift-kni/cnf-tests /usr/bin/test-run.sh

Using a single performance profile

The resources needed by the DPDK tests are higher than those required by the performance test suite. To make the execution faster, the performance profile used by tests can be overridden using one that also serves the DPDK test suite.

To do this, a profile like the following one can be mounted inside the container, and the performance tests can be instructed to deploy it.

  1. apiVersion: performance.openshift.io/v2
  2. kind: PerformanceProfile
  3. metadata:
  4.   name: performance
  5. spec:
  6.   cpu:
  7.     isolated: "4-15"
  8.     reserved: "0-3"
  9.   hugepages:
  10.     defaultHugepagesSize: "1G"
  11.     pages:
  12.       - size: "1G"
  13.         count: 16
  14.         node: 0
  15.   realTimeKernel:
  16.     enabled: true
  17.   nodeSelector:
  18.     node-role.kubernetes.io/worker-cnf: ""
When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs.

To override the performance profile used, the manifest must be mounted inside the container and the tests must be instructed by setting the PERFORMANCE_PROFILE_MANIFEST_OVERRIDE parameter as follows:

  1. $ docker run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e
  2. PERFORMANCE_PROFILE_MANIFEST_OVERRIDE=/kubeconfig/manifest.yaml registry.redhat.io/openshift-kni/cnf-tests /usr/bin/test-run.sh

Disabling the performance profile cleanup

When not running in discovery mode, the suite cleans up all the created artifacts and configurations. This includes the performance profile.

When deleting the performance profile, the machine config pool is modified and nodes are rebooted. After a new iteration, a new profile is created. This causes long test cycles between runs.

To speed up this process, set CLEAN_PERFORMANCE_PROFILE="false" to instruct the tests not to clean the performance profile. In this way, the next iteration will not need to create it and wait for it to be applied.

  1. $ docker run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e
  2. CLEAN_PERFORMANCE_PROFILE="false" registry.redhat.io/openshift-kni/cnf-tests /usr/bin/test-run.sh

Running the latency tests

If the kubeconfig file is in the current folder, you can run the test suite by using the following command:

  1. $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e \
  2. DISCOVERY_MODE=true registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 \
  3. /usr/bin/test-run.sh -ginkgo.focus="\[performance\]\ Latency\ Test"

This allows the running container to use the kubeconfig file from inside the container.

You must run the latency tests in Discovery mode. The latency tests can change the configuration of your cluster if you do not run in Discovery mode.

In OKD 4.10, you can also run latency tests from the cnf-tests container. The latency tests allow you to validate node tuning for your workload.

Three tools measure the latency of the system:

  • hwlatdetect

  • cyclictest

  • oslat

Each tool has a specific use. Use the tools in sequence to achieve reliable test results.

  1. The hwlatdetect tool measures the baseline that the bare metal hardware can achieve. Before proceeding with the next latency test, ensure that the number measured by hwlatdetect meets the required threshold because you cannot fix hardware latency spikes by operating system tuning.

  2. The cyclictest tool verifies the real-time kernel scheduler latency after hwlatdetect passes validation. The cyclictest tool schedules a repeated timer and measures the difference between the desired and the actual trigger times. The difference can uncover basic issues with the tuning caused by interrupts or process priorities. The tool must run on a real-time kernel.

  3. The oslat tool behaves similarly to a CPU-intensive DPDK application and measures all the interruptions and disruptions to the busy loop that simulates CPU heavy data processing.

By default, the latency tests are disabled. To enable the latency test, you must add the LATENCY_TEST_RUN environment variable to the test invocation and set its value to true. For example, LATENCY_TEST_RUN=true.

The test introduces the following environment variables:

LATENCY_TEST_DELAY

The variable specifies the amount of time in seconds after which the test starts running. You can use the variable to allow the CPU manager reconcile loop to update the default CPU pool. The default value is 0.

LATENCY_TEST_CPUS

The variable specifies the number of CPUs that the pod running the latency tests uses. If you do not set the variable, the default configuration includes all isolated CPUs.

LATENCY_TEST_RUNTIME

The variable specifies the amount of time in seconds that the latency test must run. The default value is 300 seconds.

HWLATDETECT_MAXIMUM_LATENCY

The variable specifies the maximum acceptable hardware latency in microseconds for the workload and operating system. If you do not set the value of HWLATDETECT_MAXIMUM_LATENCY or MAXIMUM_LATENCY, the tool compares the default expected threshold (20μs) and the actual maximum latency in the tool itself. Then, the test fails or succeeds accordingly.

CYCLICTEST_MAXIMUM_LATENCY

The variable specifies the maximum latency in microseconds that all threads expect before waking up during the cyclictest run. If you do not set the value of CYCLICTEST_MAXIMUM_LATENCY or MAXIMUM_LATENCY, the tool skips the comparison of the expected and the actual maximum latency.

OSLAT_MAXIMUM_LATENCY

The variable specifies the maximum acceptable latency in microseconds for the oslat test results. If you do not set the value of OSLAT_MAXIMUM_LATENCY or MAXIMUM_LATENCY, the tool skips the comparison of the expected and the actual maximum latency.

MAXIMUM_LATENCY

This is a unified variable you can apply for all the available latency tools.

A variable that is specific to certain tests has precedence over the unified variable.

You can use the -ginkgo.v flag to run the tests with verbosity.

You can use the -ginkgo.focus flag to run a specific test.

Running hwlatdetect

The hwlatdetect tool is available in the rt-kernel package with a regular subscription of Red Hat Enterprise Linux 8.

Prerequisites:

  • You installed the real-time kernel

  • You logged into registry.redhat.io with your Customer Portal credentials

Procedure

  1. Run the following command:
  1. $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e \
  2. LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=worker-cnf -e \
  3. LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 \
  4. /usr/bin/test-run.sh -ginkgo.focus="hwlatdetect"

The command runs the hwlatdetect tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 μs), and the command line displays SUCCESS! when this test is completed.

For valid results, the test should run for at least 12 hours.

If the results exceed the latency threshold, the test fails and you can see the following output:

Example failure output

  1. $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e \
  2. LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=worker-cnf -e \
  3. LATENCY_TEST_RUNTIME=10 -e MAXIMUM_LATENCY=1 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 \
  4. /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus="hwlatdetect" (1)
  5. running /usr/bin/validationsuite -ginkgo.v -ginkgo.focus=hwlatdetect
  6. I0210 17:08:38.607699 7 request.go:668] Waited for 1.047200253s due to client-side throttling, not priority and fairness, request: GET:https://api.ocp.demo.lab:6443/apis/apps.openshift.io/v1?timeout=32s
  7. Running Suite: CNF Features e2e validation
  8. ==========================================
  9. Random Seed: 1644512917
  10. Will run 0 of 48 specs
  11. SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  12. Ran 0 of 48 Specs in 0.001 seconds
  13. SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 48 Skipped
  14. PASS
  15. Discovery mode enabled, skipping setup
  16. running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect
  17. I0210 17:08:41.179269 40 request.go:668] Waited for 1.046001096s due to client-side throttling, not priority and fairness, request: GET:https://api.ocp.demo.lab:6443/apis/storage.k8s.io/v1beta1?timeout=32s
  18. Running Suite: CNF Features e2e integration tests
  19. =================================================
  20. Random Seed: 1644512920
  21. Will run 1 of 151 specs
  22. SSSSSSS
  23. ------------------------------
  24. [performance] Latency Test with the hwlatdetect image
  25. should succeed
  26. /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:221
  27. STEP: Waiting two minutes to download the latencyTest image
  28. STEP: Waiting another two minutes to give enough time for the cluster to move the pod to Succeeded phase
  29. Feb 10 17:10:56.045: [INFO]: found mcd machine-config-daemon-dzpw7 for node ocp-worker-0.demo.lab
  30. Feb 10 17:10:56.259: [INFO]: found mcd machine-config-daemon-dzpw7 for node ocp-worker-0.demo.lab
  31. Feb 10 17:11:56.825: [ERROR]: timed out waiting for the condition
  32. Failure [193.903 seconds]
  33. [performance] Latency Test
  34. /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:60
  35. with the hwlatdetect image
  36. /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:213
  37. should succeed [It]
  38. /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:221
  39. Log file created at: 2022/02/10 17:08:45
  40. Running on machine: hwlatdetect-cd8b6
  41. Binary: Built with gc go1.16.6 for linux/amd64
  42. Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
  43. I0210 17:08:45.716288 1 node.go:37] Environment information: /proc/cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-56fabc639a679b757ebae30e5f01b2ebd38e9fde9ecae91c41be41d3e89b37f8/vmlinuz-4.18.0-305.34.2.rt7.107.el8_4.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu ostree=/ostree/boot.0/rhcos/56fabc639a679b757ebae30e5f01b2ebd38e9fde9ecae91c41be41d3e89b37f8/0 root=UUID=56731f4f-f558-46a3-85d3-d1b579683385 rw rootflags=prjquota skew_tick=1 nohz=on rcu_nocbs=3-5 tuned.non_isolcpus=ffffffc7 intel_pstate=disable nosoftlockup tsc=nowatchdog intel_iommu=on iommu=pt isolcpus=managed_irq,3-5 systemd.cpu_affinity=0,1,2,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 + +
  44. I0210 17:08:45.716782 1 node.go:44] Environment information: kernel version 4.18.0-305.34.2.rt7.107.el8_4.x86_64
  45. I0210 17:08:45.716861 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 10 --window 10000000us --width 950000us]
  46. F0210 17:08:56.815204 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 10 seconds
  47. detector: tracer
  48. parameters:
  49. Latency threshold: 1us (2)
  50. Sample window: 10000000us
  51. Sample width: 950000us
  52. Non-sampling period: 9050000us
  53. Output File: None
  54. Starting test
  55. test finished
  56. Max Latency: 24us (3)
  57. Samples recorded: 1
  58. Samples exceeding threshold: 1
  59. ts: 1644512927.163556381, inner:20, outer:24
  60. ; err: exit status 1
  61. goroutine 1 [running]:
  62. k8s.io/klog.stacks(0xc000010001, 0xc00012e000, 0x25b, 0x2710)
  63. /remote-source/app/vendor/k8s.io/klog/klog.go:875 +0xb9
  64. k8s.io/klog.(*loggingT).output(0x5bed00, 0xc000000003, 0xc0000121c0, 0x53ea81, 0x7, 0x35, 0x0)
  65. /remote-source/app/vendor/k8s.io/klog/klog.go:829 +0x1b0
  66. k8s.io/klog.(*loggingT).printf(0x5bed00, 0x3, 0x5082da, 0x33, 0xc000113f58, 0x2, 0x2)
  67. /remote-source/app/vendor/k8s.io/klog/klog.go:707 +0x153
  68. k8s.io/klog.Fatalf(...)
  69. /remote-source/app/vendor/k8s.io/klog/klog.go:1276
  70. main.main()
  71. /remote-source/app/cnf-tests/pod-utils/hwlatdetect-runner/main.go:53 +0x897
  72. goroutine 6 [chan receive]:
  73. k8s.io/klog.(*loggingT).flushDaemon(0x5bed00)
  74. /remote-source/app/vendor/k8s.io/klog/klog.go:1010 +0x8b
  75. created by k8s.io/klog.init.0
  76. /remote-source/app/vendor/k8s.io/klog/klog.go:411 +0xd8
  77. goroutine 7 [chan receive]:
  78. k8s.io/klog/v2.(*loggingT).flushDaemon(0x5bede0)
  79. /remote-source/app/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
  80. created by k8s.io/klog/v2.init.0
  81. /remote-source/app/vendor/k8s.io/klog/v2/klog.go:420 +0xdf
  82. Unexpected error:
  83. <*errors.errorString | 0xc000418ed0>: {
  84. s: "timed out waiting for the condition",
  85. }
  86. timed out waiting for the condition
  87. occurred
  88. /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:433
  89. ------------------------------
  90. SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  91. JUnit report was created: /junit.xml/cnftests-junit.xml
  92. Summarizing 1 Failure:
  93. [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed
  94. /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:433
  95. Ran 1 of 151 Specs in 222.254 seconds
  96. FAIL! -- 0 Passed | 1 Failed | 0 Pending | 150 Skipped
  97. --- FAIL: TestTest (222.45s)
  98. FAIL
1The podman arguments you provided.
2You can configure the latency threshold by using the MAXIMUM_LATENCY or the HWLATDETECT_MAXIMUM_LATENCY environment variables.
3The maximum latency value measured during the test.
Capturing the results

You can capture the following types of results:

  • Rough results that are gathered after each run to create a history of impact on any changes made throughout the test

  • The combined set of the rough tests with the best results and configuration settings

Example of good results

  1. hwlatdetect: test duration 3600 seconds
  2. detector: tracer
  3. parameters:
  4. Latency threshold: 10us
  5. Sample window: 1000000us
  6. Sample width: 950000us
  7. Non-sampling period: 50000us
  8. Output File: None
  9. Starting test
  10. test finished
  11. Max Latency: Below threshold
  12. Samples recorded: 0

The hwlatdetect tool only provides output if the sample exceeds the specified threshold.

Example of bad results

  1. hwlatdetect: test duration 3600 seconds
  2. detector: tracer
  3. parameters:
  4. Latency threshold: 10us
  5. Sample window: 1000000us
  6. Sample width: 950000us
  7. Non-sampling period: 50000us
  8. Output File: None
  9. Starting test
  10. ts: 1610542421.275784439, inner:78, outer:81
  11. ts: 1610542444.330561619, inner:27, outer:28
  12. ts: 1610542445.332549975, inner:39, outer:38
  13. ts: 1610542541.568546097, inner:47, outer:32
  14. ts: 1610542590.681548531, inner:13, outer:17
  15. ts: 1610543033.818801482, inner:29, outer:30
  16. ts: 1610543080.938801990, inner:90, outer:76
  17. ts: 1610543129.065549639, inner:28, outer:39
  18. ts: 1610543474.859552115, inner:28, outer:35
  19. ts: 1610543523.973856571, inner:52, outer:49
  20. ts: 1610543572.089799738, inner:27, outer:30
  21. ts: 1610543573.091550771, inner:34, outer:28
  22. ts: 1610543574.093555202, inner:116, outer:63

The output of hwlatdetect shows that multiple samples exceed the threshold.

However, the same output can indicate different results based on the following factors:

  • The duration of the test

  • The number of CPU cores

  • The BIOS settings

Before proceeding with the next latency test, ensure that the number measured by hwlatdetect meets the required threshold. Fixing latencies introduced by hardware might require you to contact your system vendor's support.

Running cyclictest

The cyclictest tool measures the real-time kernel scheduler latency on the specified CPUs.

Prerequisites

  • You logged into registry.redhat.io with your Customer Portal credentials

  • You installed the real-time kernel

  • You installed the Performance Addon Operator and applied a performance profile

Procedure

To perform the cyclictest, run the following command:

  1. $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e \
  2. LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=worker-cnf -e \
  3. LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \
  4. registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.focus="cyclictest"

The command runs the cyclictest tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 μs), and the command line displays SUCCESS! when this test is completed.

For valid results, the test should run for at least 12 hours.

If the results exceed the latency threshold, the test fails and you can see the following output:

Example failure output

  1. $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e \
  2. PERF_TEST_PROFILE=<performance_profile_name> -e ROLE_WORKER_CNF=worker-cnf -e \
  3. LATENCY_TEST_RUN=true -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 -e \
  4. LATENCY_TEST_CPUS=10 -e DISCOVERY_MODE=true \
  5. registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh \
  6. -ginkgo.v -ginkgo.focus="cyclictest" (1)
  7. Discovery mode enabled, skipping setup
  8. running /usr/bin//cnftests -ginkgo.v -ginkgo.focus=cyclictest
  9. I0811 15:02:36.350033 20 request.go:668] Waited for 1.049965918s due to client-side throttling, not priority and fairness, request: GET:https://api.cnfdc8.t5g.lab.eng.bos.redhat.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s
  10. Running Suite: CNF Features e2e integration tests
  11. =================================================
  12. Random Seed: 1628694153
  13. Will run 1 of 138 specs
  14. SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  15. ------------------------------
  16. [performance] Latency Test with the cyclictest image
  17. should succeed
  18. /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:200
  19. STEP: Waiting two minutes to download the latencyTest image
  20. STEP: Waiting another two minutes to give enough time for the cluster to move the pod to Succeeded phase
  21. Aug 11 15:03:06.826: [INFO]: found mcd machine-config-daemon-wf4w8 for node cnfdc8.clus2.t5g.lab.eng.bos.redhat.com
  22. Failure [22.527 seconds]
  23. [performance] Latency Test
  24. /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:84
  25. with the cyclictest image
  26. /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:188
  27. should succeed [It]
  28. /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:200
  29. The current latency 17 is bigger than the expected one 20 (2)
  30. Expected
  31. <bool>: false
  32. to be true
  33. /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:219
  34. Log file created at: 2021/08/11 15:02:51
  35. Running on machine: cyclictest-knk7d
  36. Binary: Built with gc go1.16.6 for linux/amd64
  37. Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
  38. I0811 15:02:51.092254 1 node.go:37] Environment information: /proc/cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/vmlinuz-4.18.0-305.10.2.rt7.83.el8_4.x86_64 ip=dhcp random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ostree=/ostree/boot.1/rhcos/612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/0 ignition.platform.id=openstack root=UUID=5a4ddf16-9372-44d9-ac4e-3ee329e16ab3 rw rootflags=prjquota skew_tick=1 nohz=on rcu_nocbs=1-3 tuned.non_isolcpus=000000ff,ffffffff,ffffffff,fffffff1 intel_pstate=disable nosoftlockup tsc=nowatchdog intel_iommu=on iommu=pt isolcpus=managed_irq,1-3 systemd.cpu_affinity=0,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103 default_hugepagesz=1G hugepagesz=2M hugepages=128 nmi_watchdog=0 audit=0 mce=off processor.max_cstate=1 idle=poll intel_idle.max_cstate=0
  39. I0811 15:02:51.092427 1 node.go:44] Environment information: kernel version 4.18.0-305.10.2.rt7.83.el8_4.x86_64
  40. I0811 15:02:51.092450 1 main.go:48] running the cyclictest command with arguments \
  41. [-D 600 -95 1 -t 10 -a 2,4,6,8,10,54,56,58,60,62 -h 30 -i 1000 --quiet] (3)
  42. I0811 15:03:06.147253 1 main.go:54] succeeded to run the cyclictest command: # /dev/cpu_dma_latency set to 0us
  43. # Histogram
  44. 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000
  45. 000001 000000 005561 027778 037704 011987 000000 120755 238981 081847 300186
  46. 000002 587440 581106 564207 554323 577416 590635 474442 357940 513895 296033
  47. 000003 011751 011441 006449 006761 008409 007904 002893 002066 003349 003089
  48. 000004 000527 001079 000914 000712 001451 001120 000779 000283 000350 000251
  49. More histogram entries ...
  50. # Min Latencies: 00002 00001 00001 00001 00001 00002 00001 00001 00001 00001
  51. # Avg Latencies: 00002 00002 00002 00001 00002 00002 00001 00001 00001 00001
  52. # Max Latencies: 00018 00465 00361 00395 00208 00301 02052 00289 00327 00114 (4)
  53. # Histogram Overflows: 00000 00220 00159 00128 00202 00017 00069 00059 00045 00120
  54. # Histogram Overflow at cycle number:
  55. # Thread 0:
  56. # Thread 1: 01142 01439 05305 … # 00190 others
  57. # Thread 2: 20895 21351 30624 … # 00129 others
  58. # Thread 3: 01143 17921 18334 … # 00098 others
  59. # Thread 4: 30499 30622 31566 ... # 00172 others
  60. # Thread 5: 145221 170910 171888 ...
  61. # Thread 6: 01684 26291 30623 ...# 00039 others
  62. # Thread 7: 28983 92112 167011 … 00029 others
  63. # Thread 8: 45766 56169 56171 ...# 00015 others
  64. # Thread 9: 02974 08094 13214 ... # 00090 others
1The podman arguments you provided.
2You can see the measured latency and the configured latency.
3The arguments for the cyclictest command.
4The maximum latencies measured on each thread.
Capturing the results

The same output can indicate different results for different workloads. For example, spikes up to 18μs are acceptable for 4G DU workloads but not for 5G DU workloads. Spikes above 20μs are not acceptable in any case.

Example of good results

  1. running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m
  2. # Histogram
  3. 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000
  4. 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000
  5. 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043
  6. More histogram entries ...
  7. # Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995
  8. # Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002
  9. # Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002
  10. # Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004
  11. # Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000
  12. # Histogram Overflow at cycle number:
  13. # Thread 0:
  14. # Thread 1:
  15. # Thread 2:
  16. # Thread 3:
  17. # Thread 4:
  18. # Thread 5:
  19. # Thread 6:
  20. # Thread 7:
  21. # Thread 8:
  22. # Thread 9:
  23. # Thread 10:
  24. # Thread 11:
  25. # Thread 12:
  26. # Thread 13:
  27. # Thread 14:
  28. # Thread 15:

Example of bad results

  1. running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m
  2. # Histogram
  3. 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000
  4. 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000
  5. 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518
  6. More histogram entries ...
  7. # Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993
  8. # Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002
  9. # Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002
  10. # Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520
  11. # Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002
  12. # Histogram Overflow at cycle number:
  13. # Thread 0: 155922
  14. # Thread 1: 110064
  15. # Thread 2: 110064
  16. # Thread 3: 110063 155921
  17. # Thread 4: 110063 155921
  18. # Thread 5: 155920
  19. # Thread 6:
  20. # Thread 7: 110062
  21. # Thread 8: 110062
  22. # Thread 9: 155919
  23. # Thread 10: 110061 155919
  24. # Thread 11: 155918
  25. # Thread 12: 155918
  26. # Thread 13: 110060
  27. # Thread 14: 110060
  28. # Thread 15: 110059 155917

Running oslat

Prerequisites

  • You logged into registry.redhat.io with your Customer Portal credentials

  • You installed the Performance Addon Operator and applied a performance profile

Procedure

  • To perform the oslat test, run the following command:
  1. $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e \
  2. LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=worker-cnf -e \
  3. LATENCY_TEST_CPUS=7 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \
  4. registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.focus="oslat"

The command runs the oslat tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 μs), and the command line displays SUCCESS! when this test is completed.

If the results exceed the latency threshold, the test fails and you can see the following output:

Example failure output

  1. $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e \
  2. IMAGE_REGISTRY="registry.redhat.io/openshift4/" -e CNF_TESTS_IMAGE=cnf-tests-rhel8:v4.10 -e \
  3. PERF_TEST_PROFILE=<performance_profile_name> -e ROLE_WORKER_CNF=worker-cnf -e \
  4. LATENCY_TEST_RUN=true -e LATENCY_TEST_RUNTIME=600 -e DISCOVERY_MODE=true -e \
  5. MAXIMUM_LATENCY=20 -e LATENCY_TEST_CPUS=7 \
  6. registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 \
  7. /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus="oslat" (1)
  8. running /usr/bin//validationsuite -ginkgo.v -ginkgo.focus=oslat
  9. I0829 12:36:55.386776 8 request.go:668] Waited for 1.000303471s due to client-side throttling, not priority and fairness, request: GET:https://api.cnfdc8.t5g.lab.eng.bos.redhat.com:6443/apis/authentication.k8s.io/v1?timeout=32s
  10. Running Suite: CNF Features e2e validation
  11. ==========================================
  12. Discovery mode enabled, skipping setup
  13. running /usr/bin//cnftests -ginkgo.v -ginkgo.focus=oslat
  14. I0829 12:37:01.219077 20 request.go:668] Waited for 1.050010755s due to client-side throttling, not priority and fairness, request: GET:https://api.cnfdc8.t5g.lab.eng.bos.redhat.com:6443/apis/snapshot.storage.k8s.io/v1beta1?timeout=32s
  15. Running Suite: CNF Features e2e integration tests
  16. =================================================
  17. Random Seed: 1630240617
  18. Will run 1 of 142 specs
  19. SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  20. ------------------------------
  21. [performance] Latency Test with the oslat image
  22. should succeed
  23. /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:134
  24. STEP: Waiting two minutes to download the latencyTest image
  25. STEP: Waiting another two minutes to give enough time for the cluster to move the pod to Succeeded phase
  26. Aug 29 12:37:59.324: [INFO]: found mcd machine-config-daemon-wf4w8 for node cnfdc8.clus2.t5g.lab.eng.bos.redhat.com
  27. Failure [49.246 seconds]
  28. [performance] Latency Test
  29. /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:59
  30. with the oslat image
  31. /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:112
  32. should succeed [It]
  33. /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:134
  34. The current latency 27 is bigger than the expected one 20 (2)
  35. Expected
  36. <bool>: false
  37. to be true
  38. /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:168
  39. Log file created at: 2021/08/29 13:25:21
  40. Running on machine: oslat-57c2g
  41. Binary: Built with gc go1.16.6 for linux/amd64
  42. Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
  43. I0829 13:25:21.569182 1 node.go:37] Environment information: /proc/cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/vmlinuz-4.18.0-305.10.2.rt7.83.el8_4.x86_64 ip=dhcp random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ostree=/ostree/boot.0/rhcos/612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/0 ignition.platform.id=openstack root=UUID=5a4ddf16-9372-44d9-ac4e-3ee329e16ab3 rw rootflags=prjquota skew_tick=1 nohz=on rcu_nocbs=1-3 tuned.non_isolcpus=000000ff,ffffffff,ffffffff,fffffff1 intel_pstate=disable nosoftlockup tsc=nowatchdog intel_iommu=on iommu=pt isolcpus=managed_irq,1-3 systemd.cpu_affinity=0,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103 default_hugepagesz=1G hugepagesz=2M hugepages=128 nmi_watchdog=0 audit=0 mce=off processor.max_cstate=1 idle=poll intel_idle.max_cstate=0
  44. I0829 13:25:21.569345 1 node.go:44] Environment information: kernel version 4.18.0-305.10.2.rt7.83.el8_4.x86_64
  45. I0829 13:25:21.569367 1 main.go:53] Running the oslat command with arguments \
  46. [--duration 600 --rtprio 1 --cpu-list 4,6,52,54,56,58 --cpu-main-thread 2] (1)
  47. I0829 13:35:22.632263 1 main.go:59] Succeeded to run the oslat command: oslat V 2.00
  48. Total runtime: 600 seconds
  49. Thread priority: SCHED_FIFO:1
  50. CPU list: 4,6,52,54,56,58
  51. CPU for main thread: 2
  52. Workload: no
  53. Workload mem: 0 (KiB)
  54. Preheat cores: 6
  55. Pre-heat for 1 seconds...
  56. Test starts...
  57. Test completed.
  58. Core: 4 6 52 54 56 58
  59. CPU Freq: 2096 2096 2096 2096 2096 2096 (Mhz)
  60. 001 (us): 19390720316 19141129810 20265099129 20280959461 19391991159 19119877333
  61. 002 (us): 5304 5249 5777 5947 6829 4971
  62. 003 (us): 28 14 434 47 208 21
  63. 004 (us): 1388 853 123568 152817 5576 0
  64. 005 (us): 207850 223544 103827 91812 227236 231563
  65. 006 (us): 60770 122038 277581 323120 122633 122357
  66. 007 (us): 280023 223992 63016 25896 214194 218395
  67. 008 (us): 40604 25152 24368 4264 24440 25115
  68. 009 (us): 6858 3065 5815 810 3286 2116
  69. 010 (us): 1947 936 1452 151 474 361
  70. ...
  71. Minimum: 1 1 1 1 1 1 (us)
  72. Average: 1.000 1.000 1.000 1.000 1.000 1.000 (us)
  73. Maximum: 37 38 49 28 28 19 (us) (3)
  74. Max-Min: 36 37 48 27 27 18 (us)
  75. Duration: 599.667 599.667 599.667 599.667 599.667 599.667 (sec)
1The list of CPUs running the oslat command. The LATENCY_TEST_CPUS variable provides seven CPUs, but only six appear in the list because one CPU is used to run the main oslat thread.
2You can see the measured latency and the configured latency.
3The maximum latency values in microseconds that each CPU measures.

Troubleshooting

The cluster must be reachable from within the container. You can verify this by running the following command:

  1. $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig
  2. registry.redhat.io/openshift-kni/cnf-tests oc get nodes

If this command does not work, the cause could be DNS, MTU size, or firewall issues.

Test reports

CNF end-to-end tests produce two outputs: a JUnit test output and a test failure report.

JUnit test output

A JUnit-compliant XML is produced by passing the --junit parameter together with the path where the report is dumped:

  1. $ docker run -v $(pwd)/:/kubeconfig -v $(pwd)/junitdest:/path/to/junit -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh --junit /path/to/junit

Test failure report

A report with information about the cluster state and resources for troubleshooting can be produced by passing the --report parameter with the path where the report is dumped:

  1. $ docker run -v $(pwd)/:/kubeconfig -v $(pwd)/reportdest:/path/to/report -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh --report /path/to/report

A note on podman

When executing podman as a non-root, non-privileged user, mounting paths can fail with "permission denied" errors. To make it work, append :Z to the volume mounts; for example, -v $(pwd)/:/kubeconfig:Z allows podman to do the proper SELinux relabeling.
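
For example, a rootless invocation with the relabeling suffix applied (a sketch following the same pattern as the other commands in this document):

  1. $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh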

Running on OKD 4.4

With the exception of the following, the CNF end-to-end tests are compatible with OKD 4.4:

  1. [test_id:28466][crit:high][vendor:cnf-qe@redhat.com][level:acceptance] Should contain configuration injected through openshift-node-performance profile
  2. [test_id:28467][crit:high][vendor:cnf-qe@redhat.com][level:acceptance] Should contain configuration injected through the openshift-node-performance profile

You can skip these tests by adding the -ginkgo.skip="28466|28467" parameter.

Using a single performance profile

The DPDK tests require more resources than what is required by the performance test suite. To make the execution faster, you can override the performance profile used by the tests using a profile that also serves the DPDK test suite.

To do this, mount a profile like the following one inside the container and instruct the performance tests to deploy it.

  1. apiVersion: performance.openshift.io/v2
  2. kind: PerformanceProfile
  3. metadata:
  4.   name: performance
  5. spec:
  6.   cpu:
  7.     isolated: "5-15"
  8.     reserved: "0-4"
  9.   hugepages:
  10.     defaultHugepagesSize: "1G"
  11.     pages:
  12.       - size: "1G"
  13.         count: 16
  14.         node: 0
  15.   realTimeKernel:
  16.     enabled: true
  17.   numa:
  18.     topologyPolicy: "best-effort"
  19.   nodeSelector:
  20.     node-role.kubernetes.io/worker-cnf: ""

When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs.

To override the performance profile, the manifest must be mounted inside the container and the tests must be instructed by setting the PERFORMANCE_PROFILE_MANIFEST_OVERRIDE:

  1. $ docker run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e PERFORMANCE_PROFILE_MANIFEST_OVERRIDE=/kubeconfig/manifest.yaml registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh

Debugging low latency CNF tuning status

The PerformanceProfile custom resource (CR) contains status fields for reporting tuning status and debugging latency degradation issues. These fields report on conditions that describe the state of the operator’s reconciliation functionality.

A typical issue can arise when the machine config pools that are attached to the performance profile are in a degraded state, causing the PerformanceProfile status to degrade as well. In this case, the machine config pool issues a failure message.

The Performance Addon Operator maintains the performanceProfile.status.conditions status field:

  1. Status:
  2.   Conditions:
  3.     Last Heartbeat Time:   2020-06-02T10:01:24Z
  4.     Last Transition Time:  2020-06-02T10:01:24Z
  5.     Status:                True
  6.     Type:                  Available
  7.     Last Heartbeat Time:   2020-06-02T10:01:24Z
  8.     Last Transition Time:  2020-06-02T10:01:24Z
  9.     Status:                True
  10.     Type:                  Upgradeable
  11.     Last Heartbeat Time:   2020-06-02T10:01:24Z
  12.     Last Transition Time:  2020-06-02T10:01:24Z
  13.     Status:                False
  14.     Type:                  Progressing
  15.     Last Heartbeat Time:   2020-06-02T10:01:24Z
  16.     Last Transition Time:  2020-06-02T10:01:24Z
  17.     Status:                False
  18.     Type:                  Degraded

The Status field contains Conditions that specify Type values that indicate the status of the performance profile:

Available

All machine configs and Tuned profiles have been created successfully and are available for the cluster components that are responsible for processing them (NTO, MCO, Kubelet).

Upgradeable

Indicates whether the resources maintained by the Operator are in a state that is safe to upgrade.

Progressing

Indicates that the deployment process from the performance profile has started.

Degraded

Indicates an error if:

  • Validation of the performance profile has failed.

  • Creation of all relevant components did not complete successfully.

Each of these types contains the following fields:

Status

The state for the specific type (true or false).

Timestamp

The transition timestamp.

Reason string

The machine readable reason.

Message string

The human readable reason describing the state and error details, if any.
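
You can print these fields for a profile directly (a sketch; performance is the profile name used in the example later in this section):

  1. $ oc get performanceprofiles performance -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\t"}{.message}{"\n"}{end}'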

Machine config pools

A performance profile and its created products are applied to a node according to an associated machine config pool (MCP). The MCP holds valuable information about the progress of applying the machine configurations created by performance addons that encompass kernel args, kube config, huge pages allocation, and deployment of rt-kernel. The performance addons controller monitors changes in the MCP and updates the performance profile status accordingly.

The only MCP condition that is propagated to the performance profile status is Degraded, which leads to performanceProfile.status.conditions.Degraded = true.

Example

The following example is for a performance profile with an associated machine config pool (worker-cnf) that was created for it:

  1. The associated machine config pool is in a degraded state:

    1. # oc get mcp

    Example output

    1. NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
    2. master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h
    3. worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h
    4. worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h
  2. The describe section of the MCP shows the reason:

    1. # oc describe mcp worker-cnf

    Example output

    1. Message: Node node-worker-cnf is reporting: "prepping update:
    2. machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not
    3. found"
    4. Reason: 1 nodes are reporting degraded status on sync
  3. The degraded state should also appear under the performance profile status field marked as degraded = true:

    1. # oc describe performanceprofiles performance

    Example output

    1. Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync.
    2. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is
    3. reporting: "prepping update: machineconfig.machineconfiguration.openshift.io
    4. \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found". Reason: MCPDegraded
    5. Status: True
    6. Type: Degraded

Collecting low latency tuning debugging data for Red Hat Support

When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.

The must-gather tool enables you to collect diagnostic information about your OKD cluster, including node tuning, NUMA topology, and other information needed to debug issues with low latency setup.

For prompt support, supply diagnostic information for both OKD and low latency tuning.

About the must-gather tool

The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, such as:

  • Resource definitions

  • Audit logs

  • Service logs

You can specify one or more images when you run the command by including the --image argument. When you specify an image, the tool collects data related to that feature or product. When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in your current working directory.

About collecting low latency tuning data

Use the oc adm must-gather CLI command to collect information about your cluster, including the following features and objects associated with low latency tuning:

  • The Performance Addon Operator namespaces and child objects.

  • MachineConfigPool and associated MachineConfig objects.

  • The Node Tuning Operator and associated Tuned objects.

  • Linux Kernel command line options.

  • CPU and NUMA topology

  • Basic PCI device information and NUMA locality.

To collect Performance Addon Operator debugging information with must-gather, you must specify the Performance Addon Operator must-gather image:

  1. --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.10

Gathering data about specific features

You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.

To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • The OKD CLI (oc) installed.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.

  2. Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to the Performance Addon Operator:

    1. $ oc adm must-gather \
    2. --image-stream=openshift/must-gather \ (1)
    3. --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.10 (2)
    1The default OKD must-gather image.
    2The must-gather image for low latency tuning diagnostics.
  3. Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    1. $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ (1)
    1Replace must-gather.local.5421342344627712289/ with the actual directory name.
  4. Attach the compressed file to your support case on the Red Hat Customer Portal.

Additional resources