OKD cluster checkup framework

OKD Virtualization includes predefined checkups that can be used for cluster maintenance and troubleshooting.

The OKD cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

About the OKD cluster checkup framework

A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.

By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.

Running a predefined checkup in an existing namespace involves creating a service account for the checkup, creating Role and RoleBinding objects that grant the service account the permissions the checkup requires, creating the input config map, and creating the checkup job. You can run a checkup multiple times.
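A checkup reads its input from the data stanza of its config map and, when it completes, writes the results back to the same config map as status.* keys. For example, after a checkup job finishes you can query a single result field directly; the config map name below is a placeholder for the checkup that you ran:

    $ oc get configmap <checkup_config_map> -n <target_namespace> -o jsonpath='{.data.status\.succeeded}'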

You must always:

  • Verify that the checkup image is from a trustworthy source before applying it.

  • Review the checkup permissions before creating the Role and RoleBinding objects.

Virtual machine latency checkup

You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The latency checkup uses the ping utility.

You run a latency checkup by performing the following steps:

  1. Create a service account, roles, and rolebindings to provide cluster access permissions to the latency checkup.

  2. Create a config map to provide the input to run the checkup and to store the results.

  3. Create a job to run the checkup.

  4. Review the results in the config map.

  5. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.

  6. When you are finished, delete the latency checkup resources.

Prerequisites

  • You installed the OpenShift CLI (oc).

  • The cluster has at least two worker nodes.

  • The Multus Container Network Interface (CNI) plugin is installed on the cluster.

  • You configured a network attachment definition for a namespace.
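    For reference, one possible NetworkAttachmentDefinition is sketched below. This is a minimal example, not a prescribed configuration: the bridge CNI type, the br0 bridge name, and the blue-network name are assumptions that must match your own secondary network setup.

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: blue-network          # must match spec.param.network_attachment_definition_name
      namespace: <target_namespace>
    spec:
      # Assumption: a Linux bridge named br0 already exists on the worker nodes.
      # Replace the CNI type and bridge name to match your environment.
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "blue-network",
          "type": "bridge",
          "bridge": "br0",
          "ipam": {}
        }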

Procedure

  1. Create a ServiceAccount, Role, and RoleBinding manifest for the latency checkup:

    Example service account, role, and rolebinding manifest file

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: vm-latency-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-vm-latency-checker
    rules:
    - apiGroups: ["kubevirt.io"]
      resources: ["virtualmachineinstances"]
      verbs: ["get", "create", "delete"]
    - apiGroups: ["subresources.kubevirt.io"]
      resources: ["virtualmachineinstances/console"]
      verbs: ["get"]
    - apiGroups: ["k8s.cni.cncf.io"]
      resources: ["network-attachment-definitions"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-vm-latency-checker
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kubevirt-vm-latency-checker
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
    - apiGroups: [ "" ]
      resources: [ "configmaps" ]
      verbs: ["get", "update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kiagnose-configmap-access
      apiGroup: rbac.authorization.k8s.io
  2. Apply the ServiceAccount, Role, and RoleBinding manifest:

    $ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml (1)

    (1) <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides.
  3. Create a ConfigMap manifest that contains the input parameters for the checkup:

    Example input config map

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
    data:
      spec.timeout: 5m
      spec.param.network_attachment_definition_namespace: <target_namespace>
      spec.param.network_attachment_definition_name: "blue-network" (1)
      spec.param.max_desired_latency_milliseconds: "10" (2)
      spec.param.sample_duration_seconds: "5" (3)
      spec.param.source_node: "worker1" (4)
      spec.param.target_node: "worker2" (5)

    (1) The name of the NetworkAttachmentDefinition object.
    (2) Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
    (3) Optional: The duration of the latency check, in seconds.
    (4) Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.target_node field cannot be empty.
    (5) Optional: When specified, latency is measured from the source node to this node.
  4. Apply the config map manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <latency_config_map>.yaml
  5. Create a Job manifest to run the checkup:

    Example job manifest

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kubevirt-vm-latency-checkup
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: vm-latency-checkup-sa
          restartPolicy: Never
          containers:
            - name: vm-latency-checkup
              image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup:v4.13.0
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: kubevirt-vm-latency-checkup-config
  6. Apply the Job manifest:

    $ oc apply -n <target_namespace> -f <latency_job>.yaml
  7. Wait for the job to complete:

    $ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
  8. Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.max_desired_latency_milliseconds attribute, the checkup fails and returns an error.

    $ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml

    Example output config map (success)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
      namespace: <target_namespace>
    data:
      spec.timeout: 5m
      spec.param.network_attachment_definition_namespace: <target_namespace>
      spec.param.network_attachment_definition_name: "blue-network"
      spec.param.max_desired_latency_milliseconds: "10"
      spec.param.sample_duration_seconds: "5"
      spec.param.source_node: "worker1"
      spec.param.target_node: "worker2"
      status.succeeded: "true"
      status.failureReason: ""
      status.completionTimestamp: "2022-01-01T09:00:07Z"
      status.startTimestamp: "2022-01-01T09:00:00Z"
      status.result.avgLatencyNanoSec: "177000"
      status.result.maxLatencyNanoSec: "244000" (1)
      status.result.measurementDurationSec: "5"
      status.result.minLatencyNanoSec: "135000"
      status.result.sourceNode: "worker1"
      status.result.targetNode: "worker2"

    (1) The maximum measured latency in nanoseconds.
  9. Optional: To view the detailed job log in case of checkup failure, use the following command:

    $ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
  10. Delete the job and config map that you previously created by running the following commands:

    $ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup

    $ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
  11. Optional: If you do not plan to run another checkup, delete the roles manifest:

    $ oc delete -f <latency_sa_roles_rolebinding>.yaml

DPDK checkup

Use a predefined checkup to verify that your OKD cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator pod and a VM running a test DPDK application.

You run a DPDK checkup by performing the following steps:

  1. Create a service account, role, and role bindings for the DPDK checkup and a service account for the traffic generator pod.

  2. Create a security context constraints resource for the traffic generator pod.

  3. Create a config map to provide the input to run the checkup and to store the results.

  4. Create a job to run the checkup.

  5. Review the results in the config map.

  6. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.

  7. When you are finished, delete the DPDK checkup resources.

Prerequisites

  • You have access to the cluster as a user with cluster-admin permissions.

  • You have installed the OpenShift CLI (oc).

  • You have configured the compute nodes to run DPDK applications on VMs with zero packet loss.
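    A quick sanity check, not part of the official procedure, is to confirm that a RuntimeClass resource exists for your DPDK node configuration, because the checkup config map references one later through the spec.param.trafficGeneratorRuntimeClassName parameter:

    $ oc get runtimeclass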

Procedure

  1. Create a ServiceAccount, Role, and RoleBinding manifest for the DPDK checkup and the traffic generator pod:

    Example service account, role, and rolebinding manifest file

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: dpdk-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
    - apiGroups: [ "" ]
      resources: [ "configmaps" ]
      verbs: [ "get", "update" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
    - kind: ServiceAccount
      name: dpdk-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kiagnose-configmap-access
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-dpdk-checker
    rules:
    - apiGroups: [ "kubevirt.io" ]
      resources: [ "virtualmachineinstances" ]
      verbs: [ "create", "get", "delete" ]
    - apiGroups: [ "subresources.kubevirt.io" ]
      resources: [ "virtualmachineinstances/console" ]
      verbs: [ "get" ]
    - apiGroups: [ "" ]
      resources: [ "pods" ]
      verbs: [ "create", "get", "delete" ]
    - apiGroups: [ "" ]
      resources: [ "pods/exec" ]
      verbs: [ "create" ]
    - apiGroups: [ "k8s.cni.cncf.io" ]
      resources: [ "network-attachment-definitions" ]
      verbs: [ "get" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-dpdk-checker
    subjects:
    - kind: ServiceAccount
      name: dpdk-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubevirt-dpdk-checker
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: dpdk-checkup-traffic-gen-sa
  2. Apply the ServiceAccount, Role, and RoleBinding manifest:

    $ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
  3. Create a SecurityContextConstraints manifest for the traffic generator pod:

    Example security context constraints manifest

    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: dpdk-checkup-traffic-gen
    allowHostDirVolumePlugin: true
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegeEscalation: false
    allowPrivilegedContainer: false
    allowedCapabilities:
    - IPC_LOCK
    - NET_ADMIN
    - NET_RAW
    - SYS_RESOURCE
    defaultAddCapabilities: null
    fsGroup:
      type: RunAsAny
    groups: []
    readOnlyRootFilesystem: false
    requiredDropCapabilities: null
    runAsUser:
      type: RunAsAny
    seLinuxContext:
      type: RunAsAny
    seccompProfiles:
    - runtime/default
    - unconfined
    supplementalGroups:
      type: RunAsAny
    users:
    - system:serviceaccount:dpdk-checkup-ns:dpdk-checkup-traffic-gen-sa
  4. Apply the SecurityContextConstraints manifest:

    $ oc apply -f <dpdk_scc>.yaml
  5. Create a ConfigMap manifest that contains the input parameters for the checkup:

    Example input config map

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: <network_name> (1)
      spec.param.trafficGeneratorRuntimeClassName: <runtimeclass_name> (2)
      spec.param.trafficGeneratorImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.1.1" (3)
      spec.param.vmContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.1.1" (4)

    (1) The name of the NetworkAttachmentDefinition object.
    (2) The RuntimeClass resource that the traffic generator pod uses.
    (3) The container image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry.
    (4) The container disk image for the VM. In this example, the image is pulled from the upstream Project Quay Container Registry.
  6. Apply the ConfigMap manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
  7. Create a Job manifest to run the checkup:

    Example job manifest

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: dpdk-checkup
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: dpdk-checkup-sa
          restartPolicy: Never
          containers:
            - name: dpdk-checkup
              image: brew.registry.redhat.io/rh-osbs/container-native-virtualization-kubevirt-dpdk-checkup-rhel9:v4.13.0
              imagePullPolicy: Always
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: dpdk-checkup-config
                - name: POD_UID
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.uid
  8. Apply the Job manifest:

    $ oc apply -n <target_namespace> -f <dpdk_job>.yaml
  9. Wait for the job to complete:

    $ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m
  10. Review the results of the checkup by running the following command:

    $ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml

    Example output config map (success)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
    data:
      spec.timeout: 1h2m
      spec.param.networkAttachmentDefinitionName: "mlx-dpdk-network-1"
      spec.param.trafficGeneratorRuntimeClassName: performance-performance-zeus10
      spec.param.trafficGeneratorImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.1.1"
      spec.param.vmContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.1.1"
      status.succeeded: "true"
      status.failureReason: ""
      status.startTimestamp: 2022-12-21T09:33:06+00:00
      status.completionTimestamp: 2022-12-21T11:33:06+00:00
      status.result.actualTrafficGeneratorTargetNode: worker-dpdk1
      status.result.actualDPDKVMTargetNode: worker-dpdk2
      status.result.dropRate: 0
  11. Delete the job and config map that you previously created by running the following commands:

    $ oc delete job -n <target_namespace> dpdk-checkup

    $ oc delete configmap -n <target_namespace> dpdk-checkup-config
  12. Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:

    $ oc delete -f <dpdk_sa_roles_rolebinding>.yaml

DPDK checkup config map parameters

The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup:

Table 1. DPDK checkup config map parameters

Parameter | Description | Mandatory

spec.timeout | The time, in minutes, before the checkup fails. | True
spec.param.networkAttachmentDefinitionName | The name of the NetworkAttachmentDefinition object for the connected SR-IOV NICs. | True
spec.param.trafficGeneratorRuntimeClassName | The RuntimeClass resource that the traffic generator pod uses. | True
spec.param.trafficGeneratorImage | The container image for the traffic generator. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:main. | False
spec.param.trafficGeneratorNodeSelector | The node on which the traffic generator pod is to be scheduled. The node should be configured to allow DPDK traffic. | False
spec.param.trafficGeneratorPacketsPerSecond | The number of packets per second, in kilo (k) or million (m). The default value is 14m. | False
spec.param.trafficGeneratorEastMacAddress | The MAC address of the NIC connected to the traffic generator pod or VM. The default value is a random MAC address in the format 50:xx:xx:xx:xx:01. | False
spec.param.trafficGeneratorWestMacAddress | The MAC address of the NIC connected to the traffic generator pod or VM. The default value is a random MAC address in the format 50:xx:xx:xx:xx:02. | False
spec.param.vmContainerDiskImage | The container disk image for the VM. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-vm:main. | False
spec.param.DPDKLabelSelector | The label of the node on which the VM runs. The node should be configured to allow DPDK traffic. | False
spec.param.DPDKEastMacAddress | The MAC address of the NIC that is connected to the VM. The default value is a random MAC address in the format 60:xx:xx:xx:xx:01. | False
spec.param.DPDKWestMacAddress | The MAC address of the NIC that is connected to the VM. The default value is a random MAC address in the format 60:xx:xx:xx:xx:02. | False
spec.param.testDuration | The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. | False
spec.param.portBandwidthGB | The maximum bandwidth of the SR-IOV NIC. The default value is 10 GB. | False
spec.param.verbose | When set to true, increases the verbosity of the checkup log. The default value is false. | False
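For example, a config map that tunes some of the optional parameters might look like the following. This is a sketch only: the network and RuntimeClass names are placeholders, and the values shown are illustrative rather than recommended settings.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: <network_name>        # placeholder
      spec.param.trafficGeneratorRuntimeClassName: <runtimeclass_name>  # placeholder
      spec.param.trafficGeneratorPacketsPerSecond: "6m"                 # illustrative; the default is 14m
      spec.param.testDuration: "10"                                     # minutes; the default is 5
      spec.param.portBandwidthGB: "25"                                  # match your SR-IOV NIC; the default is 10
      spec.param.verbose: "true"                                        # the default is false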

Building a container disk image for RHEL virtual machines

You can build a custom RHEL 8 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmContainerDiskImage attribute of the DPDK checkup config map.

To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images.

Prerequisites

  • The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory.

  • You have installed the image builder tool and its CLI (composer-cli) on the VM.

  • You have installed the virt-customize tool:

    # dnf install libguestfs-tools
  • You have installed the Podman CLI tool (podman).

Procedure

  1. Verify that you can build a RHEL 8.7 image:

    # composer-cli distros list

    To run the composer-cli commands as non-root, add your user to the weldr or root groups:

    # usermod -a -G weldr user

    $ newgrp weldr
  2. Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:

    $ cat << EOF > dpdk-vm.toml
    name = "dpdk_image"
    description = "Image to use with the DPDK checkup"
    version = "0.0.1"
    distro = "rhel-87"

    [[packages]]
    name = "dpdk"

    [[packages]]
    name = "dpdk-tools"

    [[packages]]
    name = "driverctl"

    [[packages]]
    name = "tuned-profiles-cpu-partitioning"

    [customizations.kernel]
    append = "default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-7"

    [customizations.services]
    disabled = ["NetworkManager-wait-online", "sshd"]
    EOF
  3. Push the blueprint file to the image builder tool by running the following command:

    # composer-cli blueprints push dpdk-vm.toml
  4. Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.

    # composer-cli compose start dpdk_image qcow2
  5. Wait for the compose process to complete. The compose status must show FINISHED before you can continue to the next step.

    # composer-cli compose status
  6. Enter the following command to download the qcow2 image file by specifying its UUID:

    # composer-cli compose image <UUID>
  7. Create the customization scripts by running the following commands:

    $ cat <<EOF >customize-vm
    echo isolated_cores=2-7 > /etc/tuned/cpu-partitioning-variables.conf
    tuned-adm profile cpu-partitioning
    echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
    EOF

    $ cat <<EOF >first-boot
    driverctl set-override 0000:06:00.0 vfio-pci
    driverctl set-override 0000:07:00.0 vfio-pci
    mkdir /mnt/huge
    mount /mnt/huge --source nodev -t hugetlbfs -o pagesize=1GB
    EOF
  8. Use the virt-customize tool to customize the image generated by the image builder tool:

    $ virt-customize -a <UUID>.qcow2 --run=customize-vm --firstboot=first-boot --selinux-relabel
  9. To create a Dockerfile that contains all the commands to build the container disk image, enter the following command:

    $ cat << EOF > Dockerfile
    FROM scratch
    COPY <uuid>-disk.qcow2 /disk/
    EOF

    where:

    <uuid>-disk.qcow2

    Specifies the name of the custom image in qcow2 format.

  10. Build and tag the container by running the following command:

    $ podman build . -t dpdk-rhel:latest
  11. Push the container disk image to a registry that is accessible from your cluster by running the following command:

    $ podman push dpdk-rhel:latest
  12. Provide a link to the container disk image in the spec.param.vmContainerDiskImage attribute in the DPDK checkup config map.
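    For example, if the image was pushed to a hypothetical registry path, the config map entry would look similar to the following; the registry host and project are placeholders:

    spec.param.vmContainerDiskImage: "<registry>/<project>/dpdk-rhel:latest"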

Additional resources