Running cluster checkups

OKD Virtualization includes predefined checkups that can be used for cluster maintenance and troubleshooting.

The OKD cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

About the OKD cluster checkup framework

A checkup is an automated test workload that verifies whether a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.

By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.

Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating Role and RoleBinding objects that grant the service account the permissions the checkup requires, and creating the input config map and the checkup job. You can run a checkup multiple times.
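
Every run follows the same apply-and-collect pattern: apply the permissions, apply the input config map, run the checkup job, and read the results back from the config map. The following shell sketch summarizes that flow; the file names are placeholders, and the actual manifests are shown in the procedure later in this section:

    # First run only: grant the checkup the permissions it requires.
    $ oc apply -n <target_namespace> -f <checkup_roles>.yaml

    # Every run: provide the input parameters. The same config map
    # later stores the checkup results.
    $ oc apply -n <target_namespace> -f <checkup_config_map>.yaml

    # Every run: start the checkup as a job and wait for it to finish.
    $ oc apply -n <target_namespace> -f <checkup_job>.yaml
    $ oc wait job <checkup_job_name> -n <target_namespace> --for condition=complete

    # Read the results back from the config map.
    $ oc get configmap <checkup_config_map_name> -n <target_namespace> -o yaml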

You must always:

  • Verify that the checkup image is from a trustworthy source before applying it, for example by inspecting the image metadata as shown in the sketch after this list.

  • Review the checkup permissions before creating the Role and RoleBinding objects.
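
One way to check the source of a checkup image is to inspect its metadata before you apply any manifest that references it. The following is a minimal sketch, not a prescribed verification procedure; it assumes that the skopeo tool or the oc CLI is available and uses the checkup image from the procedure below as the example:

    # Inspect the image manifest and labels without pulling the image.
    $ skopeo inspect docker://registry.redhat.io/container-native-virtualization/vm-network-latency-checkup:v4.12.0

    # Alternatively, display the image metadata with the oc CLI.
    $ oc image info registry.redhat.io/container-native-virtualization/vm-network-latency-checkup:v4.12.0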

Checking network connectivity and latency for virtual machines on a secondary network

You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface.

To run a checkup for the first time, follow the steps in the procedure.

If you have previously run a checkup, skip to step 3 of the procedure, because steps 1 and 2, which enable permissions for the checkup, are required only once. The input config map and the checkup job must be created for each run.

Prerequisites

  • You installed the OpenShift CLI (oc).

  • The cluster has at least two worker nodes.

  • The Multus Container Network Interface (CNI) plugin is installed on the cluster.

  • You configured a network attachment definition for a namespace.
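
The network attachment definition must exist in the namespace where you run the checkup. The following is a minimal sketch of such a definition, assuming a Linux bridge CNI configuration: the name blue-network matches the example config map used later in this procedure, and the bridge name br0 is an assumption that must match a bridge configured on your worker nodes:

    # Sketch only: blue-network matches the checkup example below;
    # the bridge br0 is an assumed bridge on the worker nodes.
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: blue-network
      namespace: <target_namespace>
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "blue-network",
          "type": "bridge",
          "bridge": "br0"
        }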

Procedure

  1. Create a manifest file that contains the ServiceAccount, Role, and RoleBinding objects with permissions that the checkup requires for cluster access:

    Example role manifest file

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: vm-latency-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-vm-latency-checker
    rules:
    - apiGroups: ["kubevirt.io"]
      resources: ["virtualmachineinstances"]
      verbs: ["get", "create", "delete"]
    - apiGroups: ["subresources.kubevirt.io"]
      resources: ["virtualmachineinstances/console"]
      verbs: ["get"]
    - apiGroups: ["k8s.cni.cncf.io"]
      resources: ["network-attachment-definitions"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-vm-latency-checker
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kubevirt-vm-latency-checker
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
    - apiGroups: [ "" ]
      resources: [ "configmaps" ]
      verbs: ["get", "update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kiagnose-configmap-access
      apiGroup: rbac.authorization.k8s.io
  2. Apply the checkup roles manifest:

    $ oc apply -n <target_namespace> -f <latency_roles>.yaml (1)

    (1) <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides.
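
    Optionally, verify that the service account, roles, and role bindings were created before you continue. This quick check is an addition to the documented steps, not a required part of the procedure:

    $ oc get serviceaccount,role,rolebinding -n <target_namespace>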
  3. Create a ConfigMap manifest that contains the input parameters for the checkup. The config map provides the input for the framework to run the checkup and also stores the results of the checkup.

    Example input config map

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
    data:
      spec.timeout: 5m
      spec.param.network_attachment_definition_namespace: <target_namespace>
      spec.param.network_attachment_definition_name: "blue-network" (1)
      spec.param.max_desired_latency_milliseconds: "10" (2)
      spec.param.sample_duration_seconds: "5" (3)
      spec.param.source_node: "worker1" (4)
      spec.param.target_node: "worker2" (5)
    (1) The name of the NetworkAttachmentDefinition object.
    (2) Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
    (3) Optional: The duration of the latency check, in seconds.
    (4) Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.target_node field cannot be empty.
    (5) Optional: When specified, latency is measured from the source node to this node.
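
    If you set the spec.param.source_node and spec.param.target_node fields, the values must be the names of actual nodes in your cluster. For example, you can list worker node names with the following command; the worker role label shown is the default in OKD and might differ in clusters with custom node roles:

    $ oc get nodes -l node-role.kubernetes.io/worker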
  4. Apply the config map manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <latency_config_map>.yaml
  5. Create a Job object to run the checkup:

    Example job manifest

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kubevirt-vm-latency-checkup
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: vm-latency-checkup-sa
          restartPolicy: Never
          containers:
          - name: vm-latency-checkup
            image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup:v4.12.0
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop: ["ALL"]
              runAsNonRoot: true
              seccompProfile:
                type: "RuntimeDefault"
            env:
            - name: CONFIGMAP_NAMESPACE
              value: <target_namespace>
            - name: CONFIGMAP_NAME
              value: kubevirt-vm-latency-checkup-config
  6. Apply the Job manifest. The checkup uses the ping utility to verify connectivity and measure latency.

    $ oc apply -n <target_namespace> -f <latency_job>.yaml
  7. Wait for the job to complete:

    $ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
  8. Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.max_desired_latency_milliseconds attribute, the checkup fails and returns an error.

    $ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml

    Example output config map (success)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
      namespace: <target_namespace>
    data:
      spec.timeout: 5m
      spec.param.network_attachment_definition_namespace: <target_namespace>
      spec.param.network_attachment_definition_name: "blue-network"
      spec.param.max_desired_latency_milliseconds: "10"
      spec.param.sample_duration_seconds: "5"
      spec.param.source_node: "worker1"
      spec.param.target_node: "worker2"
      status.succeeded: "true"
      status.failureReason: ""
      status.startTimestamp: "2022-01-01T09:00:00Z"
      status.completionTimestamp: "2022-01-01T09:00:07Z"
      status.result.avgLatencyNanoSec: "177000"
      status.result.maxLatencyNanoSec: "244000" (1)
      status.result.measurementDurationSec: "5"
      status.result.minLatencyNanoSec: "135000"
      status.result.sourceNode: "worker1"
      status.result.targetNode: "worker2"
    (1) The maximum measured latency in nanoseconds.
  9. Optional: To view the detailed job log in case of checkup failure, use the following command:

    $ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
  10. Delete the job and config map resources that you previously created by running the following commands:

    $ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup
    $ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
  11. Optional: If you do not plan to run another checkup, delete the role and framework resources by deleting the manifest that you created in step 1:

    $ oc delete -n <target_namespace> -f <latency_roles>.yaml

Additional resources