OKD Virtualization runbooks

You can use the procedures in these runbooks to diagnose and resolve issues that trigger OKD Virtualization alerts.

OKD Virtualization alerts are displayed on the Virtualization > Overview page.

CDIDataImportCronOutdated

Meaning

This alert fires when DataImportCron cannot poll or import the latest disk image versions.

DataImportCron polls disk images, checking for the latest versions, and imports the images as persistent volume claims (PVCs). This process ensures that PVCs are updated to the latest version so that they can be used as reliable clone sources or golden images for virtual machines (VMs).

For golden images, latest refers to the latest operating system of the distribution. For other disk images, latest refers to the latest hash of the image that is available.

Impact

VMs might be created from outdated disk images.

VMs might fail to start because no source PVC is available for cloning.

Diagnosis

  1. Check the cluster for a default storage class:

    1. $ oc get sc

    The output displays the storage classes with (default) beside the name of the default storage class. You must set a default storage class, either on the cluster or in the DataImportCron specification, in order for the DataImportCron to poll and import golden images. If no storage class is defined, the DataVolume controller fails to create PVCs and the following event is displayed: DataVolume.storage spec is missing accessMode and no storageClass to choose profile.

  2. Obtain the DataImportCron namespace and name:

    1. $ oc get dataimportcron -A -o json | jq -r '.items[] | \
    2. select(.status.conditions[] | select(.type == "UpToDate" and \
    3. .status == "False")) | .metadata.namespace + "/" + .metadata.name'
  3. If a default storage class is not defined on the cluster, check the DataImportCron specification for a default storage class:

    1. $ oc get dataimportcron <dataimportcron> -o yaml | \
    2. grep -B 5 storageClassName

    Example output

              url: docker://.../cdi-func-test-tinycore
          storage:
            resources:
              requests:
                storage: 5Gi
            storageClassName: rook-ceph-block
  4. Obtain the name of the DataVolume associated with the DataImportCron object:

    1. $ oc -n <namespace> get dataimportcron <dataimportcron> -o json | \
    2. jq .status.lastImportedPVC.name
  5. Check the DataVolume status for error messages:

    1. $ oc -n <namespace> get dv <datavolume> -o yaml
  6. Set the CDI_NAMESPACE environment variable:

    1. $ export CDI_NAMESPACE="$(oc get deployment -A | \
    2. grep cdi-operator | awk '{print $1}')"
  7. Check the cdi-deployment log for error messages:

    1. $ oc logs -n $CDI_NAMESPACE deployment/cdi-deployment

Mitigation

  1. Set a default storage class, either on the cluster or in the DataImportCron specification, so that golden images can be polled and imported, as shown in the example after this list. The updated Containerized Data Importer (CDI) resolves the issue within a few seconds.

  2. If the issue does not resolve itself, delete the data volumes associated with the affected DataImportCron objects. The CDI will recreate the data volumes with the default storage class.

  3. If your cluster is installed in a restricted network environment, disable the enableCommonBootImageImport feature gate in order to opt out of automatic updates:

    1. $ oc patch hco kubevirt-hyperconverged -n $CDI_NAMESPACE --type json \
    2. -p '[{"op": "replace", "path": \
    3. "/spec/featureGates/enableCommonBootImageImport", "value": false}]'
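
For example, to set a default storage class on the cluster (step 1), you can add the standard default-class annotation to an existing storage class. This is a minimal sketch; the storage class name is a placeholder:

    $ oc patch storageclass <storage_class> --type merge \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'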

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

CDIDataVolumeUnusualRestartCount

Meaning

This alert fires when a DataVolume object restarts more than three times.

Impact

Data volumes are responsible for importing and creating a virtual machine disk on a persistent volume claim. If a data volume restarts more than three times, these operations are unlikely to succeed. You must diagnose and resolve the issue.

Diagnosis

  1. Obtain the name and namespace of the data volume:

    1. $ oc get dv -A -o json | jq -r '.items[] | \
    2. select(.status.restartCount>3)' | jq '.metadata.name, .metadata.namespace'
  2. Check the status of the pods associated with the data volume:

    1. $ oc get pods -n <namespace> -o json | jq -r '.items[] | \
    2. select(.metadata.ownerReferences[] | \
    3. select(.name=="<dv_name>")).metadata.name'
  3. Obtain the details of the pods:

    1. $ oc -n <namespace> describe pods <pod>
  4. Check the pod logs for error messages:

    $ oc -n <namespace> logs <pod>

Mitigation

Delete the data volume, resolve the issue, and create a new data volume.
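
For example, using the namespace and name obtained during diagnosis, you can delete the data volume and then re-apply its original manifest; the manifest file name is a placeholder:

    $ oc -n <namespace> delete dv <datavolume>
    $ oc -n <namespace> create -f <datavolume_manifest>.yaml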

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

CDINotReady

Meaning

This alert fires when the Containerized Data Importer (CDI) is in a degraded state:

  • Not progressing

  • Not available to use

Impact

CDI is not usable, so users cannot build virtual machine disks on persistent volume claims (PVCs) using CDI’s data volumes. CDI components are not ready and they stopped progressing towards a ready state.

Diagnosis

  1. Set the CDI_NAMESPACE environment variable:

    1. $ export CDI_NAMESPACE="$(oc get deployment -A | \
    2. grep cdi-operator | awk '{print $1}')"
  2. Check the CDI deployment for components that are not ready:

    1. $ oc -n $CDI_NAMESPACE get deploy -l cdi.kubevirt.io
  3. Check the details of the failing pod:

    1. $ oc -n $CDI_NAMESPACE describe pods <pod>
  4. Check the logs of the failing pod:

    1. $ oc -n $CDI_NAMESPACE logs <pod>

Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

CDIOperatorDown

Meaning

This alert fires when the Containerized Data Importer (CDI) Operator is down. The CDI Operator deploys and manages the CDI infrastructure components, such as data volume and persistent volume claim (PVC) controllers. These controllers help users build virtual machine disks on PVCs.

Impact

The CDI components might fail to deploy or to stay in a required state. The CDI installation might not function correctly.

Diagnosis

  1. Set the CDI_NAMESPACE environment variable:

    1. $ export CDI_NAMESPACE="$(oc get deployment -A | grep cdi-operator | \
    2. awk '{print $1}')"
  2. Check whether the cdi-operator pod is currently running:

    1. $ oc -n $CDI_NAMESPACE get pods -l name=cdi-operator
  3. Obtain the details of the cdi-operator pod:

    1. $ oc -n $CDI_NAMESPACE describe pods -l name=cdi-operator
  4. Check the log of the cdi-operator pod for errors:

    1. $ oc -n $CDI_NAMESPACE logs -l name=cdi-operator

Mitigation

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

CDIStorageProfilesIncomplete

Meaning

This alert fires when a Containerized Data Importer (CDI) storage profile is incomplete.

If a storage profile is incomplete, the CDI cannot infer persistent volume claim (PVC) fields, such as volumeMode and accessModes, which are required to create a virtual machine (VM) disk.

Impact

The CDI cannot create a VM disk on the PVC.

Diagnosis

  • Identify the incomplete storage profile:

    1. $ oc get storageprofile <storage_class>

Mitigation

  • Add the missing storage profile information as in the following example:

    1. $ oc patch storageprofile local --type=merge -p '{"spec": \
    2. {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], \
    3. "volumeMode": "Filesystem"}]}}'

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

CnaoDown

Meaning

This alert fires when the Cluster Network Addons Operator (CNAO) is down. The CNAO deploys additional networking components on top of the cluster.

Impact

If the CNAO is not running, the cluster cannot reconcile changes to virtual machine components. As a result, the changes might fail to take effect.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get deployment -A | \
    2. grep cluster-network-addons-operator | awk '{print $1}')"
  2. Check the status of the cluster-network-addons-operator pod:

    1. $ oc -n $NAMESPACE get pods -l name=cluster-network-addons-operator
  3. Check the cluster-network-addons-operator logs for error messages:

    1. $ oc -n $NAMESPACE logs -l name=cluster-network-addons-operator
  4. Obtain the details of the cluster-network-addons-operator pods:

    1. $ oc -n $NAMESPACE describe pods -l name=cluster-network-addons-operator

Mitigation

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

HPPNotReady

Meaning

This alert fires when a hostpath provisioner (HPP) installation is in a degraded state.

The HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).

Impact

HPP is not usable. Its components are not ready and they are not progressing towards a ready state.

Diagnosis

  1. Set the HPP_NAMESPACE environment variable:

    1. $ export HPP_NAMESPACE="$(oc get deployment -A | \
    2. grep hostpath-provisioner-operator | awk '{print $1}')"
  2. Check for HPP components that are currently not ready:

    1. $ oc -n $HPP_NAMESPACE get all -l k8s-app=hostpath-provisioner
  3. Obtain the details of the failing pod:

    1. $ oc -n $HPP_NAMESPACE describe pods <pod>
  4. Check the logs of the failing pod:

    1. $ oc -n $HPP_NAMESPACE logs <pod>

Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

HPPOperatorDown

Meaning

This alert fires when the hostpath provisioner (HPP) Operator is down.

The HPP Operator deploys and manages the HPP infrastructure components, such as the daemon set that provisions hostpath volumes.

Impact

The HPP components might fail to deploy or to remain in the required state. As a result, the HPP installation might not work correctly in the cluster.

Diagnosis

  1. Configure the HPP_NAMESPACE environment variable:

    1. $ HPP_NAMESPACE="$(oc get deployment -A | grep \
    2. hostpath-provisioner-operator | awk '{print $1}')"
  2. Check whether the hostpath-provisioner-operator pod is currently running:

    1. $ oc -n $HPP_NAMESPACE get pods -l name=hostpath-provisioner-operator
  3. Obtain the details of the hostpath-provisioner-operator pod:

    1. $ oc -n $HPP_NAMESPACE describe pods -l name=hostpath-provisioner-operator
  4. Check the log of the hostpath-provisioner-operator pod for errors:

    1. $ oc -n $HPP_NAMESPACE logs -l name=hostpath-provisioner-operator

Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

HPPSharingPoolPathWithOS

Meaning

This alert fires when the hostpath provisioner (HPP) shares a file system with other critical components, such as kubelet or the operating system (OS).

HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).

Impact

A shared hostpath pool puts pressure on the node’s disks. The node might have degraded performance and stability.

Diagnosis

  1. Configure the HPP_NAMESPACE environment variable:

    1. $ export HPP_NAMESPACE="$(oc get deployment -A | \
    2. grep hostpath-provisioner-operator | awk '{print $1}')"
  2. Obtain the status of the hostpath-provisioner-csi daemon set pods:

    1. $ oc -n $HPP_NAMESPACE get pods | grep hostpath-provisioner-csi
  3. Check the hostpath-provisioner-csi logs to identify the shared pool and path:

    1. $ oc -n $HPP_NAMESPACE logs <csi_daemonset> -c hostpath-provisioner

    Example output

    1. I0208 15:21:03.769731 1 utils.go:221] pool (<legacy, csi-data-dir>/csi),
    2. shares path with OS which can lead to node disk pressure

Mitigation

Using the data obtained in the Diagnosis section, try to prevent the pool path from being shared with the OS. The specific steps vary based on the node and other circumstances.
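
If you use the CSI driver, one possible approach is to back the provisioner with a dedicated mount and point the storage pool at that path. The following HostPathProvisioner custom resource is a minimal sketch, not a drop-in configuration; the pool name and path are examples and must correspond to a mount that exists on the nodes and is not shared with the OS or kubelet:

    apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      storagePools:
      - name: local            # example pool name
        path: /var/myvolumes   # dedicated mount, separate from the OS disk
      workload:
        nodeSelector:
          kubernetes.io/os: linux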

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

KubeMacPoolDown

Meaning

KubeMacPool is down. KubeMacPool is responsible for allocating MAC addresses and preventing MAC address conflicts.

Impact

If KubeMacPool is down, VirtualMachine objects cannot be created.

Diagnosis

  1. Set the KMP_NAMESPACE environment variable:

    1. $ export KMP_NAMESPACE="$(oc get pod -A --no-headers -l \
    2. control-plane=mac-controller-manager | awk '{print $1}')"
  2. Set the KMP_NAME environment variable:

    1. $ export KMP_NAME="$(oc get pod -A --no-headers -l \
    2. control-plane=mac-controller-manager | awk '{print $2}')"
  3. Obtain the KubeMacPool-manager pod details:

    1. $ oc describe pod -n $KMP_NAMESPACE $KMP_NAME
  4. Check the KubeMacPool-manager logs for error messages:

    1. $ oc logs -n $KMP_NAMESPACE $KMP_NAME

Mitigation

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

KubeMacPoolDuplicateMacsFound

Meaning

This alert fires when KubeMacPool detects duplicate MAC addresses.

KubeMacPool is responsible for allocating MAC addresses and preventing MAC address conflicts. When KubeMacPool starts, it scans the cluster for the MAC addresses of virtual machines (VMs) in managed namespaces.

Impact

Duplicate MAC addresses on the same LAN might cause network issues.

Diagnosis

  1. Obtain the namespace and the name of the kubemacpool-mac-controller pod:

    1. $ oc get pod -A -l control-plane=mac-controller-manager --no-headers \
    2. -o custom-columns=":metadata.namespace,:metadata.name"
  2. Obtain the duplicate MAC addresses from the kubemacpool-mac-controller logs:

    1. $ oc logs -n <namespace> <kubemacpool_mac_controller> | \
    2. grep "already allocated"

    Example output

    1. mac address 02:00:ff:ff:ff:ff already allocated to
    2. vm/kubemacpool-test/testvm, br1,
    3. conflict with: vm/kubemacpool-test/testvm2, br1

Mitigation

  1. Update the VMs to remove the duplicate MAC addresses, as shown in the example after this list.

  2. Restart the kubemacpool-mac-controller pod:

    1. $ oc delete pod -n <namespace> <kubemacpool_mac_controller>
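
To see which MAC addresses a VM requests, you can inspect its interface definitions, for example with the following jsonpath query; the VM name and namespace are the ones reported in the kubemacpool-mac-controller log:

    $ oc -n <namespace> get vm <vm> \
      -o jsonpath='{.spec.template.spec.domain.devices.interfaces[*].macAddress}'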

KubeVirtComponentExceedsRequestedCPU

Meaning

This alert fires when a component’s CPU usage exceeds the requested limit.

Impact

Usage of CPU resources is not optimal and the node might be overloaded.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the component’s CPU request limit:

    1. $ oc -n $NAMESPACE get deployment <component> -o yaml | grep requests: -A 2
  3. Check the actual CPU usage by using a PromQL query:

    1. node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate
    2. {namespace="$NAMESPACE",container="<component>"}

See the Prometheus documentation for more information.

Mitigation

Update the CPU request limit in the HCO custom resource.
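
For example, you can open the HyperConverged CR for editing and adjust the component’s requests there. This assumes the default CR name, kubevirt-hyperconverged, and that the HCO runs in the namespace discovered during diagnosis:

    $ oc -n $NAMESPACE edit hco kubevirt-hyperconverged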

KubeVirtComponentExceedsRequestedMemory

Meaning

This alert fires when a component’s memory usage exceeds the requested limit.

Impact

Usage of memory resources is not optimal and the node might be overloaded.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the component’s memory request limit:

    1. $ oc -n $NAMESPACE get deployment <component> -o yaml | \
    2. grep requests: -A 2
  3. Check the actual memory usage by using a PromQL query:

    1. container_memory_usage_bytes{namespace="$NAMESPACE",container="<component>"}

See the Prometheus documentation for more information.

Mitigation

Update the memory request limit in the HCO custom resource.

KubevirtHyperconvergedClusterOperatorCRModification

Meaning

This alert fires when an operand of the HyperConverged Cluster Operator (HCO) is changed by someone or something other than HCO.

HCO configures OKD Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly. The HyperConverged custom resource is the source of truth for the configuration.

Impact

Changing the operands manually causes the cluster configuration to fluctuate and might lead to instability.

Diagnosis

  • Check the component_name value in the alert details to determine the operand kind (kubevirt) and the operand name (kubevirt-kubevirt-hyperconverged) that are being changed:

    1. Labels
    2. alertname=KubevirtHyperconvergedClusterOperatorCRModification
    3. component_name=kubevirt/kubevirt-kubevirt-hyperconverged
    4. severity=warning

Mitigation

Do not change the HCO operands directly. Use HyperConverged objects to configure the cluster.

The alert resolves itself after 10 minutes if the operands are not changed manually.

KubevirtHyperconvergedClusterOperatorInstallationNotCompletedAlert

Meaning

This alert fires when the HyperConverged Cluster Operator (HCO) runs for more than an hour without a HyperConverged custom resource (CR).

This alert has the following causes:

  • During the installation process, you installed the HCO but you did not create the HyperConverged CR.

  • During the uninstall process, you removed the HyperConverged CR before uninstalling the HCO and the HCO is still running.

Mitigation

The mitigation depends on whether you are installing or uninstalling the HCO:

  • Complete the installation by creating a HyperConverged CR with its default values:

    $ cat <<EOF | oc apply -f -
    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: kubevirt-hyperconverged
    spec: {}
    EOF
  • Uninstall the HCO. If the uninstall process continues to run, you must resolve that issue in order to cancel the alert.

KubevirtHyperconvergedClusterOperatorUSModification

Meaning

This alert fires when a JSON Patch annotation is used to change an operand of the HyperConverged Cluster Operator (HCO).

HCO configures OKD Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly.

However, if a change is required and it is not supported by the HCO API, you can force HCO to set a change in an operator by using JSON Patch annotations. These changes are not reverted by HCO during its reconciliation process.

Impact

Incorrect use of JSON Patch annotations might lead to unexpected results or an unstable environment.

Upgrading a system with JSON Patch annotations is dangerous because the structure of the component custom resources might change.

Diagnosis

  • Check the annotation_name in the alert details to identify the JSON Patch annotation:

    1. Labels
    2. alertname=KubevirtHyperconvergedClusterOperatorUSModification
    3. annotation_name=kubevirt.kubevirt.io/jsonpatch
    4. severity=info

Mitigation

It is best to use the HCO API to change an operand. However, if the change can only be done with a JSON Patch annotation, proceed with caution.

Remove JSON Patch annotations before upgrade to avoid potential issues.
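
For example, if the annotation was set on the HyperConverged CR itself, you can remove it with the trailing-dash form of oc annotate. The CR name and namespace shown here are the defaults and might differ in your cluster:

    $ oc -n kubevirt-hyperconverged annotate hco kubevirt-hyperconverged \
      kubevirt.kubevirt.io/jsonpatch-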

KubevirtVmHighMemoryUsage

Meaning

This alert fires when a container hosting a virtual machine (VM) has less than 20 MB free memory.

Impact

The virtual machine running inside the container is terminated by the runtime if the container’s memory limit is exceeded.

Diagnosis

  1. Obtain the virt-launcher pod details:

    1. $ oc get pod <virt-launcher> -o yaml
  2. Identify compute container processes with high memory usage in the virt-launcher pod:

    1. $ oc exec -it <virt-launcher> -c compute -- top

Mitigation

  • Increase the memory limit in the VirtualMachine specification as in the following example:

    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-name
        spec:
          domain:
            resources:
              limits:
                memory: 200Mi
              requests:
                memory: 128Mi

KubeVirtVMIExcessiveMigrations

Meaning

This alert fires when a virtual machine instance (VMI) live migrates more than 12 times over a period of 24 hours.

This migration rate is abnormally high, even during an upgrade. This alert might indicate a problem in the cluster infrastructure, such as network disruptions or insufficient resources.

Impact

A virtual machine (VM) that migrates too frequently might experience degraded performance because memory page faults occur during the transition.

Diagnosis

  1. Verify that the worker node has sufficient resources:

    1. $ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
    2. jq .items[].status.allocatable

    Example output

    1. {
    2. "cpu": "3500m",
    3. "devices.kubevirt.io/kvm": "1k",
    4. "devices.kubevirt.io/sev": "0",
    5. "devices.kubevirt.io/tun": "1k",
    6. "devices.kubevirt.io/vhost-net": "1k",
    7. "ephemeral-storage": "38161122446",
    8. "hugepages-1Gi": "0",
    9. "hugepages-2Mi": "0",
    10. "memory": "7000128Ki",
    11. "pods": "250"
    12. }
  2. Check the status of the worker node:

    1. $ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
    2. jq .items[].status.conditions

    Example output

    1. {
    2. "lastHeartbeatTime": "2022-05-26T07:36:01Z",
    3. "lastTransitionTime": "2022-05-23T08:12:02Z",
    4. "message": "kubelet has sufficient memory available",
    5. "reason": "KubeletHasSufficientMemory",
    6. "status": "False",
    7. "type": "MemoryPressure"
    8. },
    9. {
    10. "lastHeartbeatTime": "2022-05-26T07:36:01Z",
    11. "lastTransitionTime": "2022-05-23T08:12:02Z",
    12. "message": "kubelet has no disk pressure",
    13. "reason": "KubeletHasNoDiskPressure",
    14. "status": "False",
    15. "type": "DiskPressure"
    16. },
    17. {
    18. "lastHeartbeatTime": "2022-05-26T07:36:01Z",
    19. "lastTransitionTime": "2022-05-23T08:12:02Z",
    20. "message": "kubelet has sufficient PID available",
    21. "reason": "KubeletHasSufficientPID",
    22. "status": "False",
    23. "type": "PIDPressure"
    24. },
    25. {
    26. "lastHeartbeatTime": "2022-05-26T07:36:01Z",
    27. "lastTransitionTime": "2022-05-23T08:24:15Z",
    28. "message": "kubelet is posting ready status",
    29. "reason": "KubeletReady",
    30. "status": "True",
    31. "type": "Ready"
    32. }
  3. Log in to the worker node and verify that the kubelet service is running:

    1. $ systemctl status kubelet
  4. Check the kubelet journal log for error messages:

    1. $ journalctl -r -u kubelet

Mitigation

Ensure that the worker nodes have sufficient resources (CPU, memory, disk) to run VM workloads without interruption.

If the problem persists, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

KubeVirtVMStuckInErrorState

Meaning

This alert fires when a virtual machine (VM) is in an error state for more than 5 minutes.

Error states:

  • CrashLoopBackOff

  • Unknown

  • Unschedulable

  • ErrImagePull

  • ImagePullBackOff

  • PvcNotFound

  • DataVolumeError

This alert might indicate an issue with the VM configuration, such as a missing persistent volume claim, or a problem in the cluster infrastructure, such as network disruptions or insufficient node resources.

Impact

There is no immediate impact. However, if this alert persists, you must investigate the root cause and resolve the issue.

Diagnosis

  1. Check the virtual machine instance (VMI) details:

    1. $ oc describe vmi <vmi> -n <namespace>

    Example output

    1. Name: testvmi-hxghp
    2. Namespace: kubevirt-test-default1
    3. Labels: name=testvmi-hxghp
    4. Annotations: kubevirt.io/latest-observed-api-version: v1
    5. kubevirt.io/storage-observed-api-version: v1alpha3
    6. API Version: kubevirt.io/v1
    7. Kind: VirtualMachineInstance
    8. ...
    9. Spec:
    10. Domain:
    11. ...
    12. Resources:
    13. Requests:
    14. Cpu: 5000000Gi
    15. Memory: 5130000240Mi
    16. ...
    17. Status:
    18. ...
    19. Conditions:
    20. Last Probe Time: 2022-10-03T11:11:07Z
    21. Last Transition Time: 2022-10-03T11:11:07Z
    22. Message: Guest VM is not reported as running
    23. Reason: GuestNotRunning
    24. Status: False
    25. Type: Ready
    26. Last Probe Time: <nil>
    27. Last Transition Time: 2022-10-03T11:11:07Z
    28. Message: 0/2 nodes are available: 2 Insufficient cpu, 2
    29. Insufficient memory.
    30. Reason: Unschedulable
    31. Status: False
    32. Type: PodScheduled
    33. Guest OS Info:
    34. Phase: Scheduling
    35. Phase Transition Timestamps:
    36. Phase: Pending
    37. Phase Transition Timestamp: 2022-10-03T11:11:07Z
    38. Phase: Scheduling
    39. Phase Transition Timestamp: 2022-10-03T11:11:07Z
    40. Qos Class: Burstable
    41. Runtime User: 0
    42. Virtual Machine Revision Name: revision-start-vm-3503e2dc-27c0-46ef-9167-7ae2e7d93e6e-1
    43. Events:
    44. Type Reason Age From Message
    45. ---- ------ ---- ---- -------
    46. Normal SuccessfulCreate 27s virtualmachine-controller Created virtual
    47. machine pod virt-launcher-testvmi-hxghp-xh9qn
  2. Check the node resources:

    1. $ oc get nodes -l node-role.kubernetes.io/worker= -o json | jq '.items | \
    2. .[].status.allocatable'

    Example output

    1. {
    2. "cpu": "5",
    3. "devices.kubevirt.io/kvm": "1k",
    4. "devices.kubevirt.io/sev": "0",
    5. "devices.kubevirt.io/tun": "1k",
    6. "devices.kubevirt.io/vhost-net": "1k",
    7. "ephemeral-storage": "33812468066",
    8. "hugepages-1Gi": "0",
    9. "hugepages-2Mi": "128Mi",
    10. "memory": "3783496Ki",
    11. "pods": "110"
    12. }
  3. Check the node for error conditions:

    1. $ oc get nodes -l node-role.kubernetes.io/worker= -o json | jq '.items | \
    2. .[].status.conditions'

    Example output

    1. [
    2. {
    3. "lastHeartbeatTime": "2022-10-03T11:13:34Z",
    4. "lastTransitionTime": "2022-10-03T10:14:20Z",
    5. "message": "kubelet has sufficient memory available",
    6. "reason": "KubeletHasSufficientMemory",
    7. "status": "False",
    8. "type": "MemoryPressure"
    9. },
    10. {
    11. "lastHeartbeatTime": "2022-10-03T11:13:34Z",
    12. "lastTransitionTime": "2022-10-03T10:14:20Z",
    13. "message": "kubelet has no disk pressure",
    14. "reason": "KubeletHasNoDiskPressure",
    15. "status": "False",
    16. "type": "DiskPressure"
    17. },
    18. {
    19. "lastHeartbeatTime": "2022-10-03T11:13:34Z",
    20. "lastTransitionTime": "2022-10-03T10:14:20Z",
    21. "message": "kubelet has sufficient PID available",
    22. "reason": "KubeletHasSufficientPID",
    23. "status": "False",
    24. "type": "PIDPressure"
    25. },
    26. {
    27. "lastHeartbeatTime": "2022-10-03T11:13:34Z",
    28. "lastTransitionTime": "2022-10-03T10:14:30Z",
    29. "message": "kubelet is posting ready status",
    30. "reason": "KubeletReady",
    31. "status": "True",
    32. "type": "Ready"
    33. }
    34. ]

Mitigation

Try to identify and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

KubeVirtVMStuckInMigratingState

Meaning

This alert fires when a virtual machine (VM) is in a migrating state for more than 5 minutes.

This alert might indicate a problem in the cluster infrastructure, such as network disruptions or insufficient node resources.

Impact

There is no immediate impact. However, if this alert persists, you must investigate the root cause and resolve the issue.

Diagnosis

  1. Check the node resources:

    1. $ oc get nodes -l node-role.kubernetes.io/worker= -o json | jq '.items | \
    2. .[].status.allocatable'

    Example output

    1. {
    2. "cpu": "5",
    3. "devices.kubevirt.io/kvm": "1k",
    4. "devices.kubevirt.io/sev": "0",
    5. "devices.kubevirt.io/tun": "1k",
    6. "devices.kubevirt.io/vhost-net": "1k",
    7. "ephemeral-storage": "33812468066",
    8. "hugepages-1Gi": "0",
    9. "hugepages-2Mi": "128Mi",
    10. "memory": "3783496Ki",
    11. "pods": "110"
    12. }
  2. Check the node status conditions:

    1. $ oc get nodes -l node-role.kubernetes.io/worker= -o json | jq '.items | \
    2. .[].status.conditions'

    Example output

    1. [
    2. {
    3. "lastHeartbeatTime": "2022-10-03T11:13:34Z",
    4. "lastTransitionTime": "2022-10-03T10:14:20Z",
    5. "message": "kubelet has sufficient memory available",
    6. "reason": "KubeletHasSufficientMemory",
    7. "status": "False",
    8. "type": "MemoryPressure"
    9. },
    10. {
    11. "lastHeartbeatTime": "2022-10-03T11:13:34Z",
    12. "lastTransitionTime": "2022-10-03T10:14:20Z",
    13. "message": "kubelet has no disk pressure",
    14. "reason": "KubeletHasNoDiskPressure",
    15. "status": "False",
    16. "type": "DiskPressure"
    17. },
    18. {
    19. "lastHeartbeatTime": "2022-10-03T11:13:34Z",
    20. "lastTransitionTime": "2022-10-03T10:14:20Z",
    21. "message": "kubelet has sufficient PID available",
    22. "reason": "KubeletHasSufficientPID",
    23. "status": "False",
    24. "type": "PIDPressure"
    25. },
    26. {
    27. "lastHeartbeatTime": "2022-10-03T11:13:34Z",
    28. "lastTransitionTime": "2022-10-03T10:14:30Z",
    29. "message": "kubelet is posting ready status",
    30. "reason": "KubeletReady",
    31. "status": "True",
    32. "type": "Ready"
    33. }
    34. ]

Mitigation

Check the migration configuration of the virtual machine to ensure that it is appropriate for the workload.

You set a cluster-wide migration configuration by editing the MigrationConfiguration stanza of the KubeVirt custom resource.

You set a migration configuration for a specific scope by creating a migration policy.

You can determine whether a VM is bound to a migration policy by viewing its vm.Status.MigrationState.MigrationPolicyName parameter.
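
For example, the following command prints that field for the corresponding virtual machine instance (VMI); it assumes the field is populated, which happens only after a migration has been attempted:

    $ oc -n <namespace> get vmi <vmi> \
      -o jsonpath='{.status.migrationState.migrationPolicyName}'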

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

KubeVirtVMStuckInStartingState

Meaning

This alert fires when a virtual machine (VM) is in a starting state for more than 5 minutes.

This alert might indicate an issue in the VM configuration, such as a misconfigured priority class or a missing network device.

Impact

There is no immediate impact. However, if this alert persists, you must investigate the root cause and resolve the issue.

Diagnosis

  • Check the virtual machine instance (VMI) details for error conditions:

    1. $ oc describe vmi <vmi> -n <namespace>

    Example output

    1. Name: testvmi-ldgrw
    2. Namespace: kubevirt-test-default1
    3. Labels: name=testvmi-ldgrw
    4. Annotations: kubevirt.io/latest-observed-api-version: v1
    5. kubevirt.io/storage-observed-api-version: v1alpha3
    6. API Version: kubevirt.io/v1
    7. Kind: VirtualMachineInstance
    8. ...
    9. Spec:
    10. ...
    11. Networks:
    12. Name: default
    13. Pod:
    14. Priority Class Name: non-preemtible
    15. Termination Grace Period Seconds: 0
    16. Status:
    17. Conditions:
    18. Last Probe Time: 2022-10-03T11:08:30Z
    19. Last Transition Time: 2022-10-03T11:08:30Z
    20. Message: virt-launcher pod has not yet been scheduled
    21. Reason: PodNotExists
    22. Status: False
    23. Type: Ready
    24. Last Probe Time: <nil>
    25. Last Transition Time: 2022-10-03T11:08:30Z
    26. Message: failed to create virtual machine pod: pods
    27. "virt-launcher-testvmi-ldgrw-" is forbidden: no PriorityClass with name
    28. non-preemtible was found
    29. Reason: FailedCreate
    30. Status: False
    31. Type: Synchronized
    32. Guest OS Info:
    33. Phase: Pending
    34. Phase Transition Timestamps:
    35. Phase: Pending
    36. Phase Transition Timestamp: 2022-10-03T11:08:30Z
    37. Runtime User: 0
    38. Virtual Machine Revision Name:
    39. revision-start-vm-6f01a94b-3260-4c5a-bbe5-dc98d13e6bea-1
    40. Events:
    41. Type Reason Age From Message
    42. ---- ------ ---- ---- -------
    43. Warning FailedCreate 8s (x13 over 28s) virtualmachine-controller Error
    44. creating pod: pods "virt-launcher-testvmi-ldgrw-" is forbidden: no
    45. PriorityClass with name non-preemtible was found

Mitigation

Ensure that the VM is configured correctly and has the required resources.

A Pending state indicates that the VM has not yet been scheduled. Check the following possible causes:

  • The virt-launcher pod is not scheduled.

  • Topology hints for the VMI are not up to date.

  • Data volume is not provisioned or ready.
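
In the example output above, the VM cannot start because the referenced priority class does not exist. In such a case, you can list the priority classes that are defined on the cluster and compare them with the name used in the VM specification:

    $ oc get priorityclass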

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

LowKVMNodesCount

Meaning

This alert fires when fewer than two nodes in the cluster have KVM resources.

Impact

The cluster must have at least two nodes with KVM resources for live migration.

Virtual machines cannot be scheduled or run if no nodes have KVM resources.

Diagnosis

  • Identify the nodes with KVM resources:

    1. $ oc get nodes -o jsonpath='{.items[*].status.allocatable}' | \
    2. grep devices.kubevirt.io/kvm

Mitigation

Install KVM on the nodes without KVM resources.
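
As a quick check, you can verify whether a node exposes the KVM device at all; this assumes that you are allowed to start a debug pod on the node:

    $ oc debug node/<node> -- chroot /host ls -l /dev/kvm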

LowReadyVirtControllersCount

Meaning

This alert fires when one or more virt-controller pods are running, but none of these pods has been in the Ready state for the past 5 minutes.

A virt-controller device monitors the custom resource definitions (CRDs) of a virtual machine instance (VMI) and manages the associated pods. The device creates pods for VMIs and manages their lifecycle. The device is critical for cluster-wide virtualization functionality.

Impact

This alert indicates that a cluster-level failure might occur. Actions related to VM lifecycle management, such as launching a new VMI or shutting down an existing VMI, will fail.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Verify a virt-controller device is available:

    1. $ oc get deployment -n $NAMESPACE virt-controller \
    2. -o jsonpath='{.status.readyReplicas}'
  3. Check the status of the virt-controller deployment:

    1. $ oc -n $NAMESPACE get deploy virt-controller -o yaml
  4. Obtain the details of the virt-controller deployment to check for status conditions, such as crashing pods or failures to pull images:

    1. $ oc -n $NAMESPACE describe deploy virt-controller
  5. Check if any problems occurred with the nodes. For example, they might be in a NotReady state:

    1. $ oc get nodes

Mitigation

This alert can have multiple causes, including the following:

  • The cluster has insufficient memory.

  • The nodes are down.

  • The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.

  • There are network issues.

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

LowReadyVirtOperatorsCount

Meaning

This alert fires when one or more virt-operator pods are running, but none of these pods has been in a Ready state for the last 10 minutes.

The virt-operator is the first Operator to start in a cluster. The virt-operator deployment has a default replica of two virt-operator pods.

Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster

  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation

  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

Impact

A cluster-level failure might occur. Critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might become unavailable. Such a state also triggers the NoReadyVirtOperator alert.

The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its temporary unavailability does not significantly affect VM workloads.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Obtain the name of the virt-operator deployment:

    1. $ oc -n $NAMESPACE get deploy virt-operator -o yaml
  3. Obtain the details of the virt-operator deployment:

    1. $ oc -n $NAMESPACE describe deploy virt-operator
  4. Check for node issues, such as a NotReady state:

    1. $ oc get nodes

Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

LowVirtAPICount

Meaning

This alert fires when only one available virt-api pod is detected during a 60-minute period, although at least two nodes are available for scheduling.

Impact

An API call outage might occur during node eviction because the virt-api pod becomes a single point of failure.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the number of available virt-api pods:

    1. $ oc get deployment -n $NAMESPACE virt-api \
    2. -o jsonpath='{.status.readyReplicas}'
  3. Check the status of the virt-api deployment for error conditions:

    1. $ oc -n $NAMESPACE get deploy virt-api -o yaml
  4. Check the nodes for issues such as nodes in a NotReady state:

    1. $ oc get nodes

Mitigation

Try to identify the root cause and to resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

LowVirtControllersCount

Meaning

This alert fires when a low number of virt-controller pods is detected. At least one virt-controller pod must be available in order to ensure high availability. The default number of replicas is 2.

A virt-controller device monitors the custom resource definitions (CRDs) of a virtual machine instance (VMI) and manages the associated pods. The device creates pods for VMIs and manages the lifecycle of the pods. The device is critical for cluster-wide virtualization functionality.

Impact

The responsiveness of OKD Virtualization might become negatively affected. For example, certain requests might be missed.

In addition, if another virt-launcher instance terminates unexpectedly, OKD Virtualization might become completely unresponsive.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Verify that running virt-controller pods are available:

    1. $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-controller
  3. Check the virt-launcher logs for error messages:

    1. $ oc -n $NAMESPACE logs <virt-launcher>
  4. Obtain the details of the virt-launcher pod to check for status conditions such as unexpected termination or a NotReady state:

    1. $ oc -n $NAMESPACE describe pod/<virt-launcher>

Mitigation

This alert can have a variety of causes, including:

  • Not enough memory on the cluster

  • Nodes are down

  • The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.

  • Networking issues

Identify the root cause and fix it, if possible.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

LowVirtOperatorCount

Meaning

This alert fires when only one virt-operator pod in a Ready state has been running for the last 60 minutes.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster

  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation

  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

Impact

The virt-operator cannot provide high availability (HA) for the deployment. HA requires two or more virt-operator pods in a Ready state. The default deployment is two pods.

The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its decreased availability does not significantly affect VM workloads.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the states of the virt-operator pods:

    1. $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Review the logs of the affected virt-operator pods:

    1. $ oc -n $NAMESPACE logs <virt-operator>
  4. Obtain the details of the affected virt-operator pods:

    1. $ oc -n $NAMESPACE describe pod <virt-operator>

Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

NetworkAddonsConfigNotReady

Meaning

This alert fires when the NetworkAddonsConfig custom resource (CR) of the Cluster Network Addons Operator (CNAO) is not ready.

CNAO deploys additional networking components on the cluster. This alert indicates that one of the deployed components is not ready.

Impact

Network functionality is affected.

Diagnosis

  1. Check the status conditions of the NetworkAddonsConfig CR to identify the deployment or daemon set that is not ready:

    1. $ oc get networkaddonsconfig \
    2. -o custom-columns="":.status.conditions[*].message

    Example output

    1. DaemonSet "cluster-network-addons/macvtap-cni" update is being processed...
  2. Check the component’s daemon set for errors:

    $ oc -n cluster-network-addons get daemonset <daemonset> -o yaml
  3. Check the component’s logs:

    1. $ oc -n cluster-network-addons logs <pod>
  4. Check the component’s details for error conditions:

    $ oc -n cluster-network-addons describe pod <pod>

Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

NoLeadingVirtOperator

Meaning

This alert fires when no virt-operator pod with a leader lease has been detected for 10 minutes, although the virt-operator pods are in a Ready state. The alert indicates that no leader pod is available.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live updating, and live upgrading a cluster

  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation

  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

The virt-operator deployment has a default replica of 2 pods, with one pod holding a leader lease.

Impact

This alert indicates a failure at the level of the cluster. As a result, critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A -o \
    2. custom-columns="":.metadata.namespace)"
  2. Obtain the status of the virt-operator pods:

    1. $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Check the virt-operator pod logs to determine the leader status:

    $ oc -n $NAMESPACE logs <virt-operator> | grep lead

    Leader pod example:

    1. {"component":"virt-operator","level":"info","msg":"Attempting to acquire
    2. leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:18.635387Z"}
    3. I1130 12:15:18.635452 1 leaderelection.go:243] attempting to acquire
    4. leader lease <namespace>/virt-operator...
    5. I1130 12:15:19.216582 1 leaderelection.go:253] successfully acquired
    6. lease <namespace>/virt-operator
    7. {"component":"virt-operator","level":"info","msg":"Started leading",
    8. "pos":"application.go:385","timestamp":"2021-11-30T12:15:19.216836Z"}

    Non-leader pod example:

    1. {"component":"virt-operator","level":"info","msg":"Attempting to acquire
    2. leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:20.533696Z"}
    3. I1130 12:15:20.533792 1 leaderelection.go:243] attempting to acquire
    4. leader lease <namespace>/virt-operator...
  4. Obtain the details of the affected virt-operator pods:

    1. $ oc -n $NAMESPACE describe pod <virt-operator>

Mitigation

Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

NoReadyVirtController

Meaning

This alert fires when no available virt-controller devices have been detected for 5 minutes.

The virt-controller devices monitor the custom resource definitions of virtual machine instances (VMIs) and manage the associated pods. The devices create pods for VMIs and manage the lifecycle of the pods.

Therefore, virt-controller devices are critical for all cluster-wide virtualization functionality.

Impact

Any actions related to VM lifecycle management fail. This notably includes launching a new VMI or shutting down an existing VMI.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Verify the number of virt-controller devices:

    1. $ oc get deployment -n $NAMESPACE virt-controller \
    2. -o jsonpath='{.status.readyReplicas}'
  3. Check the status of the virt-controller deployment:

    1. $ oc -n $NAMESPACE get deploy virt-controller -o yaml
  4. Obtain the details of the virt-controller deployment to check for status conditions such as crashing pods or failure to pull images:

    1. $ oc -n $NAMESPACE describe deploy virt-controller
  5. Obtain the details of the virt-controller pods:

    $ oc get pods -n $NAMESPACE | grep virt-controller
  6. Check the logs of the virt-controller pods for error messages:

    1. $ oc logs -n $NAMESPACE <virt-controller>
  7. Check the nodes for problems, such as a NotReady state:

    1. $ oc get nodes

Mitigation

Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

NoReadyVirtOperator

Meaning

This alert fires when no virt-operator pod in a Ready state has been detected for 10 minutes.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster

  • Monitoring the life cycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation

  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

The default deployment is two virt-operator pods.

Impact

This alert indicates a cluster-level failure. Critical cluster management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.

The virt-operator is not directly responsible for virtual machines in the cluster. Therefore, its temporary unavailability does not significantly affect workloads.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Obtain the name of the virt-operator deployment:

    1. $ oc -n $NAMESPACE get deploy virt-operator -o yaml
  3. Generate the description of the virt-operator deployment:

    1. $ oc -n $NAMESPACE describe deploy virt-operator
  4. Check for node issues, such as a NotReady state:

    1. $ oc get nodes

Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

OrphanedVirtualMachineInstances

Meaning

This alert fires when a virtual machine instance (VMI), or virt-launcher pod, runs on a node that does not have a running virt-handler pod. Such a VMI is called orphaned.

Impact

Orphaned VMIs cannot be managed.

Diagnosis

  1. Check the status of the virt-handler pods to view the nodes on which they are running:

    1. $ oc get pods --all-namespaces -o wide -l kubevirt.io=virt-handler
  2. Check the status of the VMIs to identify VMIs running on nodes that do not have a running virt-handler pod:

    1. $ oc get vmis --all-namespaces
  3. Check the status of the virt-handler daemon set, using the namespace identified in the previous steps:

    $ oc -n <namespace> get daemonset virt-handler

    Example output

    NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   ...
    virt-handler   2         2         2       2            2           ...

    The daemon set is considered healthy if the Desired, Ready, and Available columns contain the same value.

  4. If the virt-handler daemon set is not healthy, check it for pod deployment issues:

    $ oc -n <namespace> get daemonset virt-handler -o json | jq .status
  5. Check the nodes for issues such as a NotReady status:

    1. $ oc get nodes
  6. Check the spec.workloads stanza of the KubeVirt custom resource (CR) for a workloads placement policy:

    $ oc get kubevirt -A -o yaml

Mitigation

If a workloads placement policy is configured, add the node with the VMI to the policy.
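
For reference, a workloads placement policy in the spec.workloads stanza looks roughly like the following sketch; the node selector label is a made-up example and must be replaced with a label that is present on the node running the VMI:

    spec:
      workloads:
        nodePlacement:
          nodeSelector:
            example.com/vm-node: "true"   # example label; also apply it to the node with the VMI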

Possible causes for the removal of a virt-handler pod from a node include changes to the node’s taints and tolerations or to a pod’s scheduling rules.

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

OutdatedVirtualMachineInstanceWorkloads

Meaning

This alert fires when running virtual machine instances (VMIs) in outdated virt-launcher pods are detected 24 hours after the OKD Virtualization control plane has been updated.

Impact

Outdated VMIs might not have access to new OKD Virtualization features.

Outdated VMIs will not receive the security fixes associated with the virt-launcher pod update.

Diagnosis

  1. Identify the outdated VMIs:

    1. $ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
  2. Check the KubeVirt custom resource (CR) to determine whether workloadUpdateMethods is configured in the workloadUpdateStrategy stanza:

    $ oc get kubevirt -A -o yaml
  3. Check each outdated VMI to determine whether it is live-migratable:

    1. $ oc get vmi <vmi> -o yaml

    Example output

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: null
        message: cannot migrate VMI which does not use masquerade
          to connect to the pod network
        reason: InterfaceNotLiveMigratable
        status: "False"
        type: LiveMigratable

Mitigation

Configuring automated workload updates

Update the HyperConverged CR to enable automatic workload updates.
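
A minimal sketch of the relevant stanza in the HyperConverged CR; LiveMigrate and Evict are the supported workload update methods, and the batch settings are example values:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      workloadUpdateStrategy:
        workloadUpdateMethods:
        - LiveMigrate
        - Evict
        batchEvictionSize: 10          # example value
        batchEvictionInterval: "1m0s"  # example value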

Stopping a VM associated with a non-live-migratable VMI

  • If a VMI is not live-migratable and if runStrategy: always is set in the corresponding VirtualMachine object, you can update the VMI by manually stopping the virtual machine (VM):

    $ virtctl stop --namespace <namespace> <vm>

A new VMI spins up immediately in an updated virt-launcher pod to replace the stopped VMI. This is the equivalent of a restart action.

Manually stopping a live-migratable VM is destructive and not recommended because it interrupts the workload.

Migrating a live-migratable VMI

If a VMI is live-migratable, you can update it by creating a VirtualMachineInstanceMigration object that targets a specific running VMI. The VMI is migrated into an updated virt-launcher pod.

  1. Create a VirtualMachineInstanceMigration manifest and save it as migration.yaml:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceMigration
    metadata:
      name: <migration_name>
      namespace: <namespace>
    spec:
      vmiName: <vmi_name>
  2. Create a VirtualMachineInstanceMigration object to trigger the migration:

    1. $ oc create -f migration.yaml

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

SSPCommonTemplatesModificationReverted

Meaning

This alert fires when the Scheduling, Scale, and Performance (SSP) Operator reverts changes to common templates as part of its reconciliation procedure.

The SSP Operator deploys and reconciles the common templates and the Template Validator. If a user or script changes a common template, the changes are reverted by the SSP Operator.

Impact

Changes to common templates are overwritten.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
    2. awk '{print $1}')"
  2. Check the ssp-operator logs for templates with reverted changes:

    1. $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator | \
    2. grep 'common template' -C 3

Mitigation

Try to identify and resolve the cause of the changes.

Ensure that changes are made only to copies of templates, and not to the templates themselves.
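
One way to create an editable copy is to export the common template and re-create it under a different name in your own namespace; this sketch assumes the common templates are installed in the openshift namespace, and the template and namespace names are placeholders:

    $ oc get template <common_template> -n openshift -o yaml > my-template.yaml
    # Edit my-template.yaml: give the copy a new name and remove cluster-specific
    # metadata such as resourceVersion, uid, and creationTimestamp before creating it.
    $ oc create -f my-template.yaml -n <namespace>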

SSPFailingToReconcile

Meaning

This alert fires when the reconcile cycle of the Scheduling, Scale and Performance (SSP) Operator fails repeatedly, although the SSP Operator is running.

The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.

Impact

Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates or the Template Validator might not be updated or reset if they fail.

Diagnosis

  1. Export the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
    2. awk '{print $1}')"
  2. Obtain the details of the ssp-operator pods:

    1. $ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
  3. Check the ssp-operator logs for errors:

    1. $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
  4. Obtain the status of the virt-template-validator pods:

    1. $ oc -n $NAMESPACE get pods -l name=virt-template-validator
  5. Obtain the details of the virt-template-validator pods:

    1. $ oc -n $NAMESPACE describe pods -l name=virt-template-validator
  6. Check the virt-template-validator logs for errors:

    1. $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator

Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

SSPHighRateRejectedVms

Meaning

This alert fires when a user or script attempts to create or modify a large number of virtual machines (VMs), using an invalid configuration.

Impact

The VMs are not created or modified. As a result, the environment might not behave as expected.

Diagnosis

  1. Export the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
    2. awk '{print $1}')"
  2. Check the virt-template-validator logs for errors that might indicate the cause:

    1. $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator

    Example output

    1. {"component":"kubevirt-template-validator","level":"info","msg":"evalution
    2. summary for ubuntu-3166wmdbbfkroku0:\nminimal-required-memory applied: FAIL,
    3. value 1073741824 is lower than minimum [2147483648]\n\nsucceeded=false",
    4. "pos":"admission.go:25","timestamp":"2021-09-28T17:59:10.934470Z"}

Mitigation

Try to identify the root cause and resolve the issue.
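
For example, if the validator log reports a minimal-required-memory failure such as the one shown above, increasing the memory request of the affected VM resolves the rejection. This sketch uses a hypothetical VM name and memory value:

    1. $ oc patch vm <vm> -n <namespace> --type merge -p \
    2. '{"spec":{"template":{"spec":{"domain":{"resources":{"requests":{"memory":"2Gi"}}}}}}}'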

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

SSPOperatorDown

Meaning

This alert fires when all the Scheduling, Scale and Performance (SSP) Operator pods are down.

The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.

Impact

Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates or the Template Validator might not be updated or reset if they fail.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
    2. awk '{print $1}')"
  2. Check the status of the ssp-operator pods:

    1. $ oc -n $NAMESPACE get pods -l control-plane=ssp-operator
  3. Obtain the details of the ssp-operator pods:

    1. $ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
  4. Check the ssp-operator logs for error messages:

    1. $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator

Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

SSPTemplateValidatorDown

Meaning

This alert fires when all the Template Validator pods are down.

The Template Validator checks virtual machines (VMs) to ensure that they do not violate their templates.

Impact

VMs are not validated against their templates. As a result, VMs might be created with specifications that do not match their respective workloads.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
    2. awk '{print $1}')"
  2. Obtain the status of the virt-template-validator pods:

    1. $ oc -n $NAMESPACE get pods -l name=virt-template-validator
  3. Obtain the details of the virt-template-validator pods:

    1. $ oc -n $NAMESPACE describe pods -l name=virt-template-validator
  4. Check the virt-template-validator logs for error messages:

    1. $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator

Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtAPIDown

Meaning

This alert fires when all the API Server pods are down.

Impact

OKD Virtualization objects cannot send API calls.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-api pods:

    1. $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. Check the status of the virt-api deployment:

    1. $ oc -n $NAMESPACE get deploy virt-api -o yaml
  4. Check the virt-api deployment details for issues such as crashing pods or image pull failures:

    1. $ oc -n $NAMESPACE describe deploy virt-api
  5. Check for issues such as nodes in a NotReady state:

    1. $ oc get nodes
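
Because virt-api backs the aggregated KubeVirt API, it can also help to verify that the corresponding APIService objects report as Available. This is a sketch; the exact APIService names depend on the installed version:

    1. $ oc get apiservices | grep kubevirt.io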

Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtApiRESTErrorsBurst

Meaning

More than 80% of REST calls have failed in the virt-api pods in the last 5 minutes.

Impact

A very high rate of failed REST calls to virt-api might lead to slow response and execution of API calls, and potentially to API calls being dropped entirely.

However, currently running virtual machine workloads are not likely to be affected.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Obtain the list of virt-api pods on your deployment:

    1. $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. Check the virt-api logs for error messages:

    1. $ oc logs -n $NAMESPACE <virt-api>
  4. Obtain the details of the virt-api pods:

    1. $ oc describe pod -n $NAMESPACE <virt-api>
  5. Check if any problems occurred with the nodes. For example, they might be in a NotReady state:

    1. $ oc get nodes
  6. Check the status of the virt-api deployment:

    1. $ oc -n $NAMESPACE get deploy virt-api -o yaml
  7. Obtain the details of the virt-api deployment:

    1. $ oc -n $NAMESPACE describe deploy virt-api
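
Because the virt-api pods emit structured JSON logs, you can get a quick count of error-level entries as a starting point. This is a sketch with a hypothetical pod name; the exact log format might differ between versions:

    1. $ oc logs -n $NAMESPACE <virt-api> | grep -c '"level":"error"'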

Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtApiRESTErrorsHigh

Meaning

More than 5% of REST calls have failed in the virt-api pods in the last 60 minutes.

Impact

A high rate of failed REST calls to virt-api might lead to slow response and execution of API calls.

However, currently running virtual machine workloads are not likely to be affected.

Diagnosis

  1. Set the NAMESPACE environment variable as follows:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-api pods:

    1. $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. Check the virt-api logs:

    1. $ oc logs -n $NAMESPACE <virt-api>
  4. Obtain the details of the virt-api pods:

    1. $ oc describe pod -n $NAMESPACE <virt-api>
  5. Check if any problems occurred with the nodes. For example, they might be in a NotReady state:

    1. $ oc get nodes
  6. Check the status of the virt-api deployment:

    1. $ oc -n $NAMESPACE get deploy virt-api -o yaml
  7. Obtain the details of the virt-api deployment:

    1. $ oc -n $NAMESPACE describe deploy virt-api

Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtControllerDown

Meaning

No running virt-controller pod has been detected for 5 minutes.

Impact

Any actions related to virtual machine (VM) lifecycle management fail. This notably includes launching a new virtual machine instance (VMI) or shutting down an existing VMI.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-controller deployment:

    1. $ oc get deployment -n $NAMESPACE virt-controller -o yaml
  3. Review the logs of the virt-controller pod:

    1. $ oc logs -n $NAMESPACE <virt-controller>

Mitigation

This alert can have a variety of causes, including the following:

  • Node resource exhaustion

  • Not enough memory on the cluster

  • Nodes are down

  • The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.

  • Networking issues

Identify the root cause and fix it, if possible.
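
For example, you can check for node resource exhaustion and node availability as follows. This is a sketch; oc adm top nodes requires the cluster metrics API to be available:

    1. $ oc get nodes
    2. $ oc adm top nodes
    3. $ oc describe node <node> | grep -A 10 'Allocated resources'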

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtControllerRESTErrorsBurst

Meaning

More than 80% of REST calls in virt-controller pods failed in the last 5 minutes.

The virt-controller has likely fully lost the connection to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.

  • The virt-controller pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.

Impact

Status updates are not propagated and actions like migrations cannot take place. However, running workloads are not impacted.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. List the available virt-controller pods:

    1. $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
  3. Check the virt-controller logs for error messages when connecting to the API server:

    1. $ oc logs -n $NAMESPACE <virt-controller>
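
Because this alert is commonly related to DNS or network connectivity problems, it can also help to check the state of the cluster DNS and network Operators. This is a sketch:

    1. $ oc get clusteroperators dns network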

Mitigation

  • If the virt-controller pod cannot connect to the API server, delete the pod to force a restart:

    1. $ oc delete pod -n $NAMESPACE <virt-controller>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtControllerRESTErrorsHigh

Meaning

More than 5% of REST calls failed in virt-controller in the last 60 minutes.

This is most likely because virt-controller has partially lost connection to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.

  • The virt-controller pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.

Impact

Node-related actions, such as starting, migrating, and scheduling virtual machines, are delayed. Running workloads are not affected, but reporting their current status might be delayed.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. List the available virt-controller pods:

    1. $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
  3. Check the virt-controller logs for error messages when connecting to the API server:

    1. $ oc logs -n $NAMESPACE <virt-controller>

Mitigation

  • If the virt-controller pod cannot connect to the API server, delete the pod to force a restart:

    1. $ oc delete pod -n $NAMESPACE <virt-controller>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtHandlerDaemonSetRolloutFailing

Meaning

The virt-handler daemon set has failed to deploy on one or more worker nodes after 15 minutes.

Impact

This alert is a warning. It does not indicate that all virt-handler pods have failed to deploy. Therefore, the normal lifecycle of virtual machines is not affected unless the cluster is overloaded.

Diagnosis

Identify worker nodes that do not have a running virt-handler pod:

  1. Export the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-handler pods to identify pods that have not deployed:

    1. $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
  3. Obtain the name of the worker node of the virt-handler pod:

    1. $ oc -n $NAMESPACE get pod <virt-handler> -o jsonpath='{.spec.nodeName}'

Mitigation

If the virt-handler pods failed to deploy because of insufficient resources, you can delete other pods on the affected worker node.
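
For example, you can compare the desired and ready pod counts of the daemon set and review the Allocated resources section of the affected node before deleting other pods. This is a sketch with a hypothetical node name:

    1. $ oc get daemonset virt-handler -n $NAMESPACE
    2. $ oc describe node <node>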

VirtHandlerRESTErrorsBurst

Meaning

More than 80% of REST calls failed in virt-handler in the last 5 minutes. This alert usually indicates that the virt-handler pods cannot connect to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.

  • The virt-handler pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.

Impact

Status updates are not propagated and node-related actions, such as migrations, fail. However, running workloads on the affected node are not impacted.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-handler pod:

    1. $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
  3. Check the virt-handler logs for error messages when connecting to the API server:

    1. $ oc logs -n $NAMESPACE <virt-handler>

Mitigation

  • If the virt-handler cannot connect to the API server, delete the pod to force a restart:

    1. $ oc delete pod -n $NAMESPACE <virt-handler>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtHandlerRESTErrorsHigh

Meaning

More than 5% of REST calls failed in virt-handler in the last 60 minutes. This alert usually indicates that the virt-handler pods have partially lost connection to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.

  • The virt-handler pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.

Impact

Node-related actions, such as starting and migrating workloads, are delayed on the node where virt-handler is running. Running workloads are not affected, but reporting their current status might be delayed.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-handler pod:

    1. $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
  3. Check the virt-handler logs for error messages when connecting to the API server:

    1. $ oc logs -n $NAMESPACE <virt-handler>

Mitigation

  • If the virt-handler cannot connect to the API server, delete the pod to force a restart:

    1. $ oc delete pod -n $NAMESPACE <virt-handler>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtOperatorDown

Meaning

This alert fires when no virt-operator pod in the Running state has been detected for 10 minutes.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster

  • Monitoring the life cycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation

  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

The virt-operator deployment runs 2 pod replicas by default.

Impact

This alert indicates a failure at the level of the cluster. Critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.

The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its temporary unavailability does not significantly affect VM workloads.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-operator deployment:

    1. $ oc -n $NAMESPACE get deploy virt-operator -o yaml
  3. Obtain the details of the virt-operator deployment:

    1. $ oc -n $NAMESPACE describe deploy virt-operator
  4. Check the status of the virt-operator pods:

    1. $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-operator
  5. Check for node issues, such as a NotReady state:

    1. $ oc get nodes
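
You can also review recent events in the namespace for scheduling failures or image pull errors that affect the virt-operator pods. This is a sketch:

    1. $ oc get events -n $NAMESPACE --sort-by=.lastTimestamp | grep virt-operator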

Mitigation

Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtOperatorRESTErrorsBurst

Meaning

This alert fires when more than 80% of the REST calls in the virt-operator pods failed in the last 5 minutes. This usually indicates that the virt-operator pods cannot connect to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.

  • The virt-operator pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.

Impact

Cluster-level actions, such as upgrading and controller reconciliation, might not be available.

However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-operator pods:

    1. $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Check the virt-operator logs for error messages when connecting to the API server:

    1. $ oc -n $NAMESPACE logs <virt-operator>
  4. Obtain the details of the virt-operator pod:

    1. $ oc -n $NAMESPACE describe pod <virt-operator>

Mitigation

  • If the virt-operator pod cannot connect to the API server, delete the pod to force a restart:

    1. $ oc delete pod -n $NAMESPACE <virt-operator>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VirtOperatorRESTErrorsHigh

Meaning

This alert fires when more than 5% of the REST calls in virt-operator pods failed in the last 60 minutes. This usually indicates the virt-operator pods cannot connect to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.

  • The virt-operator pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.

Impact

Cluster-level actions, such as upgrading and controller reconciliation, might be delayed.

However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.

Diagnosis

  1. Set the NAMESPACE environment variable:

    1. $ export NAMESPACE="$(oc get kubevirt -A \
    2. -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-operator pods:

    1. $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Check the virt-operator logs for error messages when connecting to the API server:

    1. $ oc -n $NAMESPACE logs <virt-operator>
  4. Obtain the details of the virt-operator pod:

    1. $ oc -n $NAMESPACE describe pod <virt-operator>

Mitigation

  • If the virt-operator pod cannot connect to the API server, delete the pod to force a restart:

    1. $ oc delete pod -n $NAMESPACE <virt-operator>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

VMCannotBeEvicted

Meaning

This alert fires when the eviction strategy of a virtual machine (VM) is set to LiveMigrate but the VM is not migratable.

Impact

Non-migratable VMs prevent node eviction. This condition affects operations such as node drain and updates.

Diagnosis

  1. Check the VMI configuration to determine whether the value of evictionStrategy is LiveMigrate:

    1. $ oc get vmis -o yaml
  2. Check for a False status in the LIVE-MIGRATABLE column to identify VMIs that are not migratable:

    1. $ oc get vmis -o wide
  3. Obtain the details of the VMI and check status.conditions to identify the issue:

    1. $ oc get vmi <vmi> -o yaml

    Example output

    1. status:
    2.   conditions:
    3.   - lastProbeTime: null
    4.     lastTransitionTime: null
    5.     message: cannot migrate VMI which does not use masquerade to connect
    6.       to the pod network
    7.     reason: InterfaceNotLiveMigratable
    8.     status: "False"
    9.     type: LiveMigratable

Mitigation

Either set the evictionStrategy of the VMI to shutdown, or resolve the issue that prevents the VMI from migrating.
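
The eviction strategy is defined in the VM template spec. For example, you can open the VM for editing and adjust the field shown in the following sketch; the LiveMigrate value is only illustrative, so replace it with the strategy that is appropriate for your environment:

    1. $ oc edit vm <vm> -n <namespace>

    1. apiVersion: kubevirt.io/v1
    2. kind: VirtualMachine
    3. spec:
    4.   template:
    5.     spec:
    6.       evictionStrategy: LiveMigrate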