Configuring virtual GPUs

If you have graphics processing unit (GPU) cards, OKD Virtualization can automatically create virtual GPUs (vGPUs) that you can assign to virtual machines (VMs).

About using virtual GPUs with OKD Virtualization

Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OKD Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR). This automation is especially useful for large clusters.

Refer to your hardware vendor’s documentation for functionality and support details.

Mediated device

A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests.
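
For example, on a host whose GPU supports mediated devices, the available mdev types and the number of instances that can still be created are exposed through sysfs. A minimal way to inspect them (the PCI address and type name are placeholders):

  # List the mdev types that the physical device supports
  $ ls /sys/class/mdev_bus/<pci_address>/mdev_supported_types/

  # Check how many instances of a given type can still be created
  $ cat /sys/class/mdev_bus/<pci_address>/mdev_supported_types/<type>/available_instances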

Preparing hosts for mediated devices

You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices.

Adding kernel arguments to enable the IOMMU driver

To enable the IOMMU driver in the kernel, create a MachineConfig object that adds the appropriate kernel argument.

Prerequisites

  • You have cluster administrator permissions.

  • Your CPU hardware is Intel or AMD.

  • You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS.

Procedure

  1. Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker (1)
      name: 100-worker-iommu (2)
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
        - intel_iommu=on (3)
    # ...
    1 Applies the new kernel argument only to worker nodes.
    2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on instead (see the example after this procedure).
    3 Specifies intel_iommu as the kernel argument for an Intel CPU.
  2. Create the new MachineConfig object:

    $ oc create -f 100-worker-kernel-arg-iommu.yaml
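
    If your worker nodes have AMD CPUs, the MachineConfig object is identical except for the kernel argument. A minimal sketch, assuming the same worker role; the object name is illustrative:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 100-worker-iommu   # illustrative name
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
        - amd_iommu=on   # AMD equivalent of intel_iommu=on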

Verification

  • Verify that the new MachineConfig object was added.

    $ oc get MachineConfig
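
  • Optional: One way to confirm that the kernel argument is active on a node, after the machine config pool finishes updating, is to read the kernel command line through a debug pod; <node_name> is a placeholder:

    $ oc debug node/<node_name> -- chroot /host cat /proc/cmdline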

Configuring the NVIDIA GPU Operator

You can use the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated virtual machines (VMs) in OKD Virtualization.

The NVIDIA GPU Operator is supported only by NVIDIA. For more information, see Obtaining Support from NVIDIA in the Red Hat Knowledgebase.

About using the NVIDIA GPU Operator

You can use the NVIDIA GPU Operator with OKD Virtualization to rapidly provision worker nodes for running GPU-enabled virtual machines (VMs). The NVIDIA GPU Operator manages NVIDIA GPU resources in an OKD cluster and automates tasks that are required when preparing nodes for GPU workloads.

Before you can deploy application workloads to a GPU resource, you must install components such as the NVIDIA drivers that enable the compute unified device architecture (CUDA), Kubernetes device plugin, container runtime, and other features, such as automatic node labeling and monitoring. By automating these tasks, you can quickly scale the GPU capacity of your infrastructure. The NVIDIA GPU Operator can especially facilitate provisioning complex artificial intelligence and machine learning (AI/ML) workloads.

Options for configuring mediated devices

There are two available methods for configuring mediated devices when using the NVIDIA GPU Operator. The method that Red Hat tests uses OKD Virtualization features to schedule mediated devices, while the NVIDIA method only uses the GPU Operator.

Using the NVIDIA GPU Operator to configure mediated devices

This method exclusively uses the NVIDIA GPU Operator to configure mediated devices. To use this method, refer to NVIDIA GPU Operator with OKD Virtualization in the NVIDIA documentation.

Using OKD Virtualization to configure mediated devices

This method, which is tested by Red Hat, uses OKD Virtualization’s capabilities to configure mediated devices. In this case, the NVIDIA GPU Operator is only used for installing drivers with the NVIDIA vGPU Manager. The GPU Operator does not configure mediated devices.

When using the OKD Virtualization method, you still configure the GPU Operator by following the NVIDIA documentation. However, this method differs from the NVIDIA documentation in the following ways:

  • You must not overwrite the default disableMDEVConfiguration: false setting in the HyperConverged custom resource (CR).

    Setting this feature gate as described in the NVIDIA documentation prevents OKD Virtualization from configuring mediated devices. A minimal sketch of the default setting appears after the ClusterPolicy example below.

  • You must configure your ClusterPolicy manifest so that it matches the following example:

    Example manifest

    kind: ClusterPolicy
    apiVersion: nvidia.com/v1
    metadata:
      name: gpu-cluster-policy
    spec:
      operator:
        defaultRuntime: crio
        use_ocp_driver_toolkit: true
        initContainer: {}
      sandboxWorkloads:
        enabled: true
        defaultWorkload: vm-vgpu
      driver:
        enabled: false (1)
      dcgmExporter: {}
      dcgm:
        enabled: true
      daemonsets: {}
      devicePlugin: {}
      gfd: {}
      migManager:
        enabled: true
      nodeStatusExporter:
        enabled: true
      mig:
        strategy: single
      toolkit:
        enabled: true
      validator:
        plugin:
          env:
          - name: WITH_WORKLOAD
            value: "true"
      vgpuManager:
        enabled: true (2)
        repository: <vgpu_container_registry> (3)
        image: <vgpu_image_name>
        version: nvidia-vgpu-manager
      vgpuDeviceManager:
        enabled: false (4)
        config:
          name: vgpu-devices-config
          default: default
      sandboxDevicePlugin:
        enabled: false (5)
      vfioManager:
        enabled: false (6)
    1 Set this value to false. Not required for VMs.
    2 Set this value to true. Required for using vGPUs with VMs.
    3 Substitute <vgpu_container_registry> with your registry value.
    4 Set this value to false to allow OKD Virtualization to configure mediated devices instead of the NVIDIA GPU Operator.
    5 Set this value to false to prevent discovery and advertising of the vGPU devices to the kubelet.
    6 Set this value to false to prevent loading the vfio-pci driver. Instead, follow the OKD Virtualization documentation to configure PCI passthrough.
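
    For reference, the first point above means that the HyperConverged CR keeps its default feature gate value. A minimal sketch, assuming the gate is set under spec.featureGates (the field name is taken from the text above):

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: kubevirt-hyperconverged
    spec:
      featureGates:
        disableMDEVConfiguration: false   # assumption: leave at the default; do not set to true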


How vGPUs are assigned to nodes

For each physical device, OKD Virtualization configures the following values:

  • A single mdev type.

  • The maximum number of instances of the selected mdev type.

The cluster architecture affects how devices are created and assigned to nodes.

Large cluster with multiple cards per node

On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example:

  # ...
  mediatedDevicesConfiguration:
    mediatedDeviceTypes:
    - nvidia-222
    - nvidia-228
    - nvidia-105
    - nvidia-108
  # ...

In this scenario, each node has two cards, both of which support the following vGPU types:

  nvidia-105
  # ...
  nvidia-108
  nvidia-217
  nvidia-299
  # ...

On each node, OKD Virtualization creates the following vGPUs:

  • 16 vGPUs of type nvidia-105 on the first card.

  • 2 vGPUs of type nvidia-108 on the second card.

One node has a single card that supports more than one requested vGPU type

OKD Virtualization uses the supported type that comes first on the mediatedDeviceTypes list.

For example, a card on one of the nodes supports nvidia-223 and nvidia-224. The following mediatedDeviceTypes list is configured:

  # ...
  mediatedDevicesConfiguration:
    mediatedDeviceTypes:
    - nvidia-22
    - nvidia-223
    - nvidia-224
  # ...

In this example, OKD Virtualization uses the nvidia-223 type.

Managing mediated devices

Before you can assign mediated devices to virtual machines, you must create the devices and expose them to the cluster. You can also reconfigure and remove mediated devices.

Creating and exposing mediated devices

As an administrator, you can create mediated devices and expose them to the cluster by editing the HyperConverged custom resource (CR).

Prerequisites

  • You enabled the Input-Output Memory Management Unit (IOMMU) driver.

  • If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices.

Procedure

  1. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged

    Example configuration file with mediated devices configured

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: kubevirt-hyperconverged
    spec:
      mediatedDevicesConfiguration:
        mediatedDeviceTypes:
        - nvidia-231
        nodeMediatedDeviceTypes:
        - mediatedDeviceTypes:
          - nvidia-233
          nodeSelector:
            kubernetes.io/hostname: node-11.redhat.com
      permittedHostDevices:
        mediatedDevices:
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q
        - mdevNameSelector: GRID T4-8Q
          resourceName: nvidia.com/GRID_T4-8Q
    # ...
  2. Create mediated devices by adding them to the spec.mediatedDevicesConfiguration stanza:

    Example YAML snippet

    # ...
    spec:
      mediatedDevicesConfiguration:
        mediatedDeviceTypes: (1)
        - <device_type>
        nodeMediatedDeviceTypes: (2)
        - mediatedDeviceTypes: (3)
          - <device_type>
          nodeSelector: (4)
            <node_selector_key>: <node_selector_value>
    # ...
    1 Required: Configures global settings for the cluster.
    2 Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDeviceTypes configuration.
    3 Required if you use nodeMediatedDeviceTypes. Overrides the global mediatedDeviceTypes configuration for the specified nodes.
    4 Required if you use nodeMediatedDeviceTypes. Must include a key:value pair.

    Before OKD Virtualization 4.14, the mediatedDeviceTypes field was named mediatedDevicesTypes. Ensure that you use the correct field name when configuring mediated devices.

  3. Identify the name selector and resource name values for the devices that you want to expose to the cluster. You will add these values to the HyperConverged CR in the next step.

    1. Find the resourceName value by running the following command:

      $ oc get $NODE -o json \
        | jq '.status.allocatable
          | with_entries(select(.key | startswith("nvidia.com/")))
          | with_entries(select(.value != "0"))'
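
      The output lists the vGPU resource names that are currently allocatable on the node; an illustrative result (the type and count depend on your hardware and configuration):

      {
        "nvidia.com/GRID_T4-2Q": "20"
      }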
    2. Find the mdevNameSelector value by viewing the contents of /sys/bus/pci/devices/<domain>:<bus>:<slot>.<function>/mdev_supported_types/<type>/name, substituting the correct values for your system.

      For example, the name file for the nvidia-231 type contains the selector string GRID T4-2Q. Using GRID T4-2Q as the mdevNameSelector value allows nodes to use the nvidia-231 type.
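
      If you do not have a shell on the node, one way to read this file is through a debug pod; the placeholders are the same as in the path above:

      $ oc debug node/<node_name> -- chroot /host \
          cat /sys/bus/pci/devices/<domain>:<bus>:<slot>.<function>/mdev_supported_types/<type>/name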

  4. Expose the mediated devices to the cluster by adding the mdevNameSelector and resourceName values to the spec.permittedHostDevices.mediatedDevices stanza of the HyperConverged CR:

    Example YAML snippet

    # ...
    permittedHostDevices:
      mediatedDevices:
      - mdevNameSelector: GRID T4-2Q (1)
        resourceName: nvidia.com/GRID_T4-2Q (2)
    # ...
    1 Exposes the mediated devices that map to this value on the host.
    2 Matches the resource name that is allocated on the node.
  5. Save your changes and exit the editor.

Verification

  • Optional: Confirm that a device was added to a specific node by running the following command:

    $ oc describe node <node_name>
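
    In the command output, the exposed mediated devices appear as allocatable extended resources with the nvidia.com/ prefix. An illustrative excerpt; the resource names and counts depend on your configuration:

    Capacity:
      nvidia.com/GRID_T4-2Q:  2
    Allocatable:
      nvidia.com/GRID_T4-2Q:  2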

About changing and removing mediated devices

You can reconfigure or remove mediated devices in several ways:

  • Edit the HyperConverged CR and change the contents of the mediatedDeviceTypes stanza.

  • Change the node labels that match the nodeMediatedDeviceTypes node selector.

  • Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR.

    If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas.

Removing mediated devices from the cluster

To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
  2. Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example:

    Example configuration file

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: kubevirt-hyperconverged
    spec:
      mediatedDevicesConfiguration:
        mediatedDeviceTypes: (1)
        - nvidia-231
      permittedHostDevices:
        mediatedDevices: (2)
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q
    1 To remove the nvidia-231 device type, delete it from the mediatedDeviceTypes array.
    2 To remove the GRID T4-2Q device, delete the mdevNameSelector field and its corresponding resourceName field.
  3. Save your changes and exit the editor.

Using mediated devices

You can assign mediated devices to one or more virtual machines.

Assigning a vGPU to a VM by using the CLI

Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs).

Prerequisites

  • The mediated device is configured in the HyperConverged custom resource.

  • The VM is stopped.

Procedure

  • Assign the mediated device to a virtual machine (VM) by editing the spec.template.spec.domain.devices.gpus stanza of the VirtualMachine manifest:

    Example virtual machine manifest

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      template:
        spec:
          domain:
            devices:
              gpus:
              - deviceName: nvidia.com/TU104GL_Tesla_T4 (1)
                name: gpu1 (2)
              - deviceName: nvidia.com/GRID_T4-2Q
                name: gpu2
    1 The resource name associated with the mediated device.
    2 A name to identify the device on the VM.
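
    After you save the manifest, start the VM so that you can check the device from inside the guest. For example, with the virtctl client, assuming it is installed and <vm_name> is your VM:

    $ virtctl start <vm_name>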

Verification

  • To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest:

    $ lspci -nnk | grep <device_name>
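
    If the full deviceName string does not appear in the guest's lspci output, searching for the vendor name instead can also confirm that the device is present; for example:

    $ lspci -nnk | grep -i nvidia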

Assigning a vGPU to a VM by using the web console

You can assign virtual GPUs to virtual machines by using the OKD web console.

You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems.

Prerequisites

  • The vGPU is configured as a mediated device in your cluster.

    • To view the devices that are connected to your cluster, click Compute → Hardware Devices from the side menu.
  • The VM is stopped.

Procedure

  1. In the OKD web console, click Virtualization → VirtualMachines from the side menu.

  2. Select the VM that you want to assign the device to.

  3. On the Details tab, click GPU devices.

  4. Click Add GPU device.

  5. Enter an identifying value in the Name field.

  6. From the Device name list, select the device that you want to add to the VM.

  7. Click Save.

Verification

  • To confirm that the devices were added to the VM, click the YAML tab and review the VirtualMachine configuration. Mediated devices are added to the spec.template.spec.domain.devices stanza.

Additional resources