Configuring mediated devices

OKD Virtualization automatically creates mediated devices, such as virtual GPUs (vGPUs), if you provide a list of devices in the HyperConverged custom resource (CR).

Prerequisites

  • If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices.

About using virtual GPUs with OKD Virtualization

Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OKD Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR). This automation is especially useful for large clusters.

Refer to your hardware vendor’s documentation for functionality and support details.

Mediated device

A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests.

Configuration overview

When configuring mediated devices, an administrator must:

  • Create the mediated devices.

  • Expose the mediated devices to the cluster.

The HyperConverged CR includes APIs that accomplish both tasks:

Creating mediated devices

    ...
    spec:
      mediatedDevicesConfiguration:
        mediatedDevicesTypes: (1)
        - <device_type>
        nodeMediatedDeviceTypes: (2)
        - mediatedDevicesTypes: (3)
          - <device_type>
          nodeSelector: (4)
            <node_selector_key>: <node_selector_value>
    ...
(1) Required: Configures global settings for the cluster.
(2) Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDevicesTypes configuration.
(3) Required if you use nodeMediatedDeviceTypes. Overrides the global mediatedDevicesTypes configuration for select nodes.
(4) Required if you use nodeMediatedDeviceTypes. Must include a key:value pair.

Exposing mediated devices to the cluster

    ...
    permittedHostDevices:
      mediatedDevices:
      - mdevNameSelector: GRID T4-2Q (1)
        resourceName: nvidia.com/GRID_T4-2Q
    ...
(1) Exposes the mediated devices that map to this value on the host.

You can see the mediated device types that your device supports by viewing the contents of /sys/bus/pci/devices/<domain>:<bus>:<slot>.<function>/mdev_supported_types/<type>/name, substituting the correct values for your system.

For example, the name file for the nvidia-231 type contains the selector string GRID T4-2Q. Using GRID T4-2Q as the mdevNameSelector value allows nodes to use the nvidia-231 type.
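You can script this lookup for every type a card exposes. The following is a minimal sketch that walks a device's mdev_supported_types directory and prints each type's selector string and remaining capacity. By default it builds a mock tree so the loop can run anywhere; the nvidia-231 entry and its values are illustrative assumptions. On a real node, point SYS_DEV at a PCI device directory instead.

```shell
#!/usr/bin/env bash
set -euo pipefail

# On a real node, set SYS_DEV to a PCI device directory, for example
# /sys/bus/pci/devices/0000:65:00.0 (the address is only an example).
# Otherwise a mock tree is built so the sketch runs anywhere; the type,
# selector string, and instance count below are assumptions.
SYS_DEV="${SYS_DEV:-$(mktemp -d)}"
if [ ! -d "$SYS_DEV/mdev_supported_types" ]; then
  mkdir -p "$SYS_DEV/mdev_supported_types/nvidia-231"
  echo "GRID T4-2Q" > "$SYS_DEV/mdev_supported_types/nvidia-231/name"
  echo 8 > "$SYS_DEV/mdev_supported_types/nvidia-231/available_instances"
fi

# Print each supported mdev type with its selector string and free capacity.
list_mdev_types() {
  local dev="$1" type_dir
  for type_dir in "$dev"/mdev_supported_types/*/; do
    printf '%s  name=%s  available_instances=%s\n' \
      "$(basename "$type_dir")" \
      "$(cat "$type_dir/name")" \
      "$(cat "$type_dir/available_instances")"
  done
}

list_mdev_types "$SYS_DEV"
```

The name file supplies the string you would use as the mdevNameSelector value; available_instances shows how many more devices of that type the card can still create.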

How vGPUs are assigned to nodes

For each physical device, OKD Virtualization configures:

  • A single mdev type.

  • The maximum number of instances of the selected mdev type.

The cluster architecture affects how devices are created and assigned to nodes.

Large cluster with multiple cards per node

On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example:

    ...
    mediatedDevicesConfiguration:
      mediatedDevicesTypes:
      - nvidia-222
      - nvidia-228
      - nvidia-105
      - nvidia-108
    ...

In this scenario, each node has two cards, both of which support the following vGPU types:

    nvidia-105
    ...
    nvidia-108
    nvidia-217
    nvidia-299
    ...

On each node, OKD Virtualization creates:

  • 16 vGPUs of type nvidia-105 on the first card.

  • 2 vGPUs of type nvidia-108 on the second card.
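The round-robin selection order can be sketched as a small script. This is only an illustration of the documented behavior, not the operator's actual implementation; the card names and the supported-type list are assumptions that mirror the example above, and instance counts are not modeled.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Walk the configured mediatedDevicesTypes list in order and hand each type
# the hardware supports to the next card, wrapping round-robin.
# $1: space-separated card names; $2: space-separated supported types;
# remaining arguments: configured types, in mediatedDevicesTypes order.
assign_types() {
  local -a cards=($1)
  local supported=" $2 " t
  local i=0
  shift 2
  for t in "$@"; do
    if [[ "$supported" == *" $t "* ]]; then
      echo "${cards[i]} -> $t"
      i=$(( (i + 1) % ${#cards[@]} ))
    fi
  done
}

# Two cards per node, both supporting the same types, as in the example above.
assign_types "card1 card2" "nvidia-105 nvidia-108 nvidia-217 nvidia-299" \
  nvidia-222 nvidia-228 nvidia-105 nvidia-108
```

The unsupported nvidia-222 and nvidia-228 entries are skipped, so the first card receives nvidia-105 and the second receives nvidia-108, matching the result described above.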

One node has a single card that supports more than one requested vGPU type

OKD Virtualization uses the supported type that comes first on the mediatedDevicesTypes list.

For example, a node’s card supports nvidia-223 and nvidia-224. The following mediatedDevicesTypes list is configured:

    ...
    mediatedDevicesConfiguration:
      mediatedDevicesTypes:
      - nvidia-22
      - nvidia-223
      - nvidia-224
    ...

In this example, OKD Virtualization uses the nvidia-223 type.
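This first-match rule can be sketched as a short helper. Again, this is not the operator's code; the type lists mirror the example above.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Return the first entry of the configured mediatedDevicesTypes list that the
# card supports. $1: space-separated types the card supports; remaining
# arguments: configured types, in mediatedDevicesTypes list order.
first_supported_type() {
  local supported=" $1 " t
  shift
  for t in "$@"; do
    if [[ "$supported" == *" $t "* ]]; then
      echo "$t"
      return 0
    fi
  done
  return 1
}

# The card from the example supports nvidia-223 and nvidia-224.
first_supported_type "nvidia-223 nvidia-224" nvidia-22 nvidia-223 nvidia-224
```

Because nvidia-22 is not supported by the card, the first supported entry in list order, nvidia-223, wins.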

About changing and removing mediated devices

OKD Virtualization updates the cluster’s mediated device configuration if:

  • You edit the HyperConverged CR and change the contents of the mediatedDevicesTypes stanza.

  • You change the node labels that match the nodeMediatedDeviceTypes node selector.

  • You remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR.

    If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas.

Depending on the specific changes, these actions cause OKD Virtualization to reconfigure mediated devices or remove them from the cluster nodes.

Preparing hosts for mediated devices

You must enable the IOMMU (Input-Output Memory Management Unit) driver before you can configure mediated devices.

Adding kernel arguments to enable the IOMMU driver

To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create a MachineConfig object that adds the required kernel arguments.

Prerequisites

  • Administrative privileges on a working OKD cluster.

  • Intel or AMD CPU hardware.

  • Intel Virtualization Technology for Directed I/O (VT-d) extensions or AMD IOMMU enabled in the BIOS (Basic Input/Output System).

Procedure

  1. Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        labels:
          machineconfiguration.openshift.io/role: worker (1)
        name: 100-worker-iommu (2)
      spec:
        config:
          ignition:
            version: 3.2.0
        kernelArguments:
        - intel_iommu=on (3)
      ...
    (1) Applies the new kernel argument only to worker nodes.
    (2) The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on.
    (3) Identifies the kernel argument as intel_iommu for an Intel CPU.
  2. Create the new MachineConfig object:

      $ oc create -f 100-worker-kernel-arg-iommu.yaml

Verification

  • Verify that the new MachineConfig object was added.

      $ oc get MachineConfig
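After the MachineConfig rolls out and the nodes reboot, the argument appears in each worker's /proc/cmdline. A minimal sketch of such a check follows; it runs against a sample string (an assumption for illustration) rather than a live node, where you would pass "$(cat /proc/cmdline)" instead.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Check a kernel command line string for either vendor's IOMMU argument.
# $1: the contents of /proc/cmdline
iommu_enabled_in_cmdline() {
  [[ "$1" == *intel_iommu=on* || "$1" == *amd_iommu=on* ]]
}

# Sample string for illustration only; on a node, pass "$(cat /proc/cmdline)".
sample="root=UUID=1234 ro quiet intel_iommu=on"
if iommu_enabled_in_cmdline "$sample"; then
  echo "IOMMU kernel argument present"
fi
```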

Adding and removing mediated devices

Creating and exposing mediated devices

You can create and expose mediated devices such as virtual GPUs (vGPUs) by editing the HyperConverged custom resource (CR).

Prerequisites

  • You enabled the IOMMU (Input-Output Memory Management Unit) driver.

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

      $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Add the mediated device information to the HyperConverged CR spec, ensuring that you include the mediatedDevicesConfiguration and permittedHostDevices stanzas. For example:

    Example configuration file

      apiVersion: hco.kubevirt.io/v1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
        mediatedDevicesConfiguration: (1)
          mediatedDevicesTypes: (2)
          - nvidia-231
          nodeMediatedDeviceTypes: (3)
          - mediatedDevicesTypes: (4)
            - nvidia-233
            nodeSelector:
              kubernetes.io/hostname: node-11.redhat.com
        permittedHostDevices: (5)
          mediatedDevices:
          - mdevNameSelector: GRID T4-2Q
            resourceName: nvidia.com/GRID_T4-2Q
          - mdevNameSelector: GRID T4-8Q
            resourceName: nvidia.com/GRID_T4-8Q
      ...
    (1) Creates mediated devices.
    (2) Required: Global mediatedDevicesTypes configuration.
    (3) Optional: Overrides the global configuration for specific nodes.
    (4) Required if you use nodeMediatedDeviceTypes.
    (5) Exposes mediated devices to the cluster.
  3. Save your changes and exit the editor.

Verification

  • You can verify that a device was added to a specific node by running the following command:

      $ oc describe node <node_name>

Removing mediated devices from the cluster using the CLI

To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

      $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example:

    Example configuration file

      apiVersion: hco.kubevirt.io/v1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
        mediatedDevicesConfiguration:
          mediatedDevicesTypes: (1)
          - nvidia-231
        permittedHostDevices:
          mediatedDevices: (2)
          - mdevNameSelector: GRID T4-2Q
            resourceName: nvidia.com/GRID_T4-2Q
    (1) To remove the nvidia-231 device type, delete it from the mediatedDevicesTypes array.
    (2) To remove the GRID T4-2Q device, delete the mdevNameSelector field and its corresponding resourceName field.
  3. Save your changes and exit the editor.

Assigning a mediated device to a virtual machine

Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines.

Prerequisites

  • The mediated device is configured in the HyperConverged custom resource.

Procedure

  • Assign the mediated device to a virtual machine (VM) by editing the spec.template.spec.domain.devices.gpus stanza of the VirtualMachine manifest:

    Example virtual machine manifest

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      spec:
        template:
          spec:
            domain:
              devices:
                gpus:
                - deviceName: nvidia.com/TU104GL_Tesla_T4 (1)
                  name: gpu1 (2)
                - deviceName: nvidia.com/GRID_T4-1Q
                  name: gpu2
    (1) The resource name associated with the mediated device.
    (2) A name to identify the device on the VM.

Verification

  • To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest:

      $ lspci -nnk | grep <device_name>

Additional resources