Configuring PCI passthrough

The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine. When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system.

Cluster administrators can use the oc command-line interface (CLI) to expose and manage the host devices that are permitted for use in the cluster.

About preparing a host device for PCI passthrough

To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OKD Virtualization Operator.
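Because the permittedHostDevices list starts out empty, it can be useful to check its current contents before and after you edit it. A minimal check, assuming the default kubevirt-hyperconverged resource name and openshift-cnv namespace used elsewhere in this section:

    $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.permittedHostDevices}'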

To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR.

Adding kernel arguments to enable the IOMMU driver

To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create a MachineConfig object and add the kernel arguments.

Prerequisites

  • Administrative privileges for a working OKD cluster.

  • Intel or AMD CPU hardware.

  • Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled.
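
You can optionally confirm from a running node that the firmware exposes an IOMMU before you add the kernel arguments. A minimal sketch, using a privileged debug pod and a placeholder node name:

    $ oc debug node/<node_name> -- chroot /host dmesg | grep -i -e DMAR -e IOMMU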

Procedure

  1. Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker (1)
      name: 100-worker-iommu (2)
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
        - intel_iommu=on (3)
    ...
    (1) Applies the new kernel argument only to worker nodes.
    (2) The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose.
    (3) Identifies the kernel argument as intel_iommu=on for an Intel CPU. If you have an AMD CPU, specify the kernel argument as amd_iommu=on.
  2. Create the new MachineConfig object:

    $ oc create -f 100-worker-kernel-arg-iommu.yaml

Verification

  • Verify that the new MachineConfig object was added.

    $ oc get MachineConfig
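
    After the Machine Config Operator applies the change and reboots the affected worker nodes, you can also confirm that the kernel argument is active on a node. A minimal check, using a placeholder node name:

    $ oc debug node/<node_name> -- cat /proc/cmdline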

Binding PCI devices to the VFIO driver

To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID and device-ID from each device and create a list with the values. Add this list to the MachineConfig object. The Machine Config Operator generates the /etc/modprobe.d/vfio.conf file on the nodes that have the PCI devices, and binds the PCI devices to the VFIO driver.

Prerequisites

  • You added kernel arguments to enable IOMMU for the CPU.

Procedure

  1. Run the lspci command to obtain the vendor-ID and the device-ID for the PCI device.

    $ lspci -nnv | grep -i nvidia

    Example output

    02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)
  2. Create a Butane config file, 100-worker-vfiopci.bu, binding the PCI device to the VFIO driver.

    See “Creating machine configs with Butane” for information about Butane.

    Example

    variant: openshift
    version: 4.9.0
    metadata:
      name: 100-worker-vfiopci
      labels:
        machineconfiguration.openshift.io/role: worker (1)
    storage:
      files:
      - path: /etc/modprobe.d/vfio.conf
        mode: 0644
        overwrite: true
        contents:
          inline: |
            options vfio-pci ids=10de:1eb8 (2)
      - path: /etc/modules-load.d/vfio-pci.conf (3)
        mode: 0644
        overwrite: true
        contents:
          inline: vfio-pci
    (1) Applies the new MachineConfig only to worker nodes.
    (2) Specify the previously determined vendor-ID value (10de) and the device-ID value (1eb8) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information.
    (3) The file that loads the vfio-pci kernel module on the worker nodes.
  3. Use Butane to generate a MachineConfig object file, 100-worker-vfiopci.yaml, containing the configuration to be delivered to the worker nodes:

    $ butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml
  4. Apply the MachineConfig object to the worker nodes:

    $ oc apply -f 100-worker-vfiopci.yaml
  5. Verify that the MachineConfig object was added.

    $ oc get MachineConfig

    Example output

    NAME                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
    00-master                          d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
    00-worker                          d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
    01-master-container-runtime        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
    01-master-kubelet                  d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
    01-worker-container-runtime        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
    01-worker-kubelet                  d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
    100-worker-iommu                                                              3.2.0             30s
    100-worker-vfiopci-configuration                                              3.2.0             30s
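
    Because the new MachineConfig object triggers a rolling reboot of the worker nodes, you might want to wait until the worker pool reports that it is updated before continuing. One way to watch this, assuming the standard worker MachineConfigPool is in use:

    $ oc get machineconfigpool worker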

Verification

  • Verify that the VFIO driver is loaded.

    $ lspci -nnk -d 10de:

    The output confirms that the VFIO driver is being used.

    Example output

    04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1)
            Subsystem: NVIDIA Corporation Device [10de:1eb8]
            Kernel driver in use: vfio-pci
            Kernel modules: nouveau
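
    You can also confirm that the generated /etc/modprobe.d/vfio.conf file on a worker node contains the expected device IDs. A minimal check, using a placeholder node name:

    $ oc debug node/<node_name> -- chroot /host cat /etc/modprobe.d/vfio.conf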

Exposing PCI host devices in the cluster using the CLI

To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Add the PCI device information to the spec.permittedHostDevices.pciHostDevices array. For example:

    Example configuration file

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      permittedHostDevices: (1)
        pciHostDevices: (2)
        - pciDeviceSelector: "10DE:1DB6" (3)
          resourceName: "nvidia.com/GV100GL_Tesla_V100" (4)
        - pciDeviceSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
        - pciDeviceSelector: "8086:6F54"
          resourceName: "intel.com/qat"
          externalResourceProvider: true (5)
    ...
    (1) The host devices that are permitted to be used in the cluster.
    (2) The list of PCI devices available on the node.
    (3) The vendor-ID and the device-ID required to identify the PCI device.
    (4) The name of a PCI host device.
    (5) Optional: Setting this field to true indicates that the resource is provided by an external device plug-in. OKD Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plug-in.

    The example snippet above shows two PCI host devices, named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4, added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OKD Virtualization. The third entry, intel.com/qat, sets externalResourceProvider to true, so its allocation and monitoring are handled by an external device plug-in.

  3. Save your changes and exit the editor.

Verification

  • Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the nvidia.com/GV100GL_Tesla_V100, nvidia.com/TU104GL_Tesla_T4, and intel.com/qat resource names.

    $ oc describe node <node_name>

    Example output

    Capacity:
      cpu:                            64
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              915128Mi
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         131395264Ki
      nvidia.com/GV100GL_Tesla_V100:  1
      nvidia.com/TU104GL_Tesla_T4:    1
      intel.com/qat:                  1
      pods:                           250
    Allocatable:
      cpu:                            63500m
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              863623130526
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         130244288Ki
      nvidia.com/GV100GL_Tesla_V100:  1
      nvidia.com/TU104GL_Tesla_T4:    1
      intel.com/qat:                  1
      pods:                           250
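
    If you prefer a narrower check than the full oc describe output, a JSONPath query such as the following prints only the allocatable resources for the node, again using a placeholder node name:

    $ oc get node <node_name> -o jsonpath='{.status.allocatable}'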

Removing PCI host devices from the cluster using the CLI

To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Remove the PCI device information from the spec.permittedHostDevices.pciHostDevices array by deleting the pciDeviceSelector, resourceName and externalResourceProvider (if applicable) fields for the appropriate device. In this example, the intel.com/qat resource has been deleted.

    Example configuration file

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      permittedHostDevices:
        pciHostDevices:
        - pciDeviceSelector: "10DE:1DB6"
          resourceName: "nvidia.com/GV100GL_Tesla_V100"
        - pciDeviceSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
    ...
  3. Save your changes and exit the editor.

Verification

  • Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the intel.com/qat resource name.

    $ oc describe node <node_name>

    Example output

    Capacity:
      cpu:                            64
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              915128Mi
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         131395264Ki
      nvidia.com/GV100GL_Tesla_V100:  1
      nvidia.com/TU104GL_Tesla_T4:    1
      intel.com/qat:                  0
      pods:                           250
    Allocatable:
      cpu:                            63500m
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              863623130526
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         130244288Ki
      nvidia.com/GV100GL_Tesla_V100:  1
      nvidia.com/TU104GL_Tesla_T4:    1
      intel.com/qat:                  0
      pods:                           250

Configuring virtual machines for PCI passthrough

After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are then available as if they were physically connected to the virtual machines.

Assigning a PCI device to a virtual machine

When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough.

Procedure

  • Assign the PCI device to a virtual machine as a host device.

    Example

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      domain:
        devices:
          hostDevices:
          - deviceName: nvidia.com/TU104GL_Tesla_T4 (1)
            name: hostdevices1
    (1) The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.
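
    If the virtual machine was already running when you added the host device, the updated specification typically takes effect the next time the virtual machine starts. One way to restart it, assuming the virtctl client is installed and <vm_name> is a placeholder:

    $ virtctl restart <vm_name>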

Verification

  • Use the following command to verify that the host device is available from the virtual machine.

    $ lspci -nnk | grep NVIDIA

    Example output

    02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)
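
    Note that this lspci command runs inside the guest operating system, not on the host. If you need a shell in the guest, one option, assuming the virtctl client is installed and <vm_name> is a placeholder, is to attach to the serial console:

    $ virtctl console <vm_name>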

Additional resources