Using virtual functions (VFs) with DPDK and RDMA modes

You can use Single Root I/O Virtualization (SR-IOV) network hardware with the Data Plane Development Kit (DPDK) and with remote direct memory access (RDMA).

The Data Plane Development Kit (DPDK) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Using a virtual function in DPDK mode with an Intel NIC

Prerequisites

  • Install the OpenShift CLI (oc).

  • Install the SR-IOV Network Operator.

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create the following SriovNetworkNodePolicy object, and then save the YAML in the intel-dpdk-node-policy.yaml file.

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: intel-dpdk-node-policy
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: intelnics
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"
      priority: <priority>
      numVfs: <num>
      nicSelector:
        vendor: "8086"
        deviceID: "158b"
        pfNames: ["<pf_name>", ...]
        rootDevices: ["<pci_bus_id>", "..."]
      deviceType: vfio-pci (1)
    (1) Specify vfio-pci as the driver type for the virtual functions.

    See the “Configuring SR-IOV network devices” section for a detailed explanation of each option in SriovNetworkNodePolicy.

    When you apply the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain and, in some cases, reboot nodes. A configuration change might take several minutes to apply. Ensure beforehand that your cluster has enough available nodes to handle the evicted workloads.

    After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace change to the Running status.

  2. Create the SriovNetworkNodePolicy object by running the following command:

    $ oc create -f intel-dpdk-node-policy.yaml
  3. Create the following SriovNetwork object, and then save the YAML in the intel-dpdk-network.yaml file.

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetwork
    metadata:
      name: intel-dpdk-network
      namespace: openshift-sriov-network-operator
    spec:
      networkNamespace: <target_namespace>
      ipam: "{}" (1)
      vlan: <vlan>
      resourceName: intelnics
    (1) Specify an empty object "{}" for the ipam CNI plug-in. DPDK works in userspace mode and does not require an IP address.

    See the “Configuring SR-IOV additional network” section for a detailed explanation of each option in SriovNetwork.

  4. Create the SriovNetwork object by running the following command:

    $ oc create -f intel-dpdk-network.yaml
  5. Create the following Pod spec, and then save the YAML in the intel-dpdk-pod.yaml file.

    apiVersion: v1
    kind: Pod
    metadata:
      name: dpdk-app
      namespace: <target_namespace> (1)
      annotations:
        k8s.v1.cni.cncf.io/networks: intel-dpdk-network
    spec:
      containers:
      - name: testpmd
        image: <DPDK_image> (2)
        securityContext:
          runAsUser: 0
          capabilities:
            add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] (3)
        volumeMounts:
        - mountPath: /dev/hugepages (4)
          name: hugepage
        resources:
          limits:
            openshift.io/intelnics: "1" (5)
            memory: "1Gi"
            cpu: "4" (6)
            hugepages-1Gi: "4Gi" (7)
          requests:
            openshift.io/intelnics: "1"
            memory: "1Gi"
            cpu: "4"
            hugepages-1Gi: "4Gi"
        command: ["sleep", "infinity"]
      volumes:
      - name: hugepage
        emptyDir:
          medium: HugePages
    (1) Specify the same target_namespace where the SriovNetwork object intel-dpdk-network is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
    (2) Specify the DPDK image, which includes your application and the DPDK library that the application uses.
    (3) Specify the additional capabilities that the application inside the container requires for hugepage allocation, system resource allocation, and network interface access.
    (4) Mount a hugepage volume to the DPDK pod under /dev/hugepages. The hugepage volume is backed by the emptyDir volume type with the medium Hugepages.
    (5) Optional: Specify the number of DPDK devices allocated to the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR.
    (6) Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating a pod with Guaranteed QoS.
    (7) Specify the hugepage size, hugepages-1Gi or hugepages-2Mi, and the quantity of hugepages to allocate to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to nodes. For example, adding the kernel arguments default_hugepagesz=1GB, hugepagesz=1G, and hugepages=16 results in 16*1Gi hugepages being allocated during system boot.
  6. Create the DPDK pod by running the following command:

    $ oc create -f intel-dpdk-pod.yaml
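
The procedure above notes that 1Gi hugepages require kernel arguments on the nodes. On OKD, one way to add those arguments is a MachineConfig object. The following is a minimal sketch, not a definitive configuration: the object name, the worker role, and the hugepage count are assumptions to adjust for your cluster.

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  # Hypothetical name; the 50- prefix controls ordering among MachineConfigs.
  name: 50-worker-hugepages-1g
  labels:
    # Targets nodes with the worker role; change this for a custom role.
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - default_hugepagesz=1G
    - hugepagesz=1G
    - hugepages=16
```

Applying this object causes the Machine Config Operator to roll the change out to the affected nodes, rebooting them so that the hugepages are allocated at boot.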

Using a virtual function in DPDK mode with a Mellanox NIC

Prerequisites

  • Install the OpenShift CLI (oc).

  • Install the SR-IOV Network Operator.

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-dpdk-node-policy.yaml file.

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: mlx-dpdk-node-policy
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: mlxnics
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"
      priority: <priority>
      numVfs: <num>
      nicSelector:
        vendor: "15b3"
        deviceID: "1015" (1)
        pfNames: ["<pf_name>", ...]
        rootDevices: ["<pci_bus_id>", "..."]
      deviceType: netdevice (2)
      isRdma: true (3)
    (1) Specify the device hex code of the SR-IOV network device. The only allowed values for Mellanox cards are 1015 and 1017.
    (2) Specify netdevice as the driver type for the virtual functions. A Mellanox SR-IOV VF can work in DPDK mode without using the vfio-pci device type. The VF device appears as a kernel network interface inside a container.
    (3) Enable RDMA mode. This is required by Mellanox cards to work in DPDK mode.

    See the “Configuring SR-IOV network devices” section for a detailed explanation of each option in SriovNetworkNodePolicy.

    When you apply the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain and, in some cases, reboot nodes. A configuration change might take several minutes to apply. Ensure beforehand that your cluster has enough available nodes to handle the evicted workloads.

    After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status.

  2. Create the SriovNetworkNodePolicy object by running the following command:

    $ oc create -f mlx-dpdk-node-policy.yaml
  3. Create the following SriovNetwork object, and then save the YAML in the mlx-dpdk-network.yaml file.

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetwork
    metadata:
      name: mlx-dpdk-network
      namespace: openshift-sriov-network-operator
    spec:
      networkNamespace: <target_namespace>
      ipam: |- (1)
        ...
      vlan: <vlan>
      resourceName: mlxnics
    (1) Specify a configuration object for the ipam CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.

    See the “Configuring SR-IOV additional network” section for a detailed explanation of each option in SriovNetwork.

  4. Create the SriovNetwork object by running the following command:

    $ oc create -f mlx-dpdk-network.yaml
  5. Create the following Pod spec, and then save the YAML in the mlx-dpdk-pod.yaml file.

    apiVersion: v1
    kind: Pod
    metadata:
      name: dpdk-app
      namespace: <target_namespace> (1)
      annotations:
        k8s.v1.cni.cncf.io/networks: mlx-dpdk-network
    spec:
      containers:
      - name: testpmd
        image: <DPDK_image> (2)
        securityContext:
          runAsUser: 0
          capabilities:
            add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] (3)
        volumeMounts:
        - mountPath: /dev/hugepages (4)
          name: hugepage
        resources:
          limits:
            openshift.io/mlxnics: "1" (5)
            memory: "1Gi"
            cpu: "4" (6)
            hugepages-1Gi: "4Gi" (7)
          requests:
            openshift.io/mlxnics: "1"
            memory: "1Gi"
            cpu: "4"
            hugepages-1Gi: "4Gi"
        command: ["sleep", "infinity"]
      volumes:
      - name: hugepage
        emptyDir:
          medium: HugePages
    (1) Specify the same target_namespace where the SriovNetwork object mlx-dpdk-network is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
    (2) Specify the DPDK image, which includes your application and the DPDK library that the application uses.
    (3) Specify the additional capabilities that the application inside the container requires for hugepage allocation, system resource allocation, and network interface access.
    (4) Mount the hugepage volume to the DPDK pod under /dev/hugepages. The hugepage volume is backed by the emptyDir volume type with the medium Hugepages.
    (5) Optional: Specify the number of DPDK devices allocated to the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR.
    (6) Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating a pod with Guaranteed QoS.
    (7) Specify the hugepage size, hugepages-1Gi or hugepages-2Mi, and the quantity of hugepages to allocate to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to nodes.
  6. Create the DPDK pod by running the following command:

    $ oc create -f mlx-dpdk-pod.yaml
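
The callouts above mention that the SR-IOV network resource injector can be disabled through the default SriovOperatorConfig CR. The following is a minimal sketch of that change, assuming the Operator is installed in the openshift-sriov-network-operator namespace as in the examples above:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  # The Operator acts only on the CR named "default".
  name: default
  namespace: openshift-sriov-network-operator
spec:
  # Disable automatic injection of SR-IOV resource requests and limits.
  enableInjector: false
```

With the injector disabled, you must set the openshift.io/mlxnics resource request and limit in the Pod spec explicitly.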

Using a virtual function in RDMA mode with a Mellanox NIC

RDMA over Converged Ethernet (RoCE) is the only supported mode when using RDMA on OKD.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Install the SR-IOV Network Operator.

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-rdma-node-policy.yaml file.

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: mlx-rdma-node-policy
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: mlxnics
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"
      priority: <priority>
      numVfs: <num>
      nicSelector:
        vendor: "15b3"
        deviceID: "1015" (1)
        pfNames: ["<pf_name>", ...]
        rootDevices: ["<pci_bus_id>", "..."]
      deviceType: netdevice (2)
      isRdma: true (3)
    (1) Specify the device hex code of the SR-IOV network device. The only allowed values for Mellanox cards are 1015 and 1017.
    (2) Specify netdevice as the driver type for the virtual functions.
    (3) Enable RDMA mode.

    See the “Configuring SR-IOV network devices” section for a detailed explanation of each option in SriovNetworkNodePolicy.

    When you apply the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain and, in some cases, reboot nodes. A configuration change might take several minutes to apply. Ensure beforehand that your cluster has enough available nodes to handle the evicted workloads.

    After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status.

  2. Create the SriovNetworkNodePolicy object by running the following command:

    $ oc create -f mlx-rdma-node-policy.yaml
  3. Create the following SriovNetwork object, and then save the YAML in the mlx-rdma-network.yaml file.

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetwork
    metadata:
      name: mlx-rdma-network
      namespace: openshift-sriov-network-operator
    spec:
      networkNamespace: <target_namespace>
      ipam: |- (1)
        ...
      vlan: <vlan>
      resourceName: mlxnics
    (1) Specify a configuration object for the ipam CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.

    See the “Configuring SR-IOV additional network” section for a detailed explanation of each option in SriovNetwork.

  4. Create the SriovNetwork object by running the following command:

    $ oc create -f mlx-rdma-network.yaml
  5. Create the following Pod spec, and then save the YAML in the mlx-rdma-pod.yaml file.

    apiVersion: v1
    kind: Pod
    metadata:
      name: rdma-app
      namespace: <target_namespace> (1)
      annotations:
        k8s.v1.cni.cncf.io/networks: mlx-rdma-network
    spec:
      containers:
      - name: testpmd
        image: <RDMA_image> (2)
        securityContext:
          runAsUser: 0
          capabilities:
            add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] (3)
        volumeMounts:
        - mountPath: /dev/hugepages (4)
          name: hugepage
        resources:
          limits:
            memory: "1Gi"
            cpu: "4" (5)
            hugepages-1Gi: "4Gi" (6)
          requests:
            memory: "1Gi"
            cpu: "4"
            hugepages-1Gi: "4Gi"
        command: ["sleep", "infinity"]
      volumes:
      - name: hugepage
        emptyDir:
          medium: HugePages
    (1) Specify the same target_namespace where the SriovNetwork object mlx-rdma-network is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
    (2) Specify the RDMA image, which includes your application and the RDMA library that the application uses.
    (3) Specify the additional capabilities that the application inside the container requires for hugepage allocation, system resource allocation, and network interface access.
    (4) Mount the hugepage volume to the RDMA pod under /dev/hugepages. The hugepage volume is backed by the emptyDir volume type with the medium Hugepages.
    (5) Specify the number of CPUs. The RDMA pod usually requires exclusive CPUs allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating a pod with Guaranteed QoS.
    (6) Specify the hugepage size, hugepages-1Gi or hugepages-2Mi, and the quantity of hugepages to allocate to the RDMA pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to nodes.
  6. Create the RDMA pod by running the following command:

    $ oc create -f mlx-rdma-pod.yaml
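
The SriovNetwork objects in the Mellanox examples elide the body of the ipam block scalar. As an illustration only, a static host-local configuration might look like the following; the subnet, route, and gateway values are placeholders to replace with values for your network:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: mlx-rdma-network
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: <target_namespace>
  # Example host-local IPAM configuration; all addresses are placeholders.
  ipam: |-
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "routes": [{ "dst": "0.0.0.0/0" }],
      "gateway": "10.56.217.1"
    }
  vlan: <vlan>
  resourceName: mlxnics
```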