Postinstallation network configuration

By default, OKD Virtualization is installed with a single, internal pod network.

After you install OKD Virtualization, you can install networking Operators and configure additional networks.

Installing networking Operators

You must install the Kubernetes NMState Operator to configure a Linux bridge network for live migration or external access to virtual machines (VMs).

You can install the SR-IOV Network Operator to manage SR-IOV network devices and network attachments.

Installing the Kubernetes NMState Operator by using the web console

You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.

Prerequisites

  • You are logged in as a user with cluster-admin privileges.

Procedure

  1. Select Operators > OperatorHub.

  2. In the search field below All Items, enter nmstate and press Enter to search for the Kubernetes NMState Operator.

  3. Click on the Kubernetes NMState Operator search result.

  4. Click on Install to open the Install Operator window.

  5. Click Install to install the Operator.

  6. After the Operator finishes installing, click View Operator.

  7. Under Provided APIs, click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate.

  8. In the Name field of the dialog box, ensure the name of the instance is nmstate.

    The name restriction is a known issue. The instance is a singleton for the entire cluster.

  9. Accept the default settings and click Create to create the instance.
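
If you prefer the CLI, the console steps above are equivalent to applying a minimal NMState custom resource, as in the following sketch. The only fixed value is the name, which must be nmstate; you can apply the manifest with oc apply -f:

    apiVersion: nmstate.io/v1
    kind: NMState
    metadata:
      name: nmstate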

Summary

When the installation is complete, the Operator deploys the NMState State Controller as a daemon set across all of the cluster nodes.

Installing the SR-IOV Network Operator

As a cluster administrator, you can install the Single Root I/O Virtualization (SR-IOV) Network Operator by using the OKD CLI or the web console.

CLI: Installing the SR-IOV Network Operator

As a cluster administrator, you can install the Operator using the CLI.

Prerequisites

  • A cluster installed on bare-metal hardware with nodes that support SR-IOV.

  • Install the OpenShift CLI (oc).

  • An account with cluster-admin privileges.

Procedure

  1. To create the openshift-sriov-network-operator namespace, enter the following command:

    $ cat << EOF | oc create -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-sriov-network-operator
      annotations:
        workload.openshift.io/allowed: management
    EOF
  2. To create an OperatorGroup CR, enter the following command:

    $ cat << EOF | oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: sriov-network-operators
      namespace: openshift-sriov-network-operator
    spec:
      targetNamespaces:
      - openshift-sriov-network-operator
    EOF
  3. Subscribe to the SR-IOV Network Operator.

    1. Run the following command to get the OKD major and minor version. It is required for the channel value in the next step.

      $ OC_VERSION=$(oc version -o yaml | grep openshiftVersion | \
          grep -o '[0-9]*[.][0-9]*' | head -1)
    2. To create a Subscription CR for the SR-IOV Network Operator, enter the following command:

      $ cat << EOF | oc create -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: sriov-network-operator-subscription
        namespace: openshift-sriov-network-operator
      spec:
        channel: "${OC_VERSION}"
        name: sriov-network-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
  4. To verify that the Operator is installed, enter the following command:

    $ oc get csv -n openshift-sriov-network-operator \
        -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                                      Phase
    sriov-network-operator.4.0-202310121402   Succeeded

Web console: Installing the SR-IOV Network Operator

As a cluster administrator, you can install the Operator using the web console.

Prerequisites

  • A cluster installed on bare-metal hardware with nodes that support SR-IOV.

  • Install the OpenShift CLI (oc).

  • An account with cluster-admin privileges.

Procedure

  1. Install the SR-IOV Network Operator:

    1. In the OKD web console, click Operators > OperatorHub.

    2. Select SR-IOV Network Operator from the list of available Operators, and then click Install.

    3. On the Install Operator page, under Installed Namespace, select Operator recommended Namespace.

    4. Click Install.

  2. Verify that the SR-IOV Network Operator is installed successfully:

    1. Navigate to the Operators > Installed Operators page.

    2. Ensure that SR-IOV Network Operator is listed in the openshift-sriov-network-operator project with a Status of InstallSucceeded.

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not appear as installed, troubleshoot further:

      • Inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.

      • Navigate to the Workloads > Pods page and check the logs for pods in the openshift-sriov-network-operator project.

      • Check the namespace of the YAML file. If the annotation is missing, you can add the annotation workload.openshift.io/allowed=management to the Operator namespace with the following command:

        $ oc annotate ns/openshift-sriov-network-operator workload.openshift.io/allowed=management

        For single-node OpenShift clusters, the annotation workload.openshift.io/allowed=management is required for the namespace.

Configuring a Linux bridge network

After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs).

Creating a Linux bridge NNCP

You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network.

Prerequisites

  • You have installed the Kubernetes NMState Operator.

Procedure

  • Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information.

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br1-eth1-policy (1)
    spec:
      desiredState:
        interfaces:
          - name: br1 (2)
            description: Linux bridge with eth1 as a port (3)
            type: linux-bridge (4)
            state: up (5)
            ipv4:
              enabled: false (6)
            bridge:
              options:
                stp:
                  enabled: false (7)
              port:
                - name: eth1 (8)

    (1) Name of the policy.
    (2) Name of the interface.
    (3) Optional: Human-readable description of the interface.
    (4) The type of interface. This example creates a bridge.
    (5) The requested state for the interface after creation.
    (6) Disables IPv4 in this example.
    (7) Disables STP in this example.
    (8) The node NIC to which the bridge is attached.
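
    After you create the manifest, you can apply it and check its status. The file name br1-eth1-policy.yaml is only an example:

    $ oc apply -f br1-eth1-policy.yaml
    $ oc get nncp br1-eth1-policy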

Creating a Linux bridge NAD by using the web console

You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OKD web console.

A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.

Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.

Procedure

  1. In the web console, click Networking > NetworkAttachmentDefinitions.

  2. Click Create Network Attachment Definition.

    The network attachment definition must be in the same namespace as the pod or virtual machine.

  3. Enter a unique Name and optional Description.

  4. Select CNV Linux bridge from the Network Type list.

  5. Enter the name of the bridge in the Bridge Name field.

  6. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.

  7. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.

  8. Click Create.
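
For reference, the console creates a NetworkAttachmentDefinition object roughly equivalent to the following sketch. The name bridge-network and the bridge br1 are illustrative values, and macspoofchk reflects the MAC Spoof Check setting:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: bridge-network
      annotations:
        k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "bridge-network",
        "type": "cnv-bridge",
        "bridge": "br1",
        "macspoofchk": true
      }'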

Next steps

Configuring a network for live migration

After you have configured a Linux bridge network, you can configure a dedicated network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.

Configuring a dedicated secondary network for live migration

To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).

Prerequisites

  • You installed the OpenShift CLI (oc).

  • You logged in to the cluster as a user with the cluster-admin role.

  • Each node has at least two Network Interface Cards (NICs).

  • The NICs for live migration are connected to the same VLAN.

Procedure

  1. Create a NetworkAttachmentDefinition manifest according to the following example:

    Example configuration file

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: my-secondary-network (1)
      namespace: kubevirt-hyperconverged
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "migration-bridge",
        "type": "macvlan",
        "master": "eth1", (2)
        "mode": "bridge",
        "ipam": {
          "type": "whereabouts", (3)
          "range": "10.200.5.0/24" (4)
        }
      }'

    (1) Specify the name of the NetworkAttachmentDefinition object.
    (2) Specify the name of the NIC to be used for live migration.
    (3) Specify the name of the CNI plugin that provides the network for the NAD.
    (4) Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
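
    Assuming you saved the manifest to a file such as my-secondary-network.yaml, create the object by running the following command:

    $ oc create -f my-secondary-network.yaml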
  2. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
  3. Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:

    Example HyperConverged manifest

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      liveMigrationConfig:
        completionTimeoutPerGiB: 800
        network: <network> (1)
        parallelMigrationsPerCluster: 5
        parallelOutboundMigrationsPerNode: 2
        progressTimeout: 150
    # ...

    (1) Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
  4. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.

Verification

  • When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.

    $ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
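
    If the migration used the dedicated network, the reported address falls within the range configured in the NAD. For example, with the 10.200.5.0/24 range used earlier, the output might look like the following illustrative value:

    10.200.5.14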

Selecting a dedicated network by using the web console

You can select a dedicated network for live migration by using the OKD web console.

Prerequisites

  • You configured a Multus network for live migration.

Procedure

  1. Navigate to Virtualization > Overview in the OKD web console.

  2. Click the Settings tab and then click Live migration.

  3. Select the network from the Live migration network list.

Configuring an SR-IOV network

After you install the SR-IOV Operator, you can configure an SR-IOV network.

Configuring SR-IOV network devices

The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OKD. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).

When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes.

It might take several minutes for a configuration change to apply.

Prerequisites

  • You installed the OpenShift CLI (oc).

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the SR-IOV Network Operator.

  • You have enough available nodes in your cluster to handle the evicted workload from drained nodes.

  • You have not selected any control plane nodes for SR-IOV network device configuration.

Procedure

  1. Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration.

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: <name> (1)
      namespace: openshift-sriov-network-operator (2)
    spec:
      resourceName: <sriov_resource_name> (3)
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true" (4)
      priority: <priority> (5)
      mtu: <mtu> (6)
      numVfs: <num> (7)
      nicSelector: (8)
        vendor: "<vendor_code>" (9)
        deviceID: "<device_id>" (10)
        pfNames: ["<pf_name>", ...] (11)
        rootDevices: ["<pci_bus_id>", "..."] (12)
      deviceType: vfio-pci (13)
      isRdma: false (14)

    (1) Specify a name for the CR object.
    (2) Specify the namespace where the SR-IOV Operator is installed.
    (3) Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name.
    (4) Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
    (5) Optional: Specify an integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
    (6) Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
    (7) Specify the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 128.
    (8) The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to the same device.
    (9) Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are 8086 and 15b3.
    (10) Optional: Specify the device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
    (11) Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device.
    (12) The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1.
    (13) The vfio-pci driver type is required for virtual functions in OKD Virtualization.
    (14) Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false. The default value is false.

    If the isRdma flag is set to true, you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode.

  2. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see “Understanding how to update labels on nodes”.
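
    For example, assuming the nodeSelector label from the sample policy above, you can label a node by running the following command, where <node_name> is a placeholder:

    $ oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true"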

  3. Create the SriovNetworkNodePolicy object:

    $ oc create -f <name>-sriov-node-network.yaml

    where <name> specifies the name for this configuration.

    After applying the configuration update, all the pods in the openshift-sriov-network-operator namespace transition to the Running status.

  4. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured.

    $ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
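
    Example output (the device is configured when the sync status reports success):

    Succeeded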

Next steps

Enabling load balancer service creation by using the web console

You can enable the creation of load balancer services for a virtual machine (VM) by using the OKD web console.

Prerequisites

  • You have configured a load balancer for the cluster.

  • You are logged in as a user with the cluster-admin role.

Procedure

  1. Navigate to Virtualization > Overview.

  2. On the Settings tab, click Cluster.

  3. Expand General settings and SSH configuration.

  4. Set SSH over LoadBalancer service to on.
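
With the setting enabled, the web console can create a LoadBalancer service when you expose SSH access to a VM. As an illustrative CLI alternative, you can expose a running VM through a load balancer service with virtctl; the VM name and service name below are placeholders:

    $ virtctl expose vm <vm_name> --name=<service_name> --type=LoadBalancer --port=22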