Configuring PXE booting for virtual machines

PXE booting, or network booting, is available in OKD Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.

Prerequisites

  • A Linux bridge must be connected.

  • The PXE server must be connected to the same VLAN as the bridge.

PXE booting with a specified MAC address

As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.

Prerequisites

  • A Linux bridge must be connected.

  • The PXE server must be connected to the same VLAN as the bridge.

Procedure

  1. Configure a PXE network on the cluster:

    1. Create the network attachment definition file for PXE network pxe-net-conf:

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: pxe-net-conf
      spec:
        config: '{
          "cniVersion": "0.3.1",
          "name": "pxe-net-conf",
          "plugins": [
            {
              "type": "cnv-bridge",
              "bridge": "br1",
              "vlan": 1 (1)
            },
            {
              "type": "cnv-tuning" (2)
            }
          ]
        }'
      (1) Optional: The VLAN tag.
      (2) The cnv-tuning plugin provides support for custom MAC addresses.

      The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN.

  2. Create the network attachment definition by using the file you created in the previous step:

    $ oc create -f pxe-net-conf.yaml
  3. Edit the virtual machine instance configuration file to include the details of the interface and network.

    1. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.

      Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net>:

      interfaces:
      - masquerade: {}
        name: default
      - bridge: {}
        name: pxe-net
        macAddress: de:00:00:00:00:de
        bootOrder: 1

      Boot order is global for interfaces and disks.

    2. Assign a boot device number to the disk to ensure proper booting after operating system provisioning.

      Set the disk bootOrder value to 2:

      devices:
        disks:
        - disk:
            bus: virtio
          name: containerdisk
          bootOrder: 2
    3. Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf>:

      networks:
      - name: default
        pod: {}
      - name: pxe-net
        multus:
          networkName: pxe-net-conf
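
      For reference, a complete vmi-pxe-boot.yaml manifest that combines the interface, disk, and network snippets from this step might look like the following. This is a minimal sketch rather than a definitive configuration: the memory request and the containerdisk image are illustrative values that you would replace with your own.

      apiVersion: kubevirt.io/v1
      kind: VirtualMachineInstance
      metadata:
        name: vmi-pxe-boot
      spec:
        domain:
          devices:
            disks:
            - disk:
                bus: virtio
              name: containerdisk
              bootOrder: 2
            interfaces:
            - masquerade: {}
              name: default
            - bridge: {}
              name: pxe-net
              macAddress: de:00:00:00:00:de
              bootOrder: 1
          resources:
            requests:
              memory: 1Gi
        networks:
        - name: default
          pod: {}
        - name: pxe-net
          multus:
            networkName: pxe-net-conf
        volumes:
        - containerDisk:
            image: quay.io/kubevirt/fedora-cloud-container-disk-demo
          name: containerdisk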
  4. Create the virtual machine instance:

    $ oc create -f vmi-pxe-boot.yaml

Example output

  virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
  5. Wait for the virtual machine instance to run:

    $ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
      phase: Running
  6. View the virtual machine instance using VNC:

    $ virtctl vnc vmi-pxe-boot
  7. Watch the boot screen to verify that the PXE boot is successful.

  8. Log in to the virtual machine instance:

    $ virtctl console vmi-pxe-boot
  9. Verify the interfaces and MAC address on the virtual machine, and confirm that the interface connected to the bridge has the specified MAC address. In this example, eth1 was used for the PXE boot, without an IP address. The other interface, eth0, received an IP address from OKD.

    $ ip addr

Example output

  ...
  3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
     link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff

OKD Virtualization networking glossary

The following terms are used throughout OKD Virtualization documentation:

Container Network Interface (CNI)

A Cloud Native Computing Foundation project, focused on container network connectivity. OKD Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.

Multus

A “meta” CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
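
For example, a pod can request an additional interface by listing a network attachment definition in its k8s.v1.cni.cncf.io/networks annotation. The following is a minimal sketch that assumes the pxe-net-conf network attachment definition from the procedure above exists in the pod's namespace; the pod name and container image are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: multus-example
    annotations:
      k8s.v1.cni.cncf.io/networks: pxe-net-conf
  spec:
    containers:
    - name: example
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["sleep", "infinity"]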

Custom resource definition (CRD)

A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.

Network attachment definition (NAD)

A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.

Node network configuration policy (NNCP)

A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
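
For example, an NNCP that adds a Linux bridge such as br1 (the bridge referenced in the procedure above) to worker nodes might look like the following. This is a minimal sketch: the policy name, the eth1 port, and the node selector label are illustrative and depend on your environment.

  apiVersion: nmstate.io/v1
  kind: NodeNetworkConfigurationPolicy
  metadata:
    name: br1-policy
  spec:
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    desiredState:
      interfaces:
      - name: br1
        type: linux-bridge
        state: up
        bridge:
          options:
            stp:
              enabled: false
          port:
          - name: eth1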