Using the default pod network for virtual machines

You can use the default pod network with OKD Virtualization. To do so, you must use the masquerade binding method. It is the only recommended binding method for use with the default pod network. Do not use masquerade mode with non-default networks.

For secondary networks, use the bridge binding method.

Configuring masquerade mode from the command line

You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.

Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.

Prerequisites

  • The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.

Procedure

  1. Edit the interfaces spec of your virtual machine configuration file:

    kind: VirtualMachine
    spec:
      domain:
        devices:
          interfaces:
            - name: red
              masquerade: {} (1)
              ports:
                - port: 80 (2)
      networks:
      - name: red
        pod: {}
    1 Connect using masquerade mode.
    2 Allow incoming traffic on port 80.
  2. Create the virtual machine:

    $ oc create -f <vm-name>.yaml

Configuring masquerade mode with dual-stack (IPv4 and IPv6)

You can configure a new virtual machine to use both IPv6 and IPv4 on the default pod network by using cloud-init.

The IPv6 network address must be statically set to fd10:0:2::2/120 with a default gateway of fd10:0:2::1 in the virtual machine configuration. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally.

When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.

Prerequisites

  • The OKD cluster must use the OVN-Kubernetes Container Network Interface (CNI) network provider configured for dual-stack.

Procedure

  1. In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init.

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm-ipv6
    ...
              interfaces:
                - name: red
                  masquerade: {} (1)
                  ports:
                    - port: 80 (2)
          networks:
          - name: red
            pod: {}
          volumes:
          - cloudInitNoCloud:
              networkData: |
                version: 2
                ethernets:
                  eth0:
                    dhcp4: true
                    addresses: [ fd10:0:2::2/120 ] (3)
                    gateway6: fd10:0:2::1 (4)
    1 Connect using masquerade mode.
    2 Allows incoming traffic on port 80 to the virtual machine.
    3 You must use the IPv6 address fd10:0:2::2/120.
    4 You must use the gateway fd10:0:2::1.
  2. Create the virtual machine in the namespace:

    $ oc create -f example-vm-ipv6.yaml

Verification

  • To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:

    $ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"

Selecting a binding method

If you create a virtual machine from the OKD Virtualization web console wizard, select the required binding method from the Networking screen.

Networking fields


Name

Name for the network interface controller.

Model

Indicates the model of the network interface controller. Supported values are e1000e and virtio.

Network

List of available network attachment definitions.

Type

List of available binding methods. For the default pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks. Select SR-IOV if you configured an SR-IOV network device and defined that network in the namespace.

MAC Address

MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically.

Virtual machine configuration examples for the default network

Template: Virtual machine configuration file

  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: example-vm
    namespace: default
  spec:
    running: false
    template:
      spec:
        domain:
          devices:
            disks:
              - name: containerdisk
                disk:
                  bus: virtio
              - name: cloudinitdisk
                disk:
                  bus: virtio
            interfaces:
            - masquerade: {}
              name: default
          resources:
            requests:
              memory: 1024M
        networks:
        - name: default
          pod: {}
        volumes:
          - name: containerdisk
            containerDisk:
              image: kubevirt/fedora-cloud-container-disk-demo
          - name: cloudinitdisk
            cloudInitNoCloud:
              userData: |
                #!/bin/bash
                echo "fedora" | passwd fedora --stdin

Template: Windows virtual machine configuration file

  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    labels:
      special: vm-windows
    name: vm-windows
  spec:
    template:
      metadata:
        labels:
          special: vm-windows
      spec:
        domain:
          clock:
            timer:
              hpet:
                present: false
              hyperv: {}
              pit:
                tickPolicy: delay
              rtc:
                tickPolicy: catchup
            utc: {}
          cpu:
            cores: 2
          devices:
            disks:
              - disk:
                  bus: sata
                name: pvcdisk
            interfaces:
            - masquerade: {}
              model: e1000
              name: default
          features:
            acpi: {}
            apic: {}
            hyperv:
              relaxed: {}
              spinlocks:
                spinlocks: 8191
              vapic: {}
          firmware:
            uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223
          machine:
            type: q35
          resources:
            requests:
              memory: 2Gi
        networks:
        - name: default
          pod: {}
        terminationGracePeriodSeconds: 0
        volumes:
          - name: pvcdisk
            persistentVolumeClaim:
              claimName: disk-windows

Creating a service from a virtual machine

Create a service from a running virtual machine by first creating a Service object to expose the virtual machine.

If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object.

The spec.ipFamilyPolicy field can be set to one of the following values:

  • SingleStack: The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range.

  • PreferDualStack: The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured.

  • RequireDualStack: This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack. The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges.

You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values:

  • [IPv4]

  • [IPv6]

  • [IPv4, IPv6]

  • [IPv6, IPv4]
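For example, the two fields can be combined in a Service manifest. The following sketch (the service name and selector label are illustrative, not part of this procedure) requests dual-stack cluster IP addresses with IPv6 listed first:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-dual-stack-service # illustrative name
spec:
  ipFamilyPolicy: PreferDualStack  # assign both IPv4 and IPv6 where available
  ipFamilies:
    - IPv6                         # order determines which family is primary
    - IPv4
  selector:
    special: key                   # illustrative selector label
  ports:
    - port: 80
      protocol: TCP
```

On a dual-stack cluster, this service receives cluster IP addresses from both address ranges; on a single-stack cluster, PreferDualStack falls back to a single address instead of failing.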

The ClusterIP service type exposes the virtual machine internally, within the cluster. The NodePort or LoadBalancer service types expose the virtual machine externally, outside of the cluster.

This procedure presents an example of how to create, connect to, and expose a Service object of type: ClusterIP as a virtual machine-backed service.

If you do not specify a service type, ClusterIP is used by default.

Procedure

  1. Edit the virtual machine YAML as follows:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-ephemeral
      namespace: example-namespace
    spec:
      running: false
      template:
        metadata:
          labels:
            special: key (1)
        spec:
          domain:
            devices:
              disks:
                - name: containerdisk
                  disk:
                    bus: virtio
                - name: cloudinitdisk
                  disk:
                    bus: virtio
              interfaces:
              - masquerade: {}
                name: default
            resources:
              requests:
                memory: 1024M
          networks:
          - name: default
            pod: {}
          volumes:
            - name: containerdisk
              containerDisk:
                image: kubevirt/fedora-cloud-container-disk-demo
            - name: cloudinitdisk
              cloudInitNoCloud:
                userData: |
                  #!/bin/bash
                  echo "fedora" | passwd fedora --stdin
    1 Add the label special: key in the spec.template.metadata.labels section.

    Labels on a virtual machine are passed through to the pod. The labels on the VirtualMachine configuration, for example special: key, must match the labels in the Service YAML selector attribute, which you create later in this procedure.

  2. Save the virtual machine YAML to apply your changes.

  3. Edit the Service YAML to configure the settings necessary to create and expose the Service object:

    apiVersion: v1
    kind: Service
    metadata:
      name: vmservice (1)
      namespace: example-namespace (2)
    spec:
      ports:
      - port: 27017
        protocol: TCP
        targetPort: 22 (3)
      selector:
        special: key (4)
      type: ClusterIP (5)
    1 Specify the name of the service you are creating and exposing.
    2 Specify a namespace in the metadata section of the Service YAML that corresponds to the namespace you specify in the virtual machine YAML.
    3 Add targetPort: 22, exposing the service on SSH port 22.
    4 In the spec section of the Service YAML, add special: key to the selector attribute, which corresponds to the labels you added in the virtual machine YAML configuration file.
    5 In the spec section of the Service YAML, add type: ClusterIP for a ClusterIP service. To create and expose other types of services externally, outside of the cluster, such as NodePort and LoadBalancer, replace type: ClusterIP with type: NodePort or type: LoadBalancer, as appropriate.
  4. Save the Service YAML to store the service configuration.

  5. Create the ClusterIP service:

    $ oc create -f <service_name>.yaml
  6. Start the virtual machine. If the virtual machine is already running, restart it.

  7. Query the Service object to verify it is available and is configured with type ClusterIP.

    Verification

    • Run the oc get service command, specifying the namespace that you reference in the virtual machine and Service YAML files.

      $ oc get service -n example-namespace

      Example output

      NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
      vmservice   ClusterIP   172.30.3.149   <none>        27017/TCP   2m

      • As shown in the output, vmservice is running.

      • The TYPE displays as ClusterIP, as you specified in the Service YAML.

  8. Establish a connection to the virtual machine that you want to use to back your service. Connect from an object inside the cluster, such as another virtual machine.

    1. Edit the virtual machine YAML as follows:

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        name: vm-connect
        namespace: example-namespace
      spec:
        running: false
        template:
          spec:
            domain:
              devices:
                disks:
                  - name: containerdisk
                    disk:
                      bus: virtio
                  - name: cloudinitdisk
                    disk:
                      bus: virtio
                interfaces:
                - masquerade: {}
                  name: default
              resources:
                requests:
                  memory: 1024M
            networks:
            - name: default
              pod: {}
            volumes:
              - name: containerdisk
                containerDisk:
                  image: kubevirt/fedora-cloud-container-disk-demo
              - name: cloudinitdisk
                cloudInitNoCloud:
                  userData: |
                    #!/bin/bash
                    echo "fedora" | passwd fedora --stdin
    2. Run the oc create command to create a second virtual machine, where <file.yaml> is the name of the virtual machine YAML file:

      $ oc create -f <file.yaml>
    3. Start the virtual machine.

    4. Connect to the virtual machine by running the following virtctl command:

      $ virtctl -n example-namespace console <new-vm-name>

      For service type LoadBalancer, use the vinagre client to connect to your virtual machine by using the public IP address and port. External ports are dynamically allocated when you use service type LoadBalancer.

    5. Run the ssh command to authenticate the connection, where 172.30.3.149 is the ClusterIP of the service and fedora is the user name of the virtual machine:

      $ ssh fedora@172.30.3.149 -p 27017

      Verification

      • You receive the command prompt of the virtual machine backing the service you want to expose. You now have a service backed by a running virtual machine.

Additional resources