Configuring an additional network

As a cluster administrator, you can configure an additional network for your cluster. The following network types are supported: bridge, host-device, VLAN, IPVLAN, MACVLAN, TAP, and OVN-Kubernetes secondary networks.

Approaches to managing an additional network

You can manage the life cycle of an additional network by using one of two approaches. The approaches are mutually exclusive, and you can use only one approach for managing an additional network at a time. With either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure.

For an additional network, IP addresses are provisioned through an IP Address Management (IPAM) CNI plugin that you configure as part of the additional network. The IPAM plugin supports a variety of IP address assignment approaches including DHCP and static assignment.

  • Modify the Cluster Network Operator (CNO) configuration: The CNO automatically creates and manages the NetworkAttachmentDefinition object. In addition to managing the object lifecycle, the CNO ensures that a DHCP server is available for an additional network that uses a DHCP-assigned IP address.

  • Applying a YAML manifest: You can manage the additional network directly by creating a NetworkAttachmentDefinition object. This approach allows for the chaining of CNI plugins.

When deploying OKD nodes with multiple network interfaces on OpenStack with OVN SDN, the DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface:

  $ openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>
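
If you do not know the subnet ID, you can list the subnets that OpenStack knows about first. This is a hedged suggestion; the exact output columns depend on your OpenStack version:

  $ openstack subnet list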

Configuration for an additional network attachment

An additional network is configured by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group.

Do not store any sensitive information or a secret in the NetworkAttachmentDefinition object, because this information is accessible to users with project administration privileges.

The configuration for the API is described in the following table:

Table 1. NetworkAttachmentDefinition API fields

| Field | Type | Description |
| --- | --- | --- |
| metadata.name | string | The name for the additional network. |
| metadata.namespace | string | The namespace that the object is associated with. |
| spec.config | string | The CNI plugin configuration in JSON format. |

Configuration of an additional network through the Cluster Network Operator

The configuration for an additional network attachment is specified as part of the Cluster Network Operator (CNO) configuration.

The following YAML describes the configuration parameters for managing an additional network with the CNO:

Cluster Network Operator configuration

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  # ...
  additionalNetworks: (1)
  - name: <name> (2)
    namespace: <namespace> (3)
    rawCNIConfig: |- (4)
      {
        ...
      }
    type: Raw

(1) An array of one or more additional network configurations.
(2) The name for the additional network attachment that you are creating. The name must be unique within the specified namespace.
(3) The namespace to create the network attachment in. If you do not specify a value, then the default namespace is used.
(4) A CNI plugin configuration in JSON format.

Configuration of an additional network from a YAML manifest

The configuration for an additional network is specified from a YAML configuration file, such as in the following example:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <name> (1)
spec:
  config: |- (2)
    {
      ...
    }

(1) The name for the additional network attachment that you are creating.
(2) A CNI plugin configuration in JSON format.

Configurations for additional network types

The specific configuration fields for additional networks are described in the following sections.

Configuration for a bridge additional network

The following object describes the configuration parameters for the bridge CNI plugin:

Table 2. Bridge CNI plugin JSON configuration object

| Field | Type | Description |
| --- | --- | --- |
| cniVersion | string | The CNI specification version. The 0.3.1 value is required. |
| name | string | The value for the name parameter you provided previously for the CNO configuration. |
| type | string | The name of the CNI plugin to configure: bridge. |
| ipam | object | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| bridge | string | Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0. |
| ipMasq | boolean | Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false. |
| isGateway | boolean | Optional: Set to true to assign an IP address to the bridge. The default value is false. |
| isDefaultGateway | boolean | Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false. If isDefaultGateway is set to true, then isGateway is also set to true automatically. |
| forceAddress | boolean | Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false, if an IPv4 address or an IPv6 address from overlapping subnets is assigned to the virtual bridge, an error occurs. The default value is false. |
| hairpinMode | boolean | Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is false. |
| promiscMode | boolean | Optional: Set to true to enable promiscuous mode on the bridge. The default value is false. |
| vlan | integer | Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. |
| preserveDefaultVlan | boolean | Optional: Indicates whether the default VLAN must be preserved on the veth end connected to the bridge. The default value is true. |
| vlanTrunk | list | Optional: Assign a VLAN trunk tag. The default value is none. |
| mtu | integer | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| enabledad | boolean | Optional: Enables duplicate address detection for the container side veth. The default value is false. |
| macspoofchk | boolean | Optional: Enables MAC spoof check, limiting the traffic originating from the container to the MAC address of the interface. The default value is false. |

The vlan parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface.

To configure the uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command:

  $ bridge vlan add vid VLAN_ID dev DEV

bridge configuration example

The following example configures an additional network named bridge-net:

{
  "cniVersion": "0.3.1",
  "name": "bridge-net",
  "type": "bridge",
  "isGateway": true,
  "vlan": 2,
  "ipam": {
    "type": "dhcp"
  }
}

Configuration for a host device additional network

Specify your network device by setting only one of the following parameters: device, hwaddr, kernelpath, or pciBusID.

The following object describes the configuration parameters for the host-device CNI plugin:

Table 3. Host device CNI plugin JSON configuration object

| Field | Type | Description |
| --- | --- | --- |
| cniVersion | string | The CNI specification version. The 0.3.1 value is required. |
| name | string | The value for the name parameter you provided previously for the CNO configuration. |
| type | string | The name of the CNI plugin to configure: host-device. |
| device | string | Optional: The name of the device, such as eth0. |
| hwaddr | string | Optional: The device hardware MAC address. |
| kernelpath | string | Optional: The Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6. |
| pciBusID | string | Optional: The PCI address of the network device, such as 0000:00:1f.6. |

host-device configuration example

The following example configures an additional network named hostdev-net:

{
  "cniVersion": "0.3.1",
  "name": "hostdev-net",
  "type": "host-device",
  "device": "eth1"
}
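
If the device name is not stable on your hosts, you can select the device with one of the other selectors from Table 3 instead. The following hedged variant uses pciBusID; the PCI address shown is an assumption taken from the example value in the table:

{
  "cniVersion": "0.3.1",
  "name": "hostdev-net",
  "type": "host-device",
  "pciBusID": "0000:00:1f.6"
}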

Configuration for a VLAN additional network

The following object describes the configuration parameters for the VLAN CNI plugin:

Table 4. VLAN CNI plugin JSON configuration object

| Field | Type | Description |
| --- | --- | --- |
| cniVersion | string | The CNI specification version. The 0.3.1 value is required. |
| name | string | The value for the name parameter you provided previously for the CNO configuration. |
| type | string | The name of the CNI plugin to configure: vlan. |
| master | string | The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. |
| vlanId | integer | Set the ID of the VLAN. |
| ipam | object | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| mtu | integer | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| dns | object | Optional: DNS information to return, for example, a priority-ordered list of DNS nameservers. |
| linkInContainer | boolean | Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. |

vlan configuration example

The following example configures an additional network named vlan-net:

{
  "name": "vlan-net",
  "cniVersion": "0.3.1",
  "type": "vlan",
  "master": "eth0",
  "mtu": 1500,
  "vlanId": 5,
  "linkInContainer": false,
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.1.0/24"
  },
  "dns": {
    "nameservers": [ "10.1.1.1", "8.8.8.8" ]
  }
}

Configuration for an IPVLAN additional network

The following object describes the configuration parameters for the IPVLAN CNI plugin:

Table 5. IPVLAN CNI plugin JSON configuration object

| Field | Type | Description |
| --- | --- | --- |
| cniVersion | string | The CNI specification version. The 0.3.1 value is required. |
| name | string | The value for the name parameter you provided previously for the CNO configuration. |
| type | string | The name of the CNI plugin to configure: ipvlan. |
| ipam | object | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained. |
| mode | string | Optional: The operating mode for the virtual network. The value must be l2, l3, or l3s. The default value is l2. |
| master | string | Optional: The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. |
| mtu | integer | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| linkInContainer | boolean | Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. |

  • The ipvlan object does not allow virtual interfaces to communicate with the master interface. Therefore, the container cannot reach the host by using the ipvlan interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol (PTP).

  • A single master interface cannot simultaneously be configured to use both macvlan and ipvlan.

  • For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin can be chained with an earlier plugin that handles this logic. If the master is omitted, then the previous result must contain a single interface name for the ipvlan plugin to enslave. If ipam is omitted, then the previous result is used to configure the ipvlan interface.

ipvlan configuration example

The following example configures an additional network named ipvlan-net:

{
  "cniVersion": "0.3.1",
  "name": "ipvlan-net",
  "type": "ipvlan",
  "master": "eth1",
  "linkInContainer": false,
  "mode": "l3",
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "192.168.10.10/24"
      }
    ]
  }
}

Configuration for a MACVLAN additional network

The following object describes the configuration parameters for the macvlan CNI plugin:

Table 6. MACVLAN CNI plugin JSON configuration object

| Field | Type | Description |
| --- | --- | --- |
| cniVersion | string | The CNI specification version. The 0.3.1 value is required. |
| name | string | The value for the name parameter you provided previously for the CNO configuration. |
| type | string | The name of the CNI plugin to configure: macvlan. |
| ipam | object | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| mode | string | Optional: Configures traffic visibility on the virtual network. Must be one of bridge, passthru, private, or vepa. If a value is not provided, the default value is bridge. |
| master | string | Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used. |
| mtu | integer | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| linkInContainer | boolean | Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. |

If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts.

macvlan configuration example

The following example configures an additional network named macvlan-net:

{
  "cniVersion": "0.3.1",
  "name": "macvlan-net",
  "type": "macvlan",
  "master": "eth1",
  "linkInContainer": false,
  "mode": "bridge",
  "ipam": {
    "type": "dhcp"
  }
}

Configuration for a TAP additional network

The following object describes the configuration parameters for the TAP CNI plugin:

Table 7. TAP CNI plugin JSON configuration object

| Field | Type | Description |
| --- | --- | --- |
| cniVersion | string | The CNI specification version. The 0.3.1 value is required. |
| name | string | The value for the name parameter you provided previously for the CNO configuration. |
| type | string | The name of the CNI plugin to configure: tap. |
| mac | string | Optional: Request the specified MAC address for the interface. |
| mtu | integer | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| selinuxcontext | string | Optional: The SELinux context to associate with the tap device. The value system_u:system_r:container_t:s0 is required for OKD. |
| multiQueue | boolean | Optional: Set to true to enable multi-queue. |
| owner | integer | Optional: The user owning the tap device. |
| group | integer | Optional: The group owning the tap device. |
| bridge | string | Optional: Set the tap device as a port of an already existing bridge. |

tap configuration example

The following example configures an additional network named mynet:

{
  "name": "mynet",
  "cniVersion": "0.3.1",
  "type": "tap",
  "mac": "00:11:22:33:44:55",
  "mtu": 1500,
  "selinuxcontext": "system_u:system_r:container_t:s0",
  "multiQueue": true,
  "owner": 0,
  "group": 0,
  "bridge": "br1"
}

Setting SELinux boolean for the TAP CNI plugin

To create the tap device with the container_t SELinux context, enable the container_use_devices boolean on the host by using the Machine Config Operator (MCO).

Prerequisites

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a new YAML file, such as setsebool-container-use-devices.yaml, with the following details:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-setsebool
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
          - enabled: true
            name: setsebool.service
            contents: |
              [Unit]
              Description=Set SELinux boolean for the TAP CNI plugin
              Before=kubelet.service
              [Service]
              Type=oneshot
              ExecStart=/usr/sbin/setsebool container_use_devices=on
              RemainAfterExit=true
              [Install]
              WantedBy=multi-user.target graphical.target
  2. Create the new MachineConfig object by running the following command:

    $ oc apply -f setsebool-container-use-devices.yaml

    Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. This update can take some time to be applied.

  3. Verify the change is applied by running the following command:

    $ oc get machineconfigpools

    Expected output

    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-e5e0c8e8be9194e7c5a882e047379cfa   True      False      False      3              3                   3                     0                      7d2h
    worker   rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2   True      False      False      3              3                   3                     0                      7d

    All nodes should be in the updated and ready state.

Configuration for an OVN-Kubernetes additional network

The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition custom resource (CR).

Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent in the nodes processes the associated network-attachment-definition CR.

You can configure an OVN-Kubernetes additional network in either layer 2 or localnet topologies.

  • A layer 2 topology supports east-west cluster traffic, but does not allow access to the underlying physical network.

  • A localnet topology allows connections to the physical network, but requires additional configuration of the underlying Open vSwitch (OVS) bridge on cluster nodes.

The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks.

Network names must be unique. For example, creating multiple NetworkAttachmentDefinition CRs with different configurations that reference the same network is unsupported.

Supported platforms for OVN-Kubernetes additional network

You can use an OVN-Kubernetes additional network with the following supported platforms:

  • Bare metal

  • IBM Power®

  • IBM Z®

  • IBM® LinuxONE

  • VMware vSphere

  • OpenStack

OVN-Kubernetes network plugin JSON configuration table

The following table describes the configuration parameters for the OVN-Kubernetes CNI network plugin:

Table 8. OVN-Kubernetes network plugin JSON configuration table

| Field | Type | Description |
| --- | --- | --- |
| cniVersion | string | The CNI specification version. The required value is 0.3.1. |
| name | string | The name of the network. These networks are not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition objects that exist in two different namespaces. This ensures that pods using the NetworkAttachmentDefinition in their own namespaces can communicate over the same secondary network. However, those two NetworkAttachmentDefinition objects must also share the same network-specific parameters, such as topology, subnets, mtu, and excludeSubnets. See the example after this table. |
| type | string | The name of the CNI plugin to configure. This value must be set to ovn-k8s-cni-overlay. |
| topology | string | The topological configuration for the network. Must be one of layer2 or localnet. |
| subnets | string | The subnet to use for the network across the cluster. When specifying layer2 for the topology, only include the CIDR for the node. For example, 10.100.200.0/24. For "topology":"layer2" deployments, IPv6 (2001:DBB::/64) and dual-stack (192.168.100.0/24,2001:DBB::/64) subnets are supported. When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IP addresses for the pods. Port security only prevents MAC spoofing. |
| mtu | string | The maximum transmission unit (MTU). The default value is 1300. |
| netAttachDefName | string | The metadata namespace and name of the network attachment definition object where this configuration is included. For example, if this configuration is defined in a NetworkAttachmentDefinition in namespace ns1 named l2-network, this should be set to ns1/l2-network. |
| excludeSubnets | string | A comma-separated list of CIDRs and IP addresses. IP addresses are removed from the assignable IP address pool and are never passed to the pods. |
| vlanID | integer | If topology is set to localnet, the specified VLAN tag is assigned to traffic from this additional network. The default is to not assign a VLAN tag. |
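
To illustrate the name field, the following hedged sketch shows two NetworkAttachmentDefinition objects in different namespaces that reference the same l2-network. The namespace names ns1 and ns2 are assumptions for illustration; note that the network-specific parameters inside spec.config match in both objects:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: ns1
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.200.0/24",
      "mtu": 1300,
      "netAttachDefName": "ns1/l2-network"
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: ns2
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.200.0/24",
      "mtu": 1300,
      "netAttachDefName": "ns2/l2-network"
    }

Pods in ns1 and ns2 that attach to their own namespace's definition can then communicate over the same secondary network.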

Compatibility with multi-network policy

The multi-network policy API, which is provided by the MultiNetworkPolicy custom resource definition (CRD) in the k8s.cni.cncf.io API group, is compatible with an OVN-Kubernetes secondary network. When defining a network policy, the network policy rules that can be used depend on whether the OVN-Kubernetes secondary network defines the subnets field. Refer to the following table for details:

Table 9. Supported multi-network policy selectors based on subnets CNI configuration

| subnets field specified | Allowed multi-network policy selectors |
| --- | --- |
| Yes | podSelector and namespaceSelector; ipBlock |
| No | ipBlock |

For example, the following multi-network policy is valid only if the subnets field is defined in the additional network CNI configuration for the additional network named blue2:

Example multi-network policy that uses a pod selector

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-same-namespace
  annotations:
    k8s.v1.cni.cncf.io/policy-for: blue2
spec:
  podSelector:
  ingress:
  - from:
    - podSelector: {}

The following example uses the ipBlock network policy selector, which is always valid for an OVN-Kubernetes additional network:

Example multi-network policy that uses an IP block selector

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: ingress-ipblock
  annotations:
    k8s.v1.cni.cncf.io/policy-for: default/flatl2net
spec:
  podSelector:
    matchLabels:
      name: access-control
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.200.0.0/30

Configuration for a layer 2 switched topology

The switched (layer 2) topology networks interconnect the workloads through a cluster-wide logical switch. This configuration can be used for IPv6 and dual-stack deployments.

Layer 2 switched topology networks only allow for the transfer of data packets between pods within a cluster.

The following JSON example configures a switched secondary network:

{
  "cniVersion": "0.3.1",
  "name": "l2-network",
  "type": "ovn-k8s-cni-overlay",
  "topology": "layer2",
  "subnets": "10.100.200.0/24",
  "mtu": 1300,
  "netAttachDefName": "ns1/l2-network",
  "excludeSubnets": "10.100.200.0/29"
}

Configuration for a localnet topology

The switched (localnet) topology interconnects the workloads through a cluster-wide logical switch to a physical network.

Configuration for an OVN-Kubernetes additional network mapping

You must map an additional network to the OVN bridge to use it as an OVN-Kubernetes additional network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, to a bridge created with Open vSwitch (OVS).

You can create a NodeNetworkConfigurationPolicy object, part of the nmstate.io/v1 API group, to declaratively create the mapping. This API is provided by the NMState Operator. By using this API, you can apply the bridge mapping to nodes that match your specified nodeSelector expression, such as node-role.kubernetes.io/worker: ''.

When attaching an additional network, you can either use the existing br-ex bridge or create a new bridge. Which approach to use depends on your specific network infrastructure.

  • If your nodes include only a single network interface, you must use the existing bridge. This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the br-ex bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network will stop working correctly.

  • If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your additional network. This approach provides for traffic isolation from your primary cluster network.

The localnet1 network is mapped to the br-ex bridge in the following example:

Example mapping for sharing a bridge

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: mapping (1)
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: '' (2)
  desiredState:
    ovn:
      bridge-mappings:
      - localnet: localnet1 (3)
        bridge: br-ex (4)
        state: present (5)

(1) The name for the configuration object.
(2) A node selector that specifies the nodes to apply the node network configuration policy to.
(3) The name for the additional network from which traffic is forwarded to the OVS bridge. This additional network must match the name of the spec.config.name field of the NetworkAttachmentDefinition object that defines the OVN-Kubernetes additional network.
(4) The name of the OVS bridge on the node. This value is required only if you specify state: present.
(5) The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present.
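
After applying the policy, you can check that NMState reports it as applied. This is a hedged verification step that assumes the NMState Operator is installed and registers the nncp short name for NodeNetworkConfigurationPolicy:

$ oc get nncp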

In the following example, the localnet2 network interface is attached to the ovs-br1 bridge. Through this attachment, the network interface is available to the OVN-Kubernetes network plugin as an additional network.

Example mapping for nodes with multiple interfaces

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-br1-multiple-networks (1)
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: '' (2)
  desiredState:
    interfaces:
    - name: ovs-br1 (3)
      description: |-
        A dedicated OVS bridge with eth1 as a port
        allowing all VLANs and untagged traffic
      type: ovs-bridge
      state: up
      bridge:
        options:
          stp: true
        port:
        - name: eth1 (4)
    ovn:
      bridge-mappings:
      - localnet: localnet2 (5)
        bridge: ovs-br1 (6)
        state: present (7)

(1) The name for the configuration object.
(2) A node selector that specifies the nodes to apply the node network configuration policy to.
(3) A new OVS bridge, separate from the default bridge used by OVN-Kubernetes for all cluster traffic.
(4) A network device on the host system to associate with this new OVS bridge.
(5) The name for the additional network from which traffic is forwarded to the OVS bridge. This additional network must match the name of the spec.config.name field of the NetworkAttachmentDefinition object that defines the OVN-Kubernetes additional network.
(6) The name of the OVS bridge on the node. This value is required only if you specify state: present.
(7) The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present.

This declarative approach is recommended because the NMState Operator applies additional network configuration to all nodes specified by the node selector automatically and transparently.

The following JSON example configures a localnet secondary network:

{
  "cniVersion": "0.3.1",
  "name": "ns1-localnet-network",
  "type": "ovn-k8s-cni-overlay",
  "topology": "localnet",
  "subnets": "202.10.130.112/28",
  "vlanID": 33,
  "mtu": 1500,
  "netAttachDefName": "ns1/localnet-network",
  "excludeSubnets": "10.100.200.0/29"
}

Configuring pods for additional networks

You must specify the secondary network attachments through the k8s.v1.cni.cncf.io/networks annotation.

The following example provisions a pod with a secondary attachment that uses the l2-network attachment configuration presented in this guide.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: l2-network
  name: tinypod
  namespace: ns1
spec:
  containers:
  - args:
    - pause
    image: k8s.gcr.io/e2e-test-images/agnhost:2.36
    imagePullPolicy: IfNotPresent
    name: agnhost-container

Configuring pods with a static IP address

The following example provisions a pod with a static IP address.

  • You can only specify the IP address for a pod’s secondary network attachment for layer 2 attachments.

  • Specifying a static IP address for the pod is only possible when the attachment configuration does not feature subnets.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "l2-network", (1)
        "mac": "02:03:04:05:06:07", (2)
        "interface": "myiface1", (3)
        "ips": [
          "192.0.2.20/24"
        ] (4)
      }
    ]'
  name: tinypod
  namespace: ns1
spec:
  containers:
  - args:
    - pause
    image: k8s.gcr.io/e2e-test-images/agnhost:2.36
    imagePullPolicy: IfNotPresent
    name: agnhost-container

(1) The name of the network. This value must be unique across all NetworkAttachmentDefinitions.
(2) The MAC address to be assigned for the interface.
(3) The name of the network interface to be created for the pod.
(4) The IP addresses to be assigned to the network interface.

Configuration of IP address assignment for an additional network

The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins.

You can use the following IP address assignment types:

  • Static assignment.

  • Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network.

  • Dynamic assignment through the Whereabouts IPAM CNI plugin.

Static IP address assignment configuration

The following table describes the configuration for static IP address assignment:

Table 10. ipam static configuration object

| Field | Type | Description |
| --- | --- | --- |
| type | string | The IPAM address type. The value static is required. |
| addresses | array | An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. |
| routes | array | An array of objects specifying routes to configure inside the pod. |
| dns | array | Optional: An array of objects specifying the DNS configuration. |

The addresses array requires objects with the following fields:

Table 11. ipam.addresses[] array

| Field | Type | Description |
| --- | --- | --- |
| address | string | An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24, then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0. |
| gateway | string | The default gateway to route egress network traffic to. |

Table 12. ipam.routes[] array

| Field | Type | Description |
| --- | --- | --- |
| dst | string | The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. |
| gw | string | The gateway where network traffic is routed. |

Table 13. ipam.dns object

| Field | Type | Description |
| --- | --- | --- |
| nameservers | array | An array of one or more IP addresses to send DNS queries to. |
| domain | string | The default domain to append to a hostname. For example, if the domain is set to example.com, a DNS lookup query for example-host is rewritten as example-host.example.com. |
| search | array | An array of domain names to append to an unqualified hostname, such as example-host, during a DNS lookup query. |

Static IP address assignment configuration example

{
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "191.168.1.7/24"
      }
    ]
  }
}
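
A static configuration can also carry the routes and dns objects described in Tables 12 and 13. The following hedged sketch combines them; the specific addresses and domain are illustrative assumptions:

{
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "192.168.1.7/24",
        "gateway": "192.168.1.1"
      }
    ],
    "routes": [
      {
        "dst": "0.0.0.0/0",
        "gw": "192.168.1.1"
      }
    ],
    "dns": {
      "nameservers": ["192.168.1.1"],
      "domain": "example.com",
      "search": ["example.com"]
    }
  }
}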

Dynamic IP address (DHCP) assignment configuration

The following JSON describes the configuration for dynamic IP address assignment with DHCP.

Renewal of DHCP leases

A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster.

To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example:

Example shim network attachment definition

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: dhcp-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "dhcp-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "ipam": {
          "type": "dhcp"
        }
      }
# ...

Table 14. ipam DHCP configuration object

| Field | Type | Description |
| --- | --- | --- |
| type | string | The IPAM address type. The value dhcp is required. |

Dynamic IP address (DHCP) assignment configuration example

{
  "ipam": {
    "type": "dhcp"
  }
}

Dynamic IP address assignment configuration with Whereabouts

The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server.

The following table describes the configuration for dynamic IP address assignment with Whereabouts:

Table 15. ipam whereabouts configuration object

| Field | Type | Description |
| --- | --- | --- |
| type | string | The IPAM address type. The value whereabouts is required. |
| range | string | An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. |
| exclude | array | Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. |

Dynamic IP address assignment configuration example that uses Whereabouts

{
  "ipam": {
    "type": "whereabouts",
    "range": "192.0.2.192/27",
    "exclude": [
      "192.0.2.192/30",
      "192.0.2.196/32"
    ]
  }
}

Creating a Whereabouts reconciler daemon set

The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster by using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down.

You can also use a NetworkAttachmentDefinition custom resource for dynamic IP address assignment.
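
For example, a NetworkAttachmentDefinition that assigns addresses through Whereabouts might look like the following hedged sketch; the names whereabouts-net and eth1 are assumptions for illustration:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: whereabouts-net
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "whereabouts-net",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "whereabouts",
        "range": "192.0.2.192/27"
      }
    }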

The Whereabouts reconciler daemon set is automatically created when you configure an additional network through the Cluster Network Operator. It is not automatically created when you configure an additional network from a YAML manifest.

To trigger the deployment of the Whereabouts reconciler daemon set, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource file.

Use the following procedure to deploy the Whereabouts reconciler daemon set.

Procedure

  1. Edit the Network.operator.openshift.io custom resource (CR) by running the following command:

    $ oc edit network.operator.openshift.io cluster
  2. Modify the additionalNetworks parameter in the CR to add the whereabouts-shim network attachment definition. For example:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks:
      - name: whereabouts-shim
        namespace: default
        rawCNIConfig: |-
          {
            "name": "whereabouts-shim",
            "cniVersion": "0.3.1",
            "type": "bridge",
            "ipam": {
              "type": "whereabouts"
            }
          }
        type: Raw
  3. Save the file and exit the text editor.

  4. Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command:

    $ oc get all -n openshift-multus | grep whereabouts-reconciler

    Example output

    pod/whereabouts-reconciler-jnp6g   1/1   Running   0   6s
    pod/whereabouts-reconciler-k76gg   1/1   Running   0   6s
    pod/whereabouts-reconciler-k86t9   1/1   Running   0   6s
    pod/whereabouts-reconciler-p4sxw   1/1   Running   0   6s
    pod/whereabouts-reconciler-rvfdv   1/1   Running   0   6s
    pod/whereabouts-reconciler-svzw9   1/1   Running   0   6s
    daemonset.apps/whereabouts-reconciler   6   6   6   6   6   kubernetes.io/os=linux   6s

Creating a configuration for assignment of dual-stack IP addresses dynamically

Dual-stack IP address assignment can be configured with the ipRanges parameter for:

  • IPv4 addresses

  • IPv6 addresses

  • multiple IP address assignment

Procedure

  1. Set type to whereabouts.

  2. Use ipRanges to allocate IP addresses as shown in the following example:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks:
      - name: whereabouts-shim
        namespace: default
        type: Raw
        rawCNIConfig: |-
          {
            "name": "whereabouts-dual-stack",
            "cniVersion": "0.3.1",
            "type": "bridge",
            "ipam": {
              "type": "whereabouts",
              "ipRanges": [
                {"range": "192.168.10.0/24"},
                {"range": "2001:db8::/64"}
              ]
            }
          }
  3. Attach the network to a pod. For more information, see “Adding a pod to an additional network”.

  4. Verify that all IP addresses are assigned.

  5. Run the following command to inspect the assigned IP addresses:

    $ oc exec -it mypod -- ip a

Creating an additional network attachment with the Cluster Network Operator

The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition object automatically.

Do not edit the NetworkAttachmentDefinition objects that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Optional: Create the namespace for the additional networks:

    $ oc create namespace <namespace_name>
  2. To edit the CNO configuration, enter the following command:

    $ oc edit networks.operator.openshift.io cluster
  3. Modify the CR by adding the configuration for the additional network that you are creating, as in the following example CR.

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      # ...
      additionalNetworks:
      - name: tertiary-net
        namespace: namespace2
        type: Raw
        rawCNIConfig: |-
          {
            "cniVersion": "0.3.1",
            "name": "tertiary-net",
            "type": "ipvlan",
            "master": "eth1",
            "mode": "l2",
            "ipam": {
              "type": "static",
              "addresses": [
                {
                  "address": "192.168.1.23/24"
                }
              ]
            }
          }
  4. Save your changes and quit the text editor to commit your changes.

Verification

  • Confirm that the CNO created the NetworkAttachmentDefinition object by running the following command. There might be a delay before the CNO creates the object.

    $ oc get network-attachment-definitions -n <namespace>

    where:

    <namespace>

    Specifies the namespace for the network attachment that you added to the CNO configuration.

    Example output

    NAME             AGE
    test-network-1   14m

Creating an additional network attachment by applying a YAML manifest

Prerequisites

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a YAML file with your additional network configuration, such as in the following example:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: next-net
    spec:
      config: |-
        {
          "cniVersion": "0.3.1",
          "name": "work-network",
          "type": "host-device",
          "device": "eth1",
          "ipam": {
            "type": "dhcp"
          }
        }
  2. To create the additional network, enter the following command:

    $ oc apply -f <file>.yaml

    where:

    <file>

    Specifies the name of the file containing the YAML manifest.

About configuring the master interface in the container network namespace

In OKD 4.14 and later, you can create a MAC-VLAN, IP-VLAN, or VLAN subinterface that is based on a master interface in a container namespace. This feature is generally available.

This feature allows you to create the master interfaces as part of the pod network configuration in a separate network attachment definition. You can then base the VLAN, MACVLAN, or IPVLAN on this interface without requiring knowledge of the network configuration of the node.

To ensure the use of a container namespace master interface, specify the linkInContainer parameter and set its value to true in the VLAN, MACVLAN, or IPVLAN plugin configuration, depending on the particular type of additional network. A minimal sketch follows.
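
The following hedged snippet shows where the parameter sits in a VLAN plugin configuration. The name vlan-in-container is an assumption, and the master name ext0 must match an interface created by an earlier network attachment in the pod:

{
  "cniVersion": "0.3.1",
  "name": "vlan-in-container",
  "type": "vlan",
  "master": "ext0",
  "vlanId": 100,
  "linkInContainer": true,
  "ipam": { "type": "whereabouts", "ipRanges": [{ "range": "1.1.1.0/24" }] }
}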

Creating multiple VLANs on SR-IOV VFs

An example use case for utilizing this feature is to create multiple VLANs based on SR-IOV VFs. To do so, begin by creating an SR-IOV network and then define the network attachments for the VLAN interfaces.

The following example shows how to configure the setup illustrated in the following diagram.

Figure 1. Creating VLANs

Prerequisites

  • You installed the OpenShift CLI (oc).

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the SR-IOV Network Operator.

Procedure

  1. Create a dedicated container namespace where you want to deploy your pod by using the following command:

    $ oc new-project test-namespace
  2. Create an SR-IOV node policy:

    1. Create an SriovNetworkNodePolicy object, and then save the YAML in the sriov-node-network-policy.yaml file:

      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetworkNodePolicy
      metadata:
        name: sriovnic
        namespace: openshift-sriov-network-operator
      spec:
        deviceType: netdevice
        isRdma: false
        needVhostNet: true
        nicSelector:
          vendor: "15b3" (1)
          deviceID: "101b" (2)
          rootDevices: ["00:05.0"]
        numVfs: 10
        priority: 99
        resourceName: sriovnic
        nodeSelector:
          feature.node.kubernetes.io/network-sriov.capable: "true"

      The SR-IOV network node policy configuration example, with the setting deviceType: netdevice, is tailored specifically for Mellanox Network Interface Cards (NICs).

      (1) The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC.
      (2) The device hexadecimal code of the SR-IOV network device.
    2. Apply the YAML by running the following command:

      $ oc apply -f sriov-node-network-policy.yaml

      Applying this might take some time due to the node requiring a reboot.
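
      While you wait, you can watch the SR-IOV Network Operator report the node sync status. This is a hedged suggestion that assumes the default openshift-sriov-network-operator namespace:

      $ oc get sriovnetworknodestates -n openshift-sriov-network-operator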

  3. Create an SR-IOV network:

    1. Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml:

      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetwork
      metadata:
        name: sriov-network
        namespace: openshift-sriov-network-operator
      spec:
        networkNamespace: test-namespace
        resourceName: sriovnic
        spoofChk: "off"
        trust: "on"
    2. Apply the YAML by running the following command:

      $ oc apply -f sriov-network-attachment.yaml
  4. Create the VLAN additional network:

    1. Using the following YAML example, create a file named vlan100-additional-network-configuration.yaml:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: vlan-100
        namespace: test-namespace
      spec:
        config: |
          {
            "cniVersion": "0.4.0",
            "name": "vlan-100",
            "plugins": [
              {
                "type": "vlan",
                "master": "ext0", (1)
                "mtu": 1500,
                "vlanId": 100,
                "linkInContainer": true, (2)
                "ipam": {"type": "whereabouts", "ipRanges": [{"range": "1.1.1.0/24"}]}
              }
            ]
          }

      (1) The VLAN configuration needs to specify the master name. This can be configured in the pod networks annotation.
      (2) The linkInContainer parameter must be specified.
    2. Apply the YAML file by running the following command:

      $ oc apply -f vlan100-additional-network-configuration.yaml
  5. Create a pod definition by using the networks specified earlier:

    1. Using the following YAML example, create a file named pod-a.yaml file:

      The following manifest includes two resources:

      • Namespace with security labels

      • Pod definition with appropriate network annotation

      apiVersion: v1
      kind: Namespace
      metadata:
        name: test-namespace
        labels:
          pod-security.kubernetes.io/enforce: privileged
          pod-security.kubernetes.io/audit: privileged
          pod-security.kubernetes.io/warn: privileged
          security.openshift.io/scc.podSecurityLabelSync: "false"
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: nginx-pod
        namespace: test-namespace
        annotations:
          k8s.v1.cni.cncf.io/networks: '[
            {
              "name": "sriov-network",
              "namespace": "test-namespace",
              "interface": "ext0" (1)
            },
            {
              "name": "vlan-100",
              "namespace": "test-namespace",
              "interface": "ext0.100"
            }
          ]'
      spec:
        securityContext:
          runAsNonRoot: true
        containers:
        - name: nginx-container
          image: nginxinc/nginx-unprivileged:latest
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
            seccompProfile:
              type: "RuntimeDefault"
          ports:
          - containerPort: 80

      (1) The name to be used as the master for the VLAN interface.
    2. Apply the YAML file by running the following command:

      $ oc apply -f pod-a.yaml
  6. Get detailed information about the nginx-pod within the test-namespace by running the following command:

    $ oc describe pods nginx-pod -n test-namespace

    Example output

    Name:         nginx-pod
    Namespace:    test-namespace
    Priority:     0
    Node:         worker-1/10.46.186.105
    Start Time:   Mon, 14 Aug 2023 16:23:13 -0400
    Labels:       <none>
    Annotations:  k8s.ovn.org/pod-networks:
                    {"default":{"ip_addresses":["10.131.0.26/23"],"mac_address":"0a:58:0a:83:00:1a","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0...
                  k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "ovn-kubernetes",
                        "interface": "eth0",
                        "ips": [
                            "10.131.0.26"
                        ],
                        "mac": "0a:58:0a:83:00:1a",
                        "default": true,
                        "dns": {}
                    },{
                        "name": "test-namespace/sriov-network",
                        "interface": "ext0",
                        "mac": "6e:a7:5e:3f:49:1b",
                        "dns": {},
                        "device-info": {
                            "type": "pci",
                            "version": "1.0.0",
                            "pci": {
                                "pci-address": "0000:d8:00.2"
                            }
                        }
                    },{
                        "name": "test-namespace/vlan-100",
                        "interface": "ext0.100",
                        "ips": [
                            "1.1.1.1"
                        ],
                        "mac": "6e:a7:5e:3f:49:1b",
                        "dns": {}
                    }]
                  k8s.v1.cni.cncf.io/networks:
                    [ { "name": "sriov-network", "namespace": "test-namespace", "interface": "ext0" }, { "name": "vlan-100", "namespace": "test-namespace", "i...
                  openshift.io/scc: privileged
    Status:       Running
    IP:           10.131.0.26
    IPs:
      IP:  10.131.0.26

Creating a subinterface based on a bridge master interface in a container namespace

The procedure for creating a subinterface can be applied to other types of interfaces. Follow this procedure to create a subinterface that is based on a bridge master interface in a container namespace.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You are logged in to the OKD cluster as a user with cluster-admin privileges.

Procedure

  1. Create a dedicated container namespace where you want to deploy your pod by running the following command:

    $ oc new-project test-namespace
  2. Using the following YAML example, create a bridge NetworkAttachmentDefinition custom resource (CR) file named bridge-nad.yaml:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: bridge-network
    spec:
      config: '{
        "cniVersion": "0.4.0",
        "name": "bridge-network",
        "type": "bridge",
        "bridge": "br-001",
        "isGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": {
          "type": "host-local",
          "subnet": "10.0.0.0/24",
          "routes": [{"dst": "0.0.0.0/0"}]
        }
      }'
  3. Run the following command to apply the NetworkAttachmentDefinition CR to your OKD cluster:

    $ oc apply -f bridge-nad.yaml
  4. Verify that the NetworkAttachmentDefinition CR has been created successfully by running the following command:

    $ oc get network-attachment-definitions

    Example output

    NAME             AGE
    bridge-network   15s
  5. Using the following YAML example, create a file named ipvlan-additional-network-configuration.yaml for the IPVLAN additional network configuration:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: ipvlan-net
      namespace: test-namespace
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "ipvlan-net",
        "type": "ipvlan",
        "master": "ext0", (1)
        "mode": "l3",
        "linkInContainer": true, (2)
        "ipam": {"type": "whereabouts", "ipRanges": [{"range": "10.0.0.0/24"}]}
      }'

    (1) Specifies the Ethernet interface to associate with the network attachment. This is subsequently configured in the pod networks annotation.
    (2) Specifies that the master interface is in the container network namespace.
  6. Apply the YAML file by running the following command:

    $ oc apply -f ipvlan-additional-network-configuration.yaml
  7. Verify that the NetworkAttachmentDefinition CR has been created successfully by running the following command:

    $ oc get network-attachment-definitions

    Example output

    NAME             AGE
    bridge-network   87s
    ipvlan-net       9s
  8. Using the following YAML example, create a file named pod-a.yaml for the pod definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-a
      namespace: test-namespace
      annotations:
        k8s.v1.cni.cncf.io/networks: '[
          {
            "name": "bridge-network",
            "interface": "ext0" (1)
          },
          {
            "name": "ipvlan-net",
            "interface": "ext1"
          }
        ]'
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: test-pod
        image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: [ALL]

    (1) Specifies the name to be used as the master for the IPVLAN interface.
  9. Apply the YAML file by running the following command:

    $ oc apply -f pod-a.yaml
  10. Verify that the pod is running by using the following command:

    $ oc get pod -n test-namespace

    Example output

    NAME    READY   STATUS    RESTARTS   AGE
    pod-a   1/1     Running   0          2m36s
  11. Show network interface information about the pod-a resource within the test-namespace by running the following command:

    $ oc exec -n test-namespace pod-a -- ip a

    Example output

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    3: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
        link/ether 0a:58:0a:d9:00:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 10.217.0.93/23 brd 10.217.1.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::488b:91ff:fe84:a94b/64 scope link
           valid_lft forever preferred_lft forever
    4: ext0@if107: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 10.0.0.2/24 brd 10.0.0.255 scope global ext0
           valid_lft forever preferred_lft forever
        inet6 fe80::bcda:bdff:fe7e:f437/64 scope link
           valid_lft forever preferred_lft forever
    5: ext1@ext0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
        link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff
        inet 10.0.0.1/24 brd 10.0.0.255 scope global ext1
           valid_lft forever preferred_lft forever
        inet6 fe80::beda:bd00:17e:f437/64 scope link
           valid_lft forever preferred_lft forever

    This output shows that the ext1 network interface is associated with the ext0 master interface.