Primary interface configuration

You can choose different ways to consume the host's primary interface with VPP, usually trading off performance against configuration simplicity. The main supported configurations are:

  • virtio: the interface is consumed with a native VPP driver. Performance is good and setup is simple, but only virtio interfaces are supported.
  • avf: a virtual function is created and consumed with a native VPP driver. Performance is good and setup is simple, but only Intel AVF interfaces are supported.
  • af_packet: the interface stays in Linux, which passes packets to VPP. Performance is low, but it works out of the box with any interface.
  • af_xdp: packets are passed to VPP via eBPF. This requires a kernel >= 5.4, but works out of the box with good performance.
  • dpdk: the interface is removed from Linux and consumed with the DPDK library. Performance and support are good, but setup can be complex.
  • other native VPP drivers: these bring better performance than dpdk, but require complex manual setup.

General mechanics

The calico-vpp-config ConfigMap section of the manifest file contains the key CALICOVPP_INTERFACES, a dictionary of interface-specific parameters for Calico VPP:

```yaml
# Configures parameters for calicovpp agent and vpp manager
CALICOVPP_INTERFACES: |-
  {
    "maxPodIfSpec": {
      "rx": 10, "tx": 10, "rxqsz": 1024, "txqsz": 1024
    },
    "defaultPodIfSpec": {
      "rx": 1, "tx": 1, "isl3": true
    },
    "vppHostTapSpec": {
      "rx": 1, "tx": 1, "rxqsz": 1024, "txqsz": 1024, "isl3": false
    },
    "uplinkInterfaces": [
      {
        "interfaceName": "eth1",
        "vppDriver": "af_packet"
      }
    ]
  }
```

The uplinkInterfaces field contains a list of interfaces and their configuration; the first element is the primary/main interface, and the remaining elements (if any) are secondary host interfaces. How the primary interface gets configured is controlled by the vppDriver field of uplinkInterfaces[0]. Leaving vppDriver empty (or unspecified) tries all drivers supported in your setup (except dpdk, which must be specified explicitly), starting with the most performant. Note that you still need to allocate hugepages for some drivers, for example virtio, to work.
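As an illustration, a CALICOVPP_INTERFACES value declaring one secondary host interface alongside the primary could look like this sketch (eth1 and eth2 are placeholder interface names):

```yaml
CALICOVPP_INTERFACES: |-
  {
    "uplinkInterfaces": [
      { "interfaceName": "eth1", "vppDriver": "af_packet" },
      { "interfaceName": "eth2", "vppDriver": "af_packet" }
    ]
  }
```

Here uplinkInterfaces[0] (eth1) is the primary interface, and every following entry (eth2) is a secondary host interface.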

Note: specifying the driver via the CALICOVPP_NATIVE_DRIVER environment variable is still supported. Refer to the Legacy options sub-section of Getting Started.

Using the native Virtio driver

You can use this driver if your primary interface is virtio (realpath /sys/bus/pci/devices/<PCI_ID>/driver gives .../virtio-net):

  • Ensure hugepages are available on your system (sysctl -w vm.nr_hugepages=512)
  • Ensure the vfio-pci module is loaded (sudo modprobe vfio-pci)
  • For the primary interface uplinkInterfaces[0], set vppDriver to "virtio"
  • Also ensure that your vpp config has no dpdk stanza and that the dpdk plugin is disabled

Optionally, to set the number or size of rx queues, refer to the UplinkInterfaceSpec sub-section of Getting Started.
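Putting the steps above together, the resulting uplinkInterfaces entry would look like this sketch (eth1 is a placeholder interface name):

```yaml
CALICOVPP_INTERFACES: |-
  {
    "uplinkInterfaces": [
      {
        "interfaceName": "eth1",
        "vppDriver": "virtio"
      }
    ]
  }
```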

Using the native AVF driver

You can use this driver if your primary interface is supported by AVF (realpath /sys/bus/pci/devices/<PCI_ID>/driver gives .../i40e):

  • Ensure the vfio-pci module is loaded (sudo modprobe vfio-pci)
  • For the primary interface uplinkInterfaces[0], set vppDriver to "avf"
  • Also ensure that your vpp config has no dpdk stanza and that the dpdk plugin is disabled

Optionally, to set the number or size of rx queues, refer to the UplinkInterfaceSpec sub-section of Getting Started.
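As a sketch, the avf driver combined with the queue parameters shown in the general example above (the interface name, queue counts and queue sizes are illustrative):

```yaml
CALICOVPP_INTERFACES: |-
  {
    "uplinkInterfaces": [
      {
        "interfaceName": "eth1",
        "vppDriver": "avf",
        "rx": 2, "tx": 2, "rxqsz": 1024, "txqsz": 1024
      }
    ]
  }
```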

Using AF_XDP

Caution: ensure your kernel is at least 5.4 (check with uname -r).

  • For the primary interface uplinkInterfaces[0], set vppDriver to "af_xdp"
  • Also ensure that your vpp config has no dpdk stanza and that the dpdk plugin is disabled
  • Finally, FELIX_XDPENABLED should be set to false on the calico-node container, otherwise Felix will periodically clean up the VPP configuration:
```yaml
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: calico-node
          env:
            - name: FELIX_XDPENABLED
              value: 'false'
```

With kustomize use kubectl kustomize ./yaml/overlays/af-xdp | kubectl apply -f -

Optionally, to set the number or size of rx queues, or to customize whether we busy-poll the interface (polling), only use interrupts to wake us up (interrupt), or switch between both depending on the load (adaptive), refer to the UplinkInterfaceSpec sub-section of Getting Started.
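For example, assuming the rx-mode knob is exposed as an rxMode field on the uplink spec (an assumption here — check the UplinkInterfaceSpec reference for the exact field name), an adaptive af_xdp uplink could be sketched as:

```yaml
CALICOVPP_INTERFACES: |-
  {
    "uplinkInterfaces": [
      {
        "interfaceName": "eth1",
        "vppDriver": "af_xdp",
        "rxMode": "adaptive"
      }
    ]
  }
```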

Side notes

  • AF_XDP won't start if the buffers { buffers-per-numa } setting is too large (65536 should work)
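In the vpp config, that corresponds to a buffers stanza like the following (using the 65536 value from the note above):

```
buffers {
  buffers-per-numa 65536
}
```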

Using AF_PACKET

  • For the primary interface uplinkInterfaces[0], set vppDriver to "af_packet"
  • Also ensure that your vpp config has no dpdk stanza and that the dpdk plugin is disabled

You can also use kubectl kustomize ./yaml/overlays/af-packet | kubectl apply -f -

Using DPDK

  • Ensure hugepages are available on your system (sysctl -w vm.nr_hugepages=512)
  • For the primary interface uplinkInterfaces[0], set vppDriver to "dpdk"
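Unlike the native drivers above, dpdk keeps a dpdk { } stanza in the vpp config. For reference, such a stanza has roughly this shape (the PCI address is a placeholder):

```
dpdk {
  dev 0000:ab:cd.0 { num-rx-queues 1 }
}
```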

Using native drivers with vpp’s CLI

This is a rather advanced/experimental setup. We'll take the AVF driver as an example here, using the vpp CLI, but any VPP driver can be used. This allows efficiently supporting other interface types.

  • For the primary interface uplinkInterfaces[0], set vppDriver to "none"
  • Ensure that your vpp config has no dpdk stanza and that the dpdk plugin is disabled
  • Lastly, add an exec /etc/vpp/startup.exec entry in the unix { .. } section
```
vpp_config_template: |-
  unix {
    nodaemon
    full-coredump
    log /var/run/vpp/vpp.log
    cli-listen /var/run/vpp/cli.sock
    exec /etc/vpp/startup.exec
  }
  ...
  # removed dpdk { ... }
  ...
  plugins {
    plugin default { enable }
    plugin calico_plugin.so { enable }
    plugin dpdk_plugin.so { disable }
  }
```

Then update the CALICOVPP_CONFIG_EXEC_TEMPLATE environment variable to pass the interface creation cli(s).

```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-vpp-node
  namespace: calico-vpp-dataplane
spec:
  template:
    spec:
      containers:
        - name: vpp
          env:
            - name: CALICOVPP_CONFIG_EXEC_TEMPLATE
              value: 'create interface avf 0000:ab:cd.1 num-rx-queues 1'
```

In the specific case of the AVF driver, the PCI id must belong to a VF that can be created with the avf.sh script. Different drivers will have different requirements.