The VXLAN Plugin

System Requirements for VXLAN

In PRODUCT 4.X.0, this plugin supports only the KVM hypervisor with the standard Linux bridge.

The following table lists the requirements for the hypervisor.

Item         | Requirement                                 | Note
-------------|---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------
Hypervisor   | KVM                                         | OvsVifDriver is not supported by this plugin in PRODUCT 4.X; use BridgeVifDriver (default).
Linux kernel | version >= 3.7, VXLAN kernel module enabled | It is recommended to use kernel >= 3.9, since the Linux kernel categorizes the VXLAN driver as experimental before 3.9.
iproute2     | matches kernel version                      |

Table: Hypervisor Requirement for VXLAN

Linux Distributions that meet the requirements

The following table lists Linux distributions which meet the requirements.

Distribution | Release Version | Kernel Version (Date confirmed)        | Note
-------------|-----------------|----------------------------------------|-------------------------------------------------------------------
Ubuntu       | 13.04           | 3.8.0 (2013/07/23)                     |
Fedora       | >= 17           | 3.9.10 (2013/07/23)                    | Latest kernel packages are available in the "updates" repository.
CentOS       | >= 6.5          | 2.6.32-431.3.1.el6.x86_64 (2014/01/21) |

Table: List of Linux distributions which meet the hypervisor requirements

Check the capability of your system

To check the capability of your system, execute the following commands.

  $ sudo modprobe vxlan && echo $?
  # Confirm the output is "0".
  # If the output is a non-zero value or an error message, your kernel does not have the VXLAN kernel module.
  $ ip link add type vxlan help
  # Confirm the output is the usage of the command and that it is for VXLAN.
  # If it is not, your iproute2 utility does not support VXLAN.
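
If both checks pass, you can additionally create a throwaway VXLAN interface to confirm that the kernel and iproute2 work together. The interface name test-vxlan, the VNI 42 and the multicast group 239.0.0.42 below are arbitrary examples; replace eth0 with one of your physical interfaces.

  $ sudo ip link add test-vxlan type vxlan id 42 group 239.0.0.42 dev eth0
  # Confirm the command returns without an error.
  $ ip -d link show test-vxlan
  # Confirm the detailed output contains "vxlan id 42".
  $ sudo ip link del test-vxlan
  # Remove the test interface again.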

Important note on MTU size

When a new vxlan interface is created, the kernel obtains the current MTU of the underlying physical interface (ethX or the bridge) and creates the vxlan interface/bridge with an MTU exactly 50 bytes smaller. This means that in order to support the default MTU of 1500 bytes inside the VM, your vxlan interface/bridge must also have an MTU of 1500 bytes, which in turn requires your physical interface/bridge to have an MTU of at least 1550 bytes. To use "jumbo frames" you can, for example, configure the physical interface/bridge with an MTU of 9000 bytes; all vxlan interfaces will then be created with an MTU of 8950 bytes, and the MTU inside the VM can be set to 8950 bytes.
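
For example, to keep the default 1500-byte MTU inside instances, you could raise the MTU of the physical interface to 1550 bytes. The sketch below is not persistent across reboots and assumes eth0 as the interface name; put the permanent setting in your distribution's network configuration.

  $ sudo ip link set dev eth0 mtu 1550
  $ ip link show eth0
  # Confirm the output shows "mtu 1550"; vxlan interfaces created on top of it will then get an MTU of 1500.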

Important note on max number of multicast groups (and thus VXLAN interfaces)

The default value of "net.ipv4.igmp_max_memberships" (cat /proc/sys/net/ipv4/igmp_max_memberships) is "20", which means a host can join at most 20 multicast groups (attach at most 20 multicast IPs to the host). Since every VXLAN (VTEP) interface provisioned on the host is multicast-based (it belongs to a certain multicast group and thus has its own multicast IP used as the VTEP), you cannot provision more than 20 working VXLAN interfaces per host. On Linux kernel 3.x you can actually provision more than 20, but ARP requests will silently fail and cause networking problems for the clients. On Linux kernel 4.x you can NOT provision (start) more than 20 VXLAN interfaces, and the error message "No buffer space available" can be observed in the CloudStack Agent logs after provisioning the required bridges and VXLAN interfaces. Increase the parameter to a sane value (e.g. 100 or 200) as required. If you need to operate more than 20 VMs from different clients' networks, this change is required.
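
As a sketch, the limit can be raised to, for example, 200 with sysctl; the value is only an illustration, so pick one that covers the number of guest networks you expect per host.

  $ sudo sysctl -w net.ipv4.igmp_max_memberships=200
  # Make the change persistent across reboots.
  $ echo "net.ipv4.igmp_max_memberships=200" | sudo tee -a /etc/sysctl.conf
  $ sudo sysctl -p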

Advanced: Build kernel and iproute2

Even if your system doesn’t support VXLAN, you can compile the kernel and iproute2 by yourself. The following procedure is an example for CentOS 6.4.

Build kernel

  $ sudo yum groupinstall "Development Tools"
  $ sudo yum install ncurses-devel hmaccalc zlib-devel binutils-devel elfutils-libelf-devel bc
  $ KERNEL_VERSION=3.10.4
  # Declare the kernel version you want to build.
  $ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-${KERNEL_VERSION}.tar.xz
  $ tar xvf linux-${KERNEL_VERSION}.tar.xz
  $ cd linux-${KERNEL_VERSION}
  $ cp /boot/config-`uname -r` .config
  $ make oldconfig
  # You may keep hitting enter and choose the defaults.
  $ make menuconfig
  # Dig into "Device Drivers" -> "Network device support",
  # then select "Virtual eXtensible Local Area Network (VXLAN)" and hit space.
  # Make sure it indicates "<M>" (build as module), then Save and Exit.
  # You may also want to check "IPv4 NAT" and its child nodes in "IP: Netfilter Configuration"
  # and "IPv6 NAT" and its child nodes in "IPv6: Netfilter Configuration".
  # In 3.10.4, you can find the options in
  # "Networking support" -> "Networking options"
  # -> "Network packet filtering framework (Netfilter)".
  $ make # -j N
  # You may use the -j N option to make the build process parallel and faster;
  # generally N = 1 + (number of cores your machine has).
  $ sudo make modules_install
  $ sudo make install
  # You may get an error like "ERROR: modinfo: could not find module XXXX" here.
  # This happens mainly due to config structure changes between kernel versions.
  # You can ignore this error until you find you need the kernel module.
  # If you feel uneasy, you can go back to make menuconfig,
  # find module XXXX by using the '/' key, enable the module, and build and install the kernel again.
  $ sudo vi /etc/grub.conf
  # Make sure the new kernel isn't set as the default and the timeout is long enough,
  # so you can select the new kernel during the boot process.
  # It's not a good idea to set the new kernel as the default until you confirm it works fine.
  $ sudo reboot
  # Select the new kernel during the boot process.
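
After rebooting into the new kernel, you can verify that the running kernel is the one you built and that the VXLAN module is available:

  $ uname -r
  # Confirm it prints the version you built, e.g. 3.10.4.
  $ modinfo vxlan
  # Confirm the module information is printed without an error.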

Build iproute2

  $ sudo yum install db4-devel
  $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git
  $ cd iproute2
  $ git tag
  # Find the version that matches the kernel.
  # If you built kernel 3.10.4 as above, it would be v3.10.0.
  $ git checkout v3.10.0
  $ ./configure
  $ make # -j N
  $ sudo make install
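
You can then confirm that the freshly installed ip utility is the one in use and that it understands VXLAN:

  $ ip -V
  # Confirm the reported version matches the tag you checked out.
  $ ip link add type vxlan help
  # Confirm the usage text describes the vxlan link type.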

Note

Use the rebuilt kernel and tools at your own risk.

Configure PRODUCT to use VXLAN Plugin

Configure hypervisor

Configure hypervisor: KVM

In addition to “KVM Hypervisor Host Installation” in “PRODUCT Installation Guide”, you have to configure the following item on the host.

Create bridge interface with IPv4 address

This plugin requires an IPv4 address on the KVM host to terminate and originate VXLAN traffic. The address should be assigned to a physical interface or to a bridge interface bound to a physical interface. Either a private or a public address is fine for this purpose. The hypervisors in a zone do not have to be in the same subnet, but they must be able to reach each other via IP multicast on UDP port 8472. The name of a physical interface, or the name of a bridge interface bound to a physical interface, can be used as the traffic label. A physical interface name fits almost all cases, but if the physical interface name differs from host to host, you can use a bridge to give them the same name. If you would like to use a bridge name as the traffic label, you can create the bridge in the following way.

Let cloudbr1 be the bridge interface for the instances’ private network.

Configure in RHEL or CentOS

If you have configured the cloudbr1 interface as below,

  $ sudo vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1

  DEVICE=cloudbr1
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=none
  IPV6INIT=no
  IPV6_AUTOCONF=no
  DELAY=5
  STP=yes

you would change the configuration to be similar to the following.

  DEVICE=cloudbr1
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=192.0.2.X
  NETMASK=255.255.255.0
  IPV6INIT=no
  IPV6_AUTOCONF=no
  DELAY=5
  STP=yes

Configure in Ubuntu

If you have configured cloudbr1 as below,

  $ sudo vi /etc/network/interfaces

  auto lo
  iface lo inet loopback

  # The primary network interface
  auto eth0.100
  iface eth0.100 inet static
      address 192.168.42.11
      netmask 255.255.255.240
      gateway 192.168.42.1
      dns-nameservers 8.8.8.8 8.8.4.4
      dns-domain lab.example.org

  # Public network
  auto cloudbr0
  iface cloudbr0 inet manual
      bridge_ports eth0.200
      bridge_fd 5
      bridge_stp off
      bridge_maxwait 1

  # Private network
  auto cloudbr1
  iface cloudbr1 inet manual
      bridge_ports eth0.300
      bridge_fd 5
      bridge_stp off
      bridge_maxwait 1

you would change the configuration to be similar to the following.

  auto lo
  iface lo inet loopback

  # The primary network interface
  auto eth0.100
  iface eth0.100 inet static
      address 192.168.42.11
      netmask 255.255.255.240
      gateway 192.168.42.1
      dns-nameservers 8.8.8.8 8.8.4.4
      dns-domain lab.example.org

  # Public network
  auto cloudbr0
  iface cloudbr0 inet manual
      bridge_ports eth0.200
      bridge_fd 5
      bridge_stp off
      bridge_maxwait 1

  # Private network
  auto cloudbr1
  iface cloudbr1 inet static
      address 192.0.2.X
      netmask 255.255.255.0
      bridge_ports eth0.300
      bridge_fd 5
      bridge_stp off
      bridge_maxwait 1

Configure iptables to pass VXLAN packets

Since VXLAN uses UDP packets to forward the encapsulated L2 frames, UDP port 8472 must be opened.

Configure in RHEL or CentOS

RHEL and CentOS use iptables to firewall the system. You can open the extra port by executing the following iptables command:

  $ sudo iptables -I INPUT -p udp -m udp --dport 8472 -j ACCEPT

These iptables settings are not persistent across reboots, so we have to save them first.

  $ sudo iptables-save > /etc/sysconfig/iptables
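
Note that the shell redirection in the command above runs with your own privileges, so execute it from a root shell if /etc/sysconfig/iptables is not writable by your user. On RHEL or CentOS 6 the iptables init script provides an equivalent shortcut:

  $ sudo service iptables save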

With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.

  $ sudo service network restart
  $ sudo reboot

Warning

Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!

Configure in Ubuntu

The default firewall under Ubuntu is UFW (Uncomplicated FireWall), which is a Python wrapper around iptables.

To open the required port, execute the following command:

  $ sudo ufw allow proto udp from any to any port 8472

Note

By default UFW is not enabled on Ubuntu. Executing these commands with the firewall disabled does not enable the firewall.

With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.

  $ sudo service networking restart
  $ sudo reboot

Warning

Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!
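
On either distribution, once the firewall passes UDP port 8472 and guest networks are in use, you can verify that VXLAN traffic actually flows between hypervisors by capturing it on the physical interface (eth0 is an assumed name):

  $ sudo tcpdump -i eth0 -n udp port 8472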

Setup zone using VXLAN

For almost all parts of the zone setup, you can simply follow the advanced zone setup instructions in the "PRODUCT Installation Guide" to use this plugin. It is not required to add a network element or to reconfigure the network offering. The only thing you have to do is configure the physical network to use VXLAN as the isolation method for the Guest Network.

Configure the physical network

../_images/vxlan-physicalnetwork.png

CloudStack needs to have one physical network for Guest Traffic with the isolation method set to “VXLAN”.

../_images/vxlan-trafficlabel.png

The Guest Network traffic label should be the name of the physical interface or the name of the bridge interface, and that interface should have an IPv4 address. See "Create bridge interface with IPv4 address" above for details.

Configure the guest traffic

../_images/vxlan-vniconfig.png

Specify a range of VNIs you would like to use for carrying guest network traffic.

Warning

Each VNI must be unique within the zone; no duplicate VNIs can exist in the zone. Exercise care when designing your VNI allocation policy.