Host LXC Installation

System Requirements for LXC Hosts

LXC requires the Linux kernel cgroups functionality, which is available starting with kernel version 2.6.24. Although you are not required to run these distributions, the following are recommended:

  • CentOS / RHEL: 6.3
  • Ubuntu: 12.04(.1)

The main requirements for LXC hypervisors are the libvirt and Qemu versions. No matter what Linux distribution you are using, make sure the following requirements are met:

  • libvirt: 1.0.0 or higher
  • Qemu/KVM: 1.0 or higher
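
If libvirt and Qemu are already installed on the host, you can quickly confirm that the versions meet these requirements; a minimal check (assuming the packages provide the usual libvirtd and qemu-img binaries):

  $ libvirtd --version
  $ qemu-img --version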

The default bridge in CloudStack is the Linux native bridge implementation (bridge module). CloudStack includes an option to work with OpenVswitch; the requirements are listed below:

  • libvirt: 1.0.0 or higher
  • openvswitch: 1.7.1 or higher
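
If you intend to use OpenVswitch and it is already installed, the version can be verified with (assuming the standard openvswitch userspace tools):

  $ ovs-vsctl --version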

In addition, the following hardware requirements apply:

  • Within a single cluster, the hosts must be of the same distribution version.
  • All hosts within a cluster must be homogenous. The CPUs must be of the same type, count, and feature flags.
  • Must support HVM (Intel-VT or AMD-V enabled); a quick check is shown after this list
  • 64-bit x86 CPU (more cores result in better performance)
  • 4 GB of memory
  • At least 1 NIC
  • When you deploy CloudStack, the hypervisor host must not have any VMs already running
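
To check whether the CPU supports hardware virtualization (Intel-VT shows up as the vmx flag, AMD-V as svm) and to compare feature flags between hosts, inspect /proc/cpuinfo; for example:

  $ egrep -c '(vmx|svm)' /proc/cpuinfo

A result greater than zero means the CPU supports HVM; the feature must also be enabled in the BIOS of the host.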

LXC Installation Overview

LXC does not have any native system VMs; instead, KVM is used to run the system VMs. This means that your host will need to support both LXC and KVM, so most of the installation and configuration will be identical to the KVM installation. The material in this section doesn’t duplicate the KVM installation documentation; it provides the CloudStack-specific steps that are needed to prepare a KVM host to work with CloudStack.

Warning

Before continuing, make sure that you have applied the latest updates to your host.

Warning

It is NOT recommended to run services on this host that are not controlled by CloudStack.

The procedure for installing an LXC Host is:

  1. Prepare the Operating System
  2. Install and configure the Agent
  3. Install and configure libvirt
  4. Configure Security Policies (AppArmor and SELinux)
  5. Configure the network bridges
  6. Configure the firewall

Prepare the Operating System

The OS of the Host must be prepared to host the CloudStack Agent and run KVM instances.

  1. Log in to your OS as root.

  2. Check for a fully qualified hostname.

    $ hostname --fqdn

    This should return a fully qualified hostname such as “kvm1.lab.example.org”. If it does not, edit /etc/hosts so that it does.
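
    For example, if the host's IP address is 192.168.42.11 (as in the network example later in this section), a line like the following in /etc/hosts would provide a fully qualified name (the hostname shown is only illustrative):

    192.168.42.11 kvm1.lab.example.org kvm1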

  3. Make sure that the machine can reach the Internet.

    $ ping www.cloudstack.org
  4. Turn on NTP for time synchronization.

    Note

    NTP is required to synchronize the clocks of the servers in your cloud. Unsynchronized clocks can cause unexpected problems.

    1. Install NTP

      On RHEL or CentOS:

      $ yum install ntp

      On Ubuntu:

      $ apt-get install openntpd
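
      On RHEL or CentOS you will typically also want the NTP daemon to start now and at boot (Ubuntu's openntpd package normally starts the daemon on installation); for example:

      $ chkconfig ntpd on
      $ service ntpd start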
  5. Repeat all of these steps on every hypervisor host.

Install and configure the Agent

To manage LXC instances on the host, CloudStack uses an Agent. This Agent communicates with the Management server and controls all the instances on the host.

First we start by installing the agent:

In RHEL or CentOS:

  $ yum install cloudstack-agent

In Ubuntu:

  $ apt-get install cloudstack-agent

The next step is to update the Agent configuration settings, which are in /etc/cloudstack/agent/agent.properties.

  1. Set the Agent to run in LXC mode:

    hypervisor.type=lxc
  2. Optional: If you would like to use direct networking (instead of the default bridge networking), configure these lines:

    libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.DirectVifDriver
    network.direct.source.mode=private
    network.direct.device=eth0

Install and Configure libvirt

CloudStack uses libvirt for managing virtual machines. Therefore it is vital that libvirt is configured correctly. Libvirt is a dependency of cloudstack-agent and should already be installed.

  1. In order to have live migration working, libvirt has to listen for unsecured TCP connections. We also need to turn off libvirt's attempt to use Multicast DNS advertising. Both of these settings are in /etc/libvirt/libvirtd.conf.

    Set the following parameters:

    listen_tls = 0
    listen_tcp = 1
    tcp_port = "16509"
    auth_tcp = "none"
    mdns_adv = 0
  2. Turning on “listen_tcp” in libvirtd.conf is not enough; we also have to change the parameters with which libvirtd is started:

    On RHEL or CentOS modify /etc/sysconfig/libvirtd:

    Uncomment the following line:

    #LIBVIRTD_ARGS="--listen"

    On Ubuntu, modify /etc/default/libvirt-bin

    Add “-l” to the following line:

    libvirtd_opts="-d"

    so it looks like:

    libvirtd_opts="-d -l"
  3. In order to have the VNC console work, we have to make sure it binds to 0.0.0.0. We do this by editing /etc/libvirt/qemu.conf.

    Make sure this parameter is set:

    vnc_listen = "0.0.0.0"
  4. Restart libvirt

    In RHEL or CentOS:

    $ service libvirtd restart

    In Ubuntu:

    $ service libvirt-bin restart
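
After the restart you can verify that libvirtd is actually listening for TCP connections on port 16509; a quick check, assuming the netstat utility is available:

  $ netstat -tlnp | grep 16509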

Configure the Security Policies

CloudStack does various things which can be blocked by security mechanisms like AppArmor and SELinux. These have to be disabled to ensure the Agent has all the required permissions.

  1. Configure SELinux (RHEL and CentOS)

    1. Check to see whether SELinux is installed on your machine. If not, you can skip this section.

      In RHEL or CentOS, SELinux is installed and enabled by default. You can verify this with:

      $ rpm -qa | grep selinux
    2. Set the SELINUX variable in /etc/selinux/config to “permissive”. This ensures that the permissive setting will be maintained after a system reboot.

      In RHEL or CentOS:

      $ vi /etc/selinux/config

      Change the following line

      SELINUX=enforcing

      to this

      SELINUX=permissive
    3. Then set SELinux to permissive starting immediately, without requiring a system reboot.

      $ setenforce permissive
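
      You can confirm the new mode with getenforce, which should now report “Permissive”:

      $ getenforce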
  2. Configure AppArmor (Ubuntu)

    1. Check to see whether AppArmor is installed on your machine. If not, you can skip this section.

      In Ubuntu AppArmor is installed and enabled by default. You can verify this with:

      $ dpkg --list 'apparmor'
    2. Disable the AppArmor profiles for libvirt

      $ ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
      $ ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
      $ apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
      $ apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper
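
      Afterwards you can verify that the two libvirt profiles are no longer loaded (aa-status is provided by the apparmor package):

      $ aa-status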

Configure the network bridges

Warning

This is a very important section, please make sure you read this thoroughly.

Note

This section details how to configure bridges using the native implementation in Linux. Please refer to the next section if you intend to use OpenVswitch.

In order to forward traffic to your instances you will need at least two bridges: public and private.

By default these bridges are called cloudbr0 and cloudbr1, but you do have to make sure they are available on each hypervisor.

The most important factor is that you keep the configuration consistent on all your hypervisors.

Network example

There are many ways to configure your network. In the Basic networking mode you should have two (V)LANs, one for your private network and one for the public network.

We assume that the hypervisor has one NIC (eth0) with three tagged VLANs:

  1. VLAN 100 for management of the hypervisor
  2. VLAN 200 for public network of the instances (cloudbr0)
  3. VLAN 300 for private network of the instances (cloudbr1)

On VLAN 100 we give the hypervisor the IP address 192.168.42.11/24 with the gateway 192.168.42.1.

Note

The Hypervisor and Management server don’t have to be in the same subnet!

Configuring the network bridges

How to configure these bridges depends on the distribution you are using; below you’ll find examples for RHEL/CentOS and Ubuntu.

Note

The goal is to have two bridges called ‘cloudbr0’ and ‘cloudbr1’ after this section. This should be used as a guideline only. The exact configuration will depend on your network layout.

Configure in RHEL or CentOS

The required packages were installed when libvirt was installed, so we can proceed to configuring the network.

First we configure eth0:

  $ vi /etc/sysconfig/network-scripts/ifcfg-eth0

Make sure it looks similar to:

  DEVICE=eth0
  HWADDR=00:04:xx:xx:xx:xx
  ONBOOT=yes
  HOTPLUG=no
  BOOTPROTO=none
  TYPE=Ethernet

We now have to configure the three VLAN interfaces:

  $ vi /etc/sysconfig/network-scripts/ifcfg-eth0.100

  DEVICE=eth0.100
  HWADDR=00:04:xx:xx:xx:xx
  ONBOOT=yes
  HOTPLUG=no
  BOOTPROTO=none
  TYPE=Ethernet
  VLAN=yes
  IPADDR=192.168.42.11
  GATEWAY=192.168.42.1
  NETMASK=255.255.255.0

  $ vi /etc/sysconfig/network-scripts/ifcfg-eth0.200

  DEVICE=eth0.200
  HWADDR=00:04:xx:xx:xx:xx
  ONBOOT=yes
  HOTPLUG=no
  BOOTPROTO=none
  TYPE=Ethernet
  VLAN=yes
  BRIDGE=cloudbr0

  $ vi /etc/sysconfig/network-scripts/ifcfg-eth0.300

  DEVICE=eth0.300
  HWADDR=00:04:xx:xx:xx:xx
  ONBOOT=yes
  HOTPLUG=no
  BOOTPROTO=none
  TYPE=Ethernet
  VLAN=yes
  BRIDGE=cloudbr1

Now that we have the VLAN interfaces configured, we can add the bridges on top of them.

  $ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0

Now we just configure it as a plain bridge without an IP address:

  DEVICE=cloudbr0
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=none
  IPV6INIT=no
  IPV6_AUTOCONF=no
  DELAY=5
  STP=yes

We do the same for cloudbr1:

  $ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1

  DEVICE=cloudbr1
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=none
  IPV6INIT=no
  IPV6_AUTOCONF=no
  DELAY=5
  STP=yes

With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.

Warning

Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!
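
On RHEL or CentOS, for example, the network can be restarted and the resulting bridges listed with (brctl is provided by the bridge-utils package):

  $ service network restart
  $ brctl show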

Configure in Ubuntu

All the required packages were installed when you installed libvirt, so we only have to configure the network.

  $ vi /etc/network/interfaces

Modify the interfaces file to look like this:

  auto lo
  iface lo inet loopback

  # The primary network interface
  auto eth0.100
  iface eth0.100 inet static
      address 192.168.42.11
      netmask 255.255.255.0
      gateway 192.168.42.1
      dns-nameservers 8.8.8.8 8.8.4.4
      dns-domain lab.example.org

  # Public network
  auto cloudbr0
  iface cloudbr0 inet manual
      bridge_ports eth0.200
      bridge_fd 5
      bridge_stp off
      bridge_maxwait 1

  # Private network
  auto cloudbr1
  iface cloudbr1 inet manual
      bridge_ports eth0.300
      bridge_fd 5
      bridge_stp off
      bridge_maxwait 1

With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.

Warning

Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!
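
On Ubuntu you can either reboot or bring the new bridges up individually, for example with ifup (brctl is provided by the bridge-utils package, which is also needed for the bridge_ports lines above):

  $ ifup cloudbr0
  $ ifup cloudbr1
  $ brctl show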

Configuring the firewall

The hypervisor needs to be able to communicate with other hypervisors and the management server needs to be able to reach the hypervisor.

In order to do so we have to open the following TCP ports (if you are using a firewall):

  • 22 (SSH)
  • 1798
  • 16509 (libvirt)
  • 5900 - 6100 (VNC consoles)
  • 49152 - 49216 (libvirt live migration)

How to open these ports depends on the firewall you are using. Below you’ll find examples of how to open them in RHEL/CentOS and Ubuntu.

Open ports in RHEL/CentOS

RHEL and CentOS use iptables for firewalling the system; you can open extra ports by executing the following iptables commands:

  $ iptables -I INPUT -p tcp -m tcp --dport 22 -j ACCEPT
  $ iptables -I INPUT -p tcp -m tcp --dport 1798 -j ACCEPT
  $ iptables -I INPUT -p tcp -m tcp --dport 16509 -j ACCEPT
  $ iptables -I INPUT -p tcp -m tcp --dport 5900:6100 -j ACCEPT
  $ iptables -I INPUT -p tcp -m tcp --dport 49152:49216 -j ACCEPT

These iptables settings are not persistent across reboots, so we have to save them first.

  $ iptables-save > /etc/sysconfig/iptables

Open ports in Ubuntu

The default firewall under Ubuntu is UFW (Uncomplicated FireWall), which is a Python wrapper around iptables.

To open the required ports, execute the following commands:

  $ ufw allow proto tcp from any to any port 22
  $ ufw allow proto tcp from any to any port 1798
  $ ufw allow proto tcp from any to any port 16509
  $ ufw allow proto tcp from any to any port 5900:6100
  $ ufw allow proto tcp from any to any port 49152:49216

Note

By default UFW is not enabled on Ubuntu. Executing these commands with the firewall disabled does not enable the firewall.
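
Whether UFW is active, and which rules are currently loaded, can be checked with:

  $ ufw status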

Add the host to CloudStack

The host is now ready to be added to a cluster. This is covered in a later section, see Adding a Host. It is recommended that you continue to read the documentation before adding the host!