DC/OS Overlay

Understanding DC/OS Overlay

DC/OS Overlay is an SDN solution for UCR and Docker containers that comes pre-packaged with DC/OS and is enabled by default. DC/OS Overlay can run multiple virtual network instances in a given DC/OS cluster. Starting with DC/OS 1.11, DC/OS Overlay has support for creating IPv6 networks.

NOTE: IPv6 support is available only for Docker containers.

Features provided by DC/OS Overlay are:

  • Both Mesos and Docker containers can communicate with each other, whether they run on the same node or on different nodes in the cluster.
  • Services can be created such that their traffic is isolated from other traffic coming from any other virtual network or host in the cluster.
  • Removes the need to worry about potentially overlapping ports in applications, or the need to use nonstandard ports for services to avoid overlapping.
  • You can generate any number of instances of a class of tasks and have them all listen on the same port so that clients do not have to do port discovery.
  • You can run applications that require intra-cluster connectivity, like Cassandra, HDFS, and Riak.
  • You can create multiple virtual networks to isolate different portions of your organization, for instance, development, marketing, and production.

Details about the design and implementation of DC/OS Overlay can be found in the Overlay introduction. The default configuration of DC/OS Overlay provides an IPv4 virtual network, dcos, and an IPv6 virtual network, dcos6, whose YAML configuration is as follows:

  dcos_overlay_network:
    vtep_subnet: 44.128.0.0/20
    vtep_subnet6: fd01:a::/64
    vtep_mac_oui: 70:B3:D5:00:00:00
    overlays:
      - name: dcos
        subnet: 9.0.0.0/8
        prefix: 24
      - name: dcos6
        subnet6: fd01:b::/64
        prefix6: 80

Each virtual network is identified by a canonical name (see limitations for constraints on naming virtual networks). Containers launched on a virtual network get an IP address from the subnet allocated to the virtual network. To remove the dependency on a global IPAM, the overlay subnet is further split into smaller subnets. Each of the smaller subnets is allocated to an agent. The agents can then use a host-local IPAM to allocate IP addresses from their respective subnets to containers launched on the agent and attached to the given overlay. The prefix determines the size of the subnet (carved from the overlay subnet) allocated to each agent and thus defines the number of agents on which the overlay can run. For instance, in the default configuration above the virtual network dcos is allocated a /8 subnet (in the "subnet" field), which is then divided into /24 container subnets to be used on each host that will be part of the network (in the "prefix" field) as shown:

Figure 1. Virtual network address space

The bits reserved for ContainerID (8 in this example) are then subdivided into two equal groups (of 7 bits, in this example) that are used for Mesos containers and Docker containers, respectively. With the default configuration, each agent will be able to host a maximum of 2^7=128 Mesos containers and 128 Docker containers. With this specific configuration, if a service tries to launch more than 128 tasks on the Mesos containerizer or the Docker containerizer, it will receive a TASK_FAILED. Consult the limitations section of the main Virtual Networks page to learn more about this constraint.
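The address arithmetic above can be verified with Python's standard ipaddress module. The following is only an illustrative sketch of the math described in this section (it is not part of DC/OS), using the default dcos overlay values:

  import ipaddress

  # Default "dcos" overlay: a /8 network carved into /24 agent subnets.
  overlay = ipaddress.ip_network("9.0.0.0/8")
  agent_prefix = 24

  # Number of /24 agent subnets that fit into the /8 overlay: 2^(24 - 8) = 65536.
  num_agent_subnets = 2 ** (agent_prefix - overlay.prefixlen)

  # Each agent subnet is split in half: the lower /25 for Mesos (UCR) containers
  # and the upper /25 for Docker containers, i.e. 7 ContainerID bits each.
  first_agent_subnet = next(overlay.subnets(new_prefix=agent_prefix))
  mesos_half, docker_half = first_agent_subnet.subnets(new_prefix=agent_prefix + 1)

  print(num_agent_subnets)         # 65536 agent subnets available on the overlay
  print(mesos_half, docker_half)   # 9.0.0.0/25 and 9.0.0.128/25
  print(mesos_half.num_addresses)  # 128 addresses per containerizer half

The /25 halves printed here correspond to the mesos_bridge and docker_bridge subnets that appear in the per-agent state shown later on this page.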

While the above example is specifically for an IPv4 virtual network, the same logic can be applied to the IPv6 virtual network dcos6 as well. The only difference is that currently IPv6 is supported only for Docker containers.

WARNING: Trying to launch a UCR container on dcos6 will result in a container launch failure.

You can modify the default virtual network configuration and can add more virtual networks if needed. Currently, you can only add or delete a virtual network at install time.

Adding virtual networks during installation

DC/OS virtual networks can only be added and configured at install time. To replace or add another virtual network, reinstall DC/OS according to these instructions.

The default network can be overridden, or additional virtual networks can be configured, by modifying the config.yaml file:

  agent_list:
  - 10.10.0.117
  - 10.10.0.116
  # Use this bootstrap_url value unless the DC/OS installer assets have been moved.
  bootstrap_url: file:///opt/dcos_install_tmp
  cluster_name: <cluster-name>
  master_discovery: static
  master_list:
  - 10.10.0.120
  - 10.10.0.119
  - 10.10.0.118
  resolvers:
  - 8.8.4.4
  - 8.8.8.8
  ssh_port: 22
  ssh_user: centos
  dcos_overlay_network:
    vtep_subnet: 44.128.0.0/20
    vtep_mac_oui: 70:B3:D5:00:00:00
    overlays:
      - name: dcos
        subnet: 9.0.0.0/8
        prefix: 26
      - name: dcos-1
        subnet: 192.168.0.0/16
        prefix: 24

In the above example, two virtual networks have been defined. The virtual network dcos retains its default configuration, and another virtual network called dcos-1 with subnet range 192.168.0.0/16 has been added. In DC/OS Overlay, every virtual network must be associated with a name and a subnet. That name is used to launch Marathon tasks and other Mesos framework tasks on this specific virtual network (see usage). Due to restrictions on the size of Linux device names, the virtual network name must be less than thirteen characters. Consult the limitations section of the main Virtual Networks page to learn more.

Retrieving virtual network state

After DC/OS installation is complete, the virtual network configuration can be obtained from the https://leader.mesos:5050/overlay-master/state endpoint. In the snippet below, the network section lists the current overlay configuration, and the agents section shows how the overlays are split across the Mesos agents. The following shows the network state for a cluster configured with the default dcos and dcos6 virtual networks.

  {
    "agents": [
      {
        "ip": "172.17.0.2",
        "overlays": [
          {
            "backend": {
              "vxlan": {
                "vni": 1024,
                "vtep_ip": "44.128.0.1/20",
                "vtep_ip6": "fd01:a::1/64",
                "vtep_mac": "70:b3:d5:80:00:01",
                "vtep_name": "vtep1024"
              }
            },
            "info": {
              "name": "dcos",
              "prefix": 24,
              "subnet": "9.0.0.0/8"
            },
            "state": {
              "status": "STATUS_OK"
            },
            "subnet": "9.0.0.0/24"
          },
          {
            "backend": {
              "vxlan": {
                "vni": 1024,
                "vtep_ip": "44.128.0.1/20",
                "vtep_ip6": "fd01:a::1/64",
                "vtep_mac": "70:b3:d5:80:00:01",
                "vtep_name": "vtep1024"
              }
            },
            "info": {
              "name": "dcos6",
              "prefix6": 80,
              "subnet6": "fd01:b::/64"
            },
            "state": {
              "status": "STATUS_OK"
            },
            "subnet6": "fd01:b::/80"
          }
        ]
      },
      {
        "ip": "172.17.0.4",
        "overlays": [
          {
            "backend": {
              "vxlan": {
                "vni": 1024,
                "vtep_ip": "44.128.0.2/20",
                "vtep_ip6": "fd01:a::2/64",
                "vtep_mac": "70:b3:d5:80:00:02",
                "vtep_name": "vtep1024"
              }
            },
            "docker_bridge": {
              "ip": "9.0.1.128/25",
              "name": "d-dcos"
            },
            "info": {
              "name": "dcos",
              "prefix": 24,
              "subnet": "9.0.0.0/8"
            },
            "mesos_bridge": {
              "ip": "9.0.1.0/25",
              "name": "m-dcos"
            },
            "state": {
              "status": "STATUS_OK"
            },
            "subnet": "9.0.1.0/24"
          },
          {
            "backend": {
              "vxlan": {
                "vni": 1024,
                "vtep_ip": "44.128.0.2/20",
                "vtep_ip6": "fd01:a::2/64",
                "vtep_mac": "70:b3:d5:80:00:02",
                "vtep_name": "vtep1024"
              }
            },
            "docker_bridge": {
              "ip6": "fd01:b::1:8000:0:0/81",
              "name": "d-dcos6"
            },
            "info": {
              "name": "dcos6",
              "prefix6": 80,
              "subnet6": "fd01:b::/64"
            },
            "mesos_bridge": {
              "ip6": "fd01:b:0:0:1::/81",
              "name": "m-dcos6"
            },
            "state": {
              "status": "STATUS_OK"
            },
            "subnet6": "fd01:b:0:0:1::/80"
          }
        ]
      },
      {
        "ip": "172.17.0.3",
        "overlays": [
          {
            "backend": {
              "vxlan": {
                "vni": 1024,
                "vtep_ip": "44.128.0.3/20",
                "vtep_ip6": "fd01:a::3/64",
                "vtep_mac": "70:b3:d5:80:00:03",
                "vtep_name": "vtep1024"
              }
            },
            "docker_bridge": {
              "ip": "9.0.2.128/25",
              "name": "d-dcos"
            },
            "info": {
              "name": "dcos",
              "prefix": 24,
              "subnet": "9.0.0.0/8"
            },
            "mesos_bridge": {
              "ip": "9.0.2.0/25",
              "name": "m-dcos"
            },
            "state": {
              "status": "STATUS_OK"
            },
            "subnet": "9.0.2.0/24"
          },
          {
            "backend": {
              "vxlan": {
                "vni": 1024,
                "vtep_ip": "44.128.0.3/20",
                "vtep_ip6": "fd01:a::3/64",
                "vtep_mac": "70:b3:d5:80:00:03",
                "vtep_name": "vtep1024"
              }
            },
            "docker_bridge": {
              "ip6": "fd01:b::2:8000:0:0/81",
              "name": "d-dcos6"
            },
            "info": {
              "name": "dcos6",
              "prefix6": 80,
              "subnet6": "fd01:b::/64"
            },
            "mesos_bridge": {
              "ip6": "fd01:b:0:0:2::/81",
              "name": "m-dcos6"
            },
            "state": {
              "status": "STATUS_OK"
            },
            "subnet6": "fd01:b:0:0:2::/80"
          }
        ]
      }
    ],
    "network": {
      "overlays": [
        {
          "name": "dcos",
          "prefix": 24,
          "subnet": "9.0.0.0/8"
        },
        {
          "name": "dcos6",
          "prefix6": 80,
          "subnet6": "fd01:b::/64"
        }
      ],
      "vtep_mac_oui": "70:B3:D5:00:00:00",
      "vtep_subnet": "44.128.0.0/20",
      "vtep_subnet6": "fd01:a::/64"
    }
  }
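If you prefer to inspect this state from a script rather than a browser, a few lines of Python are enough to fetch the endpoint and summarize how the overlay subnets were split across agents. The sketch below is only illustrative: it assumes the endpoint is reachable from where it runs and that no authentication token is required (the URL scheme and authentication requirements depend on how your cluster is configured).

  import json
  import urllib.request

  # Overlay state endpoint on the leading Mesos master (see the snippet above).
  URL = "https://leader.mesos:5050/overlay-master/state"

  with urllib.request.urlopen(URL) as response:
      state = json.load(response)

  # Print the slice of each overlay subnet that every agent received.
  for agent in state.get("agents", []):
      print(agent["ip"])
      for overlay in agent.get("overlays", []):
          name = overlay["info"]["name"]
          # IPv4 overlays report "subnet"; IPv6 overlays report "subnet6".
          print(f"  {name}: {overlay.get('subnet') or overlay.get('subnet6')}")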

Deleting Virtual Networks

To delete a virtual network, uninstall DC/OS, then delete the overlay replicated log on the master nodes and the iptables rules on the agent nodes that are associated with the virtual networks.

The Overlay Replicated Log

DC/OS Overlay uses a replicated log to persist the virtual network state across Mesos master reboots and to recover overlay state when a new Mesos master is elected. The overlay replicated log is stored at /var/lib/dcos/mesos/master/overlay_replicated_log. The overlay replicated log is not removed when DC/OS is uninstalled from the cluster, so you need to delete this log manually before reinstalling DC/OS. Otherwise, the Mesos master will try to reconcile the existing overlay replicated log during startup and will fail if it finds a virtual network that was not configured.

NOTE: The overlay replicated log is different from the master’s replicated log, which is stored at /var/lib/dcos/mesos/master/replicated_log. Removing the overlay replicated log will have no effect on the master’s recovery semantics.
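The overlay replicated log is an ordinary directory on each master node's disk, so any file-removal tool will do. The sketch below is one way to remove it with Python's standard library; it assumes the default path shown above and should be run as root on every master node, only after DC/OS has been uninstalled.

  import shutil
  from pathlib import Path

  # Default location of the overlay replicated log; adjust if your installation differs.
  OVERLAY_LOG = Path("/var/lib/dcos/mesos/master/overlay_replicated_log")

  if OVERLAY_LOG.exists():
      shutil.rmtree(OVERLAY_LOG)   # remove the directory and everything under it
      print(f"Removed {OVERLAY_LOG}")
  else:
      print(f"{OVERLAY_LOG} not found; nothing to do")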

iptables

The virtual networks install IPMASQ rules to allow containers to talk outside the virtual network. When you delete or replace virtual networks, you must remove the rules associated with the previous virtual networks. To remove the IPMASQ rules associated with each overlay, remove the IPMASQ rule from the POSTROUTING chain of the NAT table that corresponds to the virtual network's subnet. Remove these rules on each agent node.
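One way to do this is to list the POSTROUTING chain of the NAT table in rule-specification form and delete every MASQUERADE rule that references the overlay subnet. The Python sketch below illustrates the idea by shelling out to iptables; it is an assumption-laden example, not part of DC/OS: run it as root on each agent node, substitute the subnet of the network you are removing, and verify the output of iptables -t nat -S POSTROUTING yourself before deleting anything.

  import subprocess

  # Subnet of the virtual network being removed (the default "dcos" overlay here).
  OVERLAY_SUBNET = "9.0.0.0/8"

  # List POSTROUTING rules as specifications, e.g. "-A POSTROUTING -s 9.0.0.0/8 ... -j MASQUERADE".
  rules = subprocess.run(
      ["iptables", "-t", "nat", "-S", "POSTROUTING"],
      check=True, capture_output=True, text=True,
  ).stdout.splitlines()

  for rule in rules:
      if OVERLAY_SUBNET in rule and "MASQUERADE" in rule:
          # Re-issue the same specification with -D to delete the rule.
          # Note: naive splitting does not handle rules containing quoted strings.
          spec = rule.replace("-A", "-D", 1).split()
          subprocess.run(["iptables", "-t", "nat"] + spec, check=True)
          print("Deleted:", rule)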

Replacing or Adding New Virtual Networks

To replace your virtual network, uninstall DC/OS and delete the overlay replicated log on the master nodes and the iptables rules on the agent nodes. Then, reinstall with the desired networks specified in your config.yaml file.

Troubleshooting

The Networking tab of the DC/OS web interface provides information helpful for troubleshooting. It contains information about the virtual networks from DC/OS Overlay that a container is associated with and the IP address of the container on that virtual network.

NOTE: The Networking tab currently displays information only about containers that are associated with virtual networks managed by DC/OS Overlay. It does not have information about containers running on virtual networks managed by any other CNI/CNM provider.

Limitations

  • DC/OS Overlay supports IPv6 networks only for Docker containers. Launching UCR containers on an IPv6 network will result in a container launch failure. For future compatibility, however, when a subnet is allocated to an agent, IPv6 subnets are pre-allocated to the MesosContainerizer and the DockerContainerizer following the same logic used for IPv4 networks.

  • DC/OS Overlay does not allow services to reserve IP addresses. As a result, containers receive ephemeral addresses that can change across multiple incarnations on the virtual network, so an IP address alone is not enough to ensure that a given client connects to the correct service.

DC/OS provides FQDNs in different zones that offer a clean way of accessing services through predictable URLs. If you are using DC/OS Overlay, you should use one of the FQDNs provided by the DC/OS DNS service to make it easy for clients to discover the location of your service.

  • The total number of containers on a DC/OS Overlay network is limited by the number of IP addresses available on the overlay subnet. However, the number of containers on a given agent is limited by the subnet (a subset of the overlay subnet) allocated to that agent. For a given agent subnet, half the address space is allocated to the MesosContainerizer and the other half is allocated to the DockerContainerizer.

  • In DC/OS Overlay, the subnet of a virtual network is sliced into smaller subnets and these smaller subnets are allocated to agents. When an agent has exhausted its allocated address range and a service tries to launch a container on the virtual network on this agent, the container launch will fail and the service will receive a TASK_FAILED message.

    Since there is no API to report the exhaustion of addresses on an agent, it is up to the service to conclude that containers cannot be launched on a virtual network due to lack of IP addresses on the agent. This limitation has a direct impact on the behavior of services, such as Marathon, that try to launch services with a specified number of instances. Due to this limitation, services such as Marathon might not be able to complete their obligation of launching a service on a virtual network if they try to launch instances of a service on an agent that has exhausted its allocated IP address range.

    Keep this limitation in mind when debugging issues on frameworks that use a virtual network and you see the TASK_FAILED message.

  • DC/OS Overlay uses Linux bridge devices on agents to connect Mesos and Docker containers to the virtual network. The names of these bridge devices are derived from the virtual network name. Since Linux has a limitation of fifteen characters on network device names, there is a character limit of thirteen characters for the virtual network name (two characters are used to distinguish between a CNI bridge and a Docker bridge on the virtual network).

  • Certain names are reserved and cannot be used as DC/OS Overlay names. This is because DC/OS Overlay uses Docker networking underneath to connect Docker containers to the overlay, which in turn reserves certain network names. The reserved names are: host, bridge, and default. A minimal check for this constraint and the name-length limit above is sketched after this list.

  • Marathon health checks will not work with certain DC/OS Overlay configurations. If you are not using the default DC/OS Overlay configuration and Marathon is isolated from the virtual network, health checks will fail consistently even if the service is healthy.

    Marathon health checks will work in any of the following circumstances:

    • You are using the default DC/OS Overlay configuration.
    • Marathon has access to the virtual network.
    • You use a command health check.
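
As a convenience, the naming constraints listed above (the thirteen-character limit and the reserved Docker network names) can be checked before writing a name into config.yaml. The helper below is purely illustrative; the limits themselves come from the list above.

  # Names reserved by Docker networking, and the name-length limit that follows
  # from the 15-character Linux device-name limit minus the 2-character bridge prefix.
  RESERVED_NAMES = {"host", "bridge", "default"}
  MAX_NAME_LENGTH = 13

  def check_overlay_name(name: str) -> None:
      """Raise ValueError if the name violates the DC/OS Overlay naming rules above."""
      if name in RESERVED_NAMES:
          raise ValueError(f"'{name}' is a reserved network name")
      if len(name) > MAX_NAME_LENGTH:
          raise ValueError(f"'{name}' is longer than {MAX_NAME_LENGTH} characters")

  check_overlay_name("dcos-1")                  # passes silently
  try:
      check_overlay_name("development-network")
  except ValueError as err:
      print(err)                                # 'development-network' is longer than 13 characters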