Service IPs

Calico supports two approaches for assigning a service IP to a Calico-networked VM:

  • using a floating IP

  • using an additional fixed IP on the relevant Neutron port.

Both of these are standard Neutron practice - in other words, operations that have long been supported on the Neutron API. They are not Calico-specific, except that the Calico driver must implement some of the low-level operations needed to make the expected semantics work.

The key semantic difference between those approaches is that:

  • With a floating IP, the target VM itself is not aware of the service IP. Instead, data sent to the floating IP is DNAT’d, to the target VM’s fixed IP, before that data reaches the target VM. So the target VM only ever sees data addressed to its fixed IP.

  • With the service IP as an additional fixed IP, the target VM is (and must be) aware of the service IP, because data addressed to the service IP reaches the target VM without any DNAT.
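To make the DNAT point concrete: the translation that a floating IP relies on is conceptually like the following NAT rule (an illustrative fragment only, with a made-up floating IP of 192.0.2.10; Calico and Neutron program the real translation rules themselves, so you would never add this by hand):

```shell
# Illustration only: rewrite the destination of traffic arriving for the
# floating IP to the VM's fixed IP, before it is routed to the VM.
# The VM therefore only ever sees packets addressed to 10.28.0.13.
iptables -t nat -A PREROUTING -d 192.0.2.10 -j DNAT --to-destination 10.28.0.13
```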

The use of floating IPs is already well known, so we won’t cover it in detail here. For more on how Calico supports floating IPs, see Floating IPs.

The use and maintenance of additional fixed IPs, however, is not so well known, so in the following transcripts we demonstrate this approach for assigning a service IP to a Calico-networked VM.

We begin by creating a test VM that will be the target of the service IP.

Creating a test VM

  1. Check the name of the available CirrOS image.

    nova image-list

    It should return a list of the images and their names.

    WARNING: Command image-list is deprecated and will be removed after Nova 15.0.0 is released. Use python-glanceclient or openstackclient instead.
    +--------------------------------------+---------------------+--------+--------+
    | ID | Name | Status | Server |
    +--------------------------------------+---------------------+--------+--------+
    | b69ab3bd-2bbc-4086-b4ae-f01d9f6b5078 | cirros-0.3.2-x86_64 | ACTIVE | |
    | 866879b9-532b-44c6-a547-ac59de68df2d | ipv6_enabled_image | ACTIVE | |
    +--------------------------------------+---------------------+--------+--------+
  2. Boot a VM.

    nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-name=demo-net testvm1

    The response should look similar to the following.

    +--------------------------------------+------------------------------------------------------------+
    | Property | Value |
    +--------------------------------------+------------------------------------------------------------+
    | OS-DCF:diskConfig | MANUAL |
    | OS-EXT-AZ:availability_zone | nova |
    | OS-EXT-SRV-ATTR:host | - |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | - |
    | OS-EXT-SRV-ATTR:instance_name | instance-0000000d |
    | OS-EXT-STS:power_state | 0 |
    | OS-EXT-STS:task_state | scheduling |
    | OS-EXT-STS:vm_state | building |
    | OS-SRV-USG:launched_at | - |
    | OS-SRV-USG:terminated_at | - |
    | accessIPv4 | |
    | accessIPv6 | |
    | adminPass | HKLzcUT5L52B |
    | config_drive | |
    | created | 2017-01-13T13:50:32Z |
    | flavor | m1.tiny (1) |
    | hostId | |
    | id | b6d8a3c4-9674-4972-9151-11107b60d622 |
    | image | cirros-0.3.2-x86_64 (b69ab3bd-2bbc-4086-b4ae-f01d9f6b5078) |
    | key_name | - |
    | metadata | {} |
    | name | testvm1 |
    | os-extended-volumes:volumes_attached | [] |
    | progress | 0 |
    | security_groups | default |
    | status | BUILD |
    | tenant_id | 26778b0f745143c5a9b0c7e1a621bb80 |
    | updated | 2017-01-13T13:50:32Z |
    | user_id | 7efbea74c20a4eeabc00b7740aa4d353 |
    +--------------------------------------+------------------------------------------------------------+
  3. Check whether the VM has finished booting.

    nova list

    You should see your VM listed with status ACTIVE.

    +--------------------------------------+---------+--------+------------+-------------+----------------------------------------------+
    | ID | Name | Status | Task State | Power State | Networks |
    +--------------------------------------+---------+--------+------------+-------------+----------------------------------------------+
    | b6d8a3c4-9674-4972-9151-11107b60d622 | testvm1 | ACTIVE | - | Running | demo-net=10.28.0.13, fd5f:5d21:845:1c2e:2::d |
    +--------------------------------------+---------+--------+------------+-------------+----------------------------------------------+
  4. Use the following command to obtain the status of the VM.

    nova show testvm1

    It should return something like the following.

    +--------------------------------------+------------------------------------------------------------+
    | Property | Value |
    +--------------------------------------+------------------------------------------------------------+
    | OS-DCF:diskConfig | MANUAL |
    | OS-EXT-AZ:availability_zone | neil-fv-0-ubuntu-kilo-compute-node01 |
    | OS-EXT-SRV-ATTR:host | neil-fv-0-ubuntu-kilo-compute-node01 |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | neil-fv-0-ubuntu-kilo-compute-node01 |
    | OS-EXT-SRV-ATTR:instance_name | instance-0000000d |
    | OS-EXT-STS:power_state | 1 |
    | OS-EXT-STS:task_state | - |
    | OS-EXT-STS:vm_state | active |
    | OS-SRV-USG:launched_at | 2017-01-13T13:50:39.000000 |
    | OS-SRV-USG:terminated_at | - |
    | accessIPv4 | |
    | accessIPv6 | |
    | config_drive | |
    | created | 2017-01-13T13:50:32Z |
    | demo-net network | 10.28.0.13, fd5f:5d21:845:1c2e:2::d |
    | flavor | m1.tiny (1) |
    | hostId | bf3ce3c7146ba6cafd43be03886de8755e2b5c8e9f71aa9bfafde9a0 |
    | id | b6d8a3c4-9674-4972-9151-11107b60d622 |
    | image | cirros-0.3.2-x86_64 (b69ab3bd-2bbc-4086-b4ae-f01d9f6b5078) |
    | key_name | - |
    | metadata | {} |
    | name | testvm1 |
    | os-extended-volumes:volumes_attached | [] |
    | progress | 0 |
    | security_groups | default |
    | status | ACTIVE |
    | tenant_id | 26778b0f745143c5a9b0c7e1a621bb80 |
    | updated | 2017-01-13T13:50:39Z |
    | user_id | 7efbea74c20a4eeabc00b7740aa4d353 |
    +--------------------------------------+------------------------------------------------------------+

    In this example, the VM has been given a fixed IP of 10.28.0.13.

  5. Let’s look at the corresponding Neutron port.

    neutron port-list

    It should look something like the following.

    +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------------------+
    | id | name | mac_address | fixed_ips |
    +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------------------+
    | 656b3617-570d-473e-a5dd-90b61cb0c49f | | fa:16:3e:4d:d5:25 | |
    | 9a7e0868-da7a-419e-a7ad-9d37e11091b8 | | fa:16:3e:28:a9:a4 | {"subnet_id": "0a1221f2-e6ed-413d-a040-62a266bd0d8f", "ip_address": "10.28.0.13"} |
    | | | | {"subnet_id": "345fec2e-6493-44de-a489-97b755c16dd4", "ip_address": "fd5f:5d21:845:1c2e:2::d"} |
    | a4b26bcc-ba94-4033-a9fc-edaf151c0c20 | | fa:16:3e:74:46:bd | |
    | a772a5e1-2f13-4fc3-96d5-fa1c29717637 | | fa:16:3e:c9:c6:8f | |
    +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------------------+
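As the deprecation warning in the `nova image-list` output notes, the `nova` client itself is deprecated in newer OpenStack releases. The VM lifecycle commands above can be expressed with the unified `openstack` client roughly as follows (a sketch, not part of this transcript; option names vary somewhat between client versions, so check `openstack help server create`):

```shell
# Unified-client equivalents of the nova commands above (sketch only;
# requires a configured OpenStack environment).
openstack image list
openstack server create --flavor m1.tiny --image cirros-0.3.2-x86_64 \
    --network demo-net testvm1
openstack server list
openstack server show testvm1
```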

Adding a service IP to the Neutron port as an extra fixed IP

Now we want to set up a service IP - let’s say 10.28.0.23 - that initially points to that VM, testvm1.

  1. One way to do that is to add the service IP as a second ‘fixed IP’ on the Neutron port.

    neutron port-update --fixed-ip subnet_id=0a1221f2-e6ed-413d-a040-62a266bd0d8f,ip_address=10.28.0.13 \
        --fixed-ip subnet_id=0a1221f2-e6ed-413d-a040-62a266bd0d8f,ip_address=10.28.0.23 9a7e0868-da7a-419e-a7ad-9d37e11091b8
  2. It should return a confirmation message.

    Updated port: 9a7e0868-da7a-419e-a7ad-9d37e11091b8
  3. Use the following command to get more information about the port.

    neutron port-show 9a7e0868-da7a-419e-a7ad-9d37e11091b8

    It should return a table like the following.

    +-----------------------+-----------------------------------------------------------------------------------+
    | Field | Value |
    +-----------------------+-----------------------------------------------------------------------------------+
    | admin_state_up | True |
    | allowed_address_pairs | |
    | binding:host_id | neil-fv-0-ubuntu-kilo-compute-node01 |
    | binding:profile | {} |
    | binding:vif_details | {"port_filter": true, "mac_address": "00:61:fe:ed:ca:fe"} |
    | binding:vif_type | tap |
    | binding:vnic_type | normal |
    | device_id | b6d8a3c4-9674-4972-9151-11107b60d622 |
    | device_owner | compute:None |
    | extra_dhcp_opts | |
    | fixed_ips | {"subnet_id": "0a1221f2-e6ed-413d-a040-62a266bd0d8f", "ip_address": "10.28.0.13"} |
    | | {"subnet_id": "0a1221f2-e6ed-413d-a040-62a266bd0d8f", "ip_address": "10.28.0.23"} |
    | id | 9a7e0868-da7a-419e-a7ad-9d37e11091b8 |
    | mac_address | fa:16:3e:28:a9:a4 |
    | name | |
    | network_id | 60651076-af2a-4c6d-8d64-500b53a4e547 |
    | security_groups | 75fccd0a-ef3d-44cd-91ec-ef22941f50f5 |
    | status | ACTIVE |
    | tenant_id | 26778b0f745143c5a9b0c7e1a621bb80 |
    +-----------------------+-----------------------------------------------------------------------------------+
  4. Now look at local IP routes.

    ip r

    We see that we have a route to 10.28.0.23.

    default via 10.240.0.1 dev eth0 proto static metric 100
    10.28.0.13 via 192.168.8.3 dev l2tpeth8-1 proto bird
    10.28.0.23 via 192.168.8.3 dev l2tpeth8-1 proto bird
    [...]

    Note that, on the machine where we’re running these commands:

    • BIRD is running, peered with the BIRDs that Calico runs on each compute node. That is what causes VM routes (including 10.28.0.23) to appear here.

    • 192.168.8.3 is the IP of the compute node that is hosting testvm1.

  5. We can also double-check that 10.28.0.23 has appeared as a local device route on the relevant compute node.

    ip r

    It should return something like the following.

    default via 10.240.0.1 dev eth0
    10.28.0.13 dev tap9a7e0868-da scope link
    10.28.0.23 dev tap9a7e0868-da scope link
    10.240.0.1 dev eth0 scope link
    192.168.8.0/24 dev l2tpeth8-3 proto kernel scope link src 192.168.8.3
    192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

    Because with this approach data addressed to 10.28.0.23 reaches the VM without any NAT, we also need to tell the VM itself that it has the extra 10.28.0.23 address.

  6. SSH into the VM.

    core@access-node$ ssh cirros@10.28.0.13
    cirros@10.28.0.13's password:
  7. From inside the VM, issue the following command to list the interfaces.

    ip a

    It should return something like the following.

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
        link/ether fa:16:3e:28:a9:a4 brd ff:ff:ff:ff:ff:ff
        inet 10.28.0.13/16 brd 10.28.255.255 scope global eth0
        inet6 fe80::f816:3eff:fe28:a9a4/64 scope link
           valid_lft forever preferred_lft forever
  8. Next, issue the following command.

    sudo ip a a 10.28.0.23/16 dev eth0
  9. List the interfaces again.

    ip a

    The interfaces should now look more like the following.

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
        link/ether fa:16:3e:28:a9:a4 brd ff:ff:ff:ff:ff:ff
        inet 10.28.0.13/16 brd 10.28.255.255 scope global eth0
        inet 10.28.0.23/16 scope global secondary eth0
        inet6 fe80::f816:3eff:fe28:a9a4/64 scope link
           valid_lft forever preferred_lft forever
  10. Exit the SSH session.

    Connection to 10.28.0.13 closed.
  11. And now we can access the VM on its service IP, as shown below.

    core@access-node$ ssh cirros@10.28.0.23
    The authenticity of host '10.28.0.23 (10.28.0.23)' can't be established.
    RSA key fingerprint is 65:a5:b0:0c:e2:c4:ac:94:2a:0c:64:b8:bc:5a:aa:66.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '10.28.0.23' (RSA) to the list of known hosts.
    cirros@10.28.0.23's password:
    $

    Note that we already have security set up that allows SSH to the instance from our access machine (192.168.8.1).

  12. You can check this by listing the security groups.

    neutron security-group-list

    It should return something like the following.

    +--------------------------------------+---------+----------------------------------------------------------------------+
    | id | name | security_group_rules |
    +--------------------------------------+---------+----------------------------------------------------------------------+
    | 75fccd0a-ef3d-44cd-91ec-ef22941f50f5 | default | egress, IPv4 |
    | | | egress, IPv6 |
    | | | ingress, IPv4, 22/tcp, remote_ip_prefix: 192.168.8.1/32 |
    | | | ingress, IPv4, remote_group_id: 75fccd0a-ef3d-44cd-91ec-ef22941f50f5 |
    | | | ingress, IPv6, remote_group_id: 75fccd0a-ef3d-44cd-91ec-ef22941f50f5 |
    | 903d9936-ce72-4756-a2cc-7c95a846e7e5 | default | egress, IPv4 |
    | | | egress, IPv6 |
    | | | ingress, IPv4, 22/tcp, remote_ip_prefix: 192.168.8.1/32 |
    | | | ingress, IPv4, remote_group_id: 903d9936-ce72-4756-a2cc-7c95a846e7e5 |
    | | | ingress, IPv6, remote_group_id: 903d9936-ce72-4756-a2cc-7c95a846e7e5 |
    +--------------------------------------+---------+----------------------------------------------------------------------+
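The `neutron` CLI used in this section is deprecated in newer OpenStack releases. The port update above can be expressed with the unified `openstack` client roughly as follows (a sketch using the same subnet and port IDs as above; `--no-fixed-ip` clears the existing entries before the `--fixed-ip` options re-add them, but option behaviour varies by client version, so check `openstack help port set`):

```shell
# Unified-client equivalent of the neutron port-update above (sketch only;
# requires a configured OpenStack environment).
openstack port set 9a7e0868-da7a-419e-a7ad-9d37e11091b8 \
    --no-fixed-ip \
    --fixed-ip subnet=0a1221f2-e6ed-413d-a040-62a266bd0d8f,ip-address=10.28.0.13 \
    --fixed-ip subnet=0a1221f2-e6ed-413d-a040-62a266bd0d8f,ip-address=10.28.0.23
```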

Moving the service IP to another VM

Service IPs are often used for high availability, so they need to be movable to a different target VM if the first one fails for some reason (or if the HA system simply decides to cycle the active VM).

  1. To demonstrate this, we create a second test VM.

    nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-name=demo-net testvm2
  2. List the VMs.

    nova list

    You should see the new VM in the list.

    +--------------------------------------+---------+--------+------------+-------------+----------------------------------------------+
    | ID | Name | Status | Task State | Power State | Networks |
    +--------------------------------------+---------+--------+------------+-------------+----------------------------------------------+
    | b6d8a3c4-9674-4972-9151-11107b60d622 | testvm1 | ACTIVE | - | Running | demo-net=10.28.0.13, 10.28.0.23 |
    | bb4ef5e3-dc77-472e-af6f-3f0d8c3e5a6d | testvm2 | ACTIVE | - | Running | demo-net=10.28.0.14, fd5f:5d21:845:1c2e:2::e |
    +--------------------------------------+---------+--------+------------+-------------+----------------------------------------------+
  3. Check the ports.

    neutron port-list

    It should return something like the following.

    +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------------------+
    | id | name | mac_address | fixed_ips |
    +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------------------+
    | 656b3617-570d-473e-a5dd-90b61cb0c49f | | fa:16:3e:4d:d5:25 | |
    | 7627a298-a2db-4a1a-bc07-9f0f10f58363 | | fa:16:3e:8e:dc:33 | {"subnet_id": "0a1221f2-e6ed-413d-a040-62a266bd0d8f", "ip_address": "10.28.0.14"} |
    | | | | {"subnet_id": "345fec2e-6493-44de-a489-97b755c16dd4", "ip_address": "fd5f:5d21:845:1c2e:2::e"} |
    | 9a7e0868-da7a-419e-a7ad-9d37e11091b8 | | fa:16:3e:28:a9:a4 | {"subnet_id": "0a1221f2-e6ed-413d-a040-62a266bd0d8f", "ip_address": "10.28.0.13"} |
    | | | | {"subnet_id": "0a1221f2-e6ed-413d-a040-62a266bd0d8f", "ip_address": "10.28.0.23"} |
    | a4b26bcc-ba94-4033-a9fc-edaf151c0c20 | | fa:16:3e:74:46:bd | |
    | a772a5e1-2f13-4fc3-96d5-fa1c29717637 | | fa:16:3e:c9:c6:8f | |
    +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------------------+
  4. Remove the service IP from the first VM.

    neutron port-update --fixed-ip subnet_id=0a1221f2-e6ed-413d-a040-62a266bd0d8f,ip_address=10.28.0.13 9a7e0868-da7a-419e-a7ad-9d37e11091b8
  5. And add it to the second.

    neutron port-update --fixed-ip subnet_id=0a1221f2-e6ed-413d-a040-62a266bd0d8f,ip_address=10.28.0.14 \
        --fixed-ip subnet_id=0a1221f2-e6ed-413d-a040-62a266bd0d8f,ip_address=10.28.0.23 7627a298-a2db-4a1a-bc07-9f0f10f58363
  6. SSH into testvm2.

    core@access-node$ ssh cirros@10.28.0.14
    The authenticity of host '10.28.0.14 (10.28.0.14)' can't be established.
    RSA key fingerprint is 6a:02:7f:3a:bf:0c:91:de:c4:d6:e7:f6:81:3f:6a:85.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '10.28.0.14' (RSA) to the list of known hosts.
    cirros@10.28.0.14's password:
  7. Tell testvm2 that it now has the service IP 10.28.0.23.

    sudo ip a a 10.28.0.23/16 dev eth0
  8. Now connections to 10.28.0.23 go to testvm2.

    core@access-node$ ssh cirros@10.28.0.23
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that a host key has just been changed.
    The fingerprint for the RSA key sent by the remote host is
    6a:02:7f:3a:bf:0c:91:de:c4:d6:e7:f6:81:3f:6a:85.
    Please contact your system administrator.
    Add correct host key in /home/core/.ssh/known_hosts to get rid of this message.
    Offending RSA key in /home/core/.ssh/known_hosts:4
    RSA host key for 10.28.0.23 has changed and you have requested strict checking.
    Host key verification failed.
  9. Remove the known_hosts file.

    rm ~/.ssh/known_hosts
  10. Try again to SSH into the VM.

    core@access-node$ ssh cirros@10.28.0.23
    The authenticity of host '10.28.0.23 (10.28.0.23)' can't be established.
    RSA key fingerprint is 6a:02:7f:3a:bf:0c:91:de:c4:d6:e7:f6:81:3f:6a:85.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '10.28.0.23' (RSA) to the list of known hosts.
    cirros@10.28.0.23's password:
  11. Check the host name.

    hostname

    It should return:

    testvm2
  12. Check the interfaces.

    ip a

    They should look something like the following.

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
        link/ether fa:16:3e:8e:dc:33 brd ff:ff:ff:ff:ff:ff
        inet 10.28.0.14/16 brd 10.28.255.255 scope global eth0
        inet 10.28.0.23/16 scope global secondary eth0
        inet6 fe80::f816:3eff:fe8e:dc33/64 scope link
           valid_lft forever preferred_lft forever
    $
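The manual failover sequence above (remove the extra fixed IP from one port, add it to the other, then update the VM's own interface) can be sketched as a small script. This is an illustrative example, not part of Calico: the subnet and port IDs are the ones from this walkthrough, and the script defaults to a dry run that only prints the `neutron` commands it would issue.

```shell
#!/bin/sh
# Sketch: move a service IP from one Neutron port to another.
# All IDs and addresses below are the ones used in this walkthrough.
SUBNET=0a1221f2-e6ed-413d-a040-62a266bd0d8f
SERVICE_IP=10.28.0.23
OLD_PORT=9a7e0868-da7a-419e-a7ad-9d37e11091b8
OLD_FIXED=10.28.0.13
NEW_PORT=7627a298-a2db-4a1a-bc07-9f0f10f58363
NEW_FIXED=10.28.0.14

# Default to a dry run, so this sketch is safe to execute as-is;
# set DRY_RUN=0 to really issue the neutron commands.
DRY_RUN=${DRY_RUN:-1}

run() {
    # In dry-run mode, print the command instead of executing it.
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Drop the service IP from the old port, leaving only its own fixed IP...
run neutron port-update \
    --fixed-ip subnet_id=$SUBNET,ip_address=$OLD_FIXED $OLD_PORT
# ...and add it as a second fixed IP on the new port.
run neutron port-update \
    --fixed-ip subnet_id=$SUBNET,ip_address=$NEW_FIXED \
    --fixed-ip subnet_id=$SUBNET,ip_address=$SERVICE_IP $NEW_PORT
# The new VM must still be told about the address itself, e.g.:
#   ssh cirros@$NEW_FIXED "sudo ip a a $SERVICE_IP/16 dev eth0"
```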