Test networking

In this lab we will test the Calico cluster to demonstrate that networking is working correctly.

Pod to pod pings

Create three busybox instances

  kubectl create deployment pingtest --image=busybox --replicas=3 -- sleep infinity
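
The pods may take a few seconds to come up. If you want to wait for them before continuing, one option (using the app=pingtest label that the command above applies) is:

  kubectl wait --for=condition=Ready pods --selector=app=pingtest --timeout=60s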

Check their IP addresses

  kubectl get pods --selector=app=pingtest --output=wide

Result

  NAME                      READY   STATUS    RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES
  pingtest-b4b6f8cf-b5z78   1/1     Running   0          3m28s   192.168.38.128   ip-172-31-37-123   <none>           <none>
  pingtest-b4b6f8cf-jmzq6   1/1     Running   0          3m28s   192.168.45.193   ip-172-31-40-217   <none>           <none>
  pingtest-b4b6f8cf-rn9nm   1/1     Running   0          3m28s   192.168.60.64    ip-172-31-45-29    <none>           <none>
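
If you would rather not copy the IP addresses by hand, a small optional sketch (assuming a bash-like shell on the machine where you run kubectl) prints just the pod IPs using jsonpath output:

  kubectl get pods --selector=app=pingtest --output=jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}'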

Note the IP addresses of the second two pods, then exec into the first one. For example

  kubectl exec -ti pingtest-b4b6f8cf-b5z78 -- sh

From inside the pod, ping the other two pod IP addresses. For example

  ping 192.168.45.193 -c 4

Result

  PING 192.168.45.193 (192.168.45.193): 56 data bytes
  64 bytes from 192.168.45.193: seq=0 ttl=62 time=1.847 ms
  64 bytes from 192.168.45.193: seq=1 ttl=62 time=0.684 ms
  64 bytes from 192.168.45.193: seq=2 ttl=62 time=0.488 ms
  64 bytes from 192.168.45.193: seq=3 ttl=62 time=0.442 ms

  --- 192.168.45.193 ping statistics ---
  4 packets transmitted, 4 packets received, 0% packet loss
  round-trip min/avg/max = 0.442/0.865/1.847 ms
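
You can also run the same check non-interactively by passing ping straight to kubectl exec. For example, reusing the pod name and IP address from the output above (substitute your own):

  kubectl exec pingtest-b4b6f8cf-b5z78 -- ping 192.168.45.193 -c 4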

Check routes

From one of the nodes, verify that routes exist to each of the pingtest pods’ IP addresses. For example

  ip route get 192.168.38.128

Result

  192.168.38.128 via 172.31.37.123 dev eth0 src 172.31.42.47 uid 1000
      cache

The "via 172.31.37.123" in this example indicates the next hop for this pod IP address, which matches the IP address of the node the pod is scheduled on, as expected.
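
To see all of the pod routes a node knows about at once, a quick sketch (assuming the 192.168.x pod CIDRs used in this lab) is to filter the routing table. On a BGP-based Calico install you would typically see one route per remote allocation block via the owning node, plus per-pod routes for pods hosted locally.

  ip route | grep 192.168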

IPAM allocations from different pools

Recall that we created two IP pools, but left one disabled.

  calicoctl get ippools -o wide

Result

  NAME    CIDR               NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR
  pool1   192.168.0.0/18     true   Never      Never       false      all()
  pool2   192.168.192.0/19   true   Never      Never       true       all()
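
If you also want to see how many addresses are in use from each pool, calicoctl can report IPAM utilization (depending on your calicoctl version, adding --show-blocks lists the individual allocation blocks as well):

  calicoctl ipam show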

Enable the second pool.

  calicoctl apply -f - <<EOF
  apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: pool2
  spec:
    cidr: 192.168.192.0/19
    ipipMode: Never
    natOutgoing: true
    disabled: false
    nodeSelector: all()
  EOF
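
Re-running the earlier command should now show DISABLED as false for pool2:

  calicoctl get ippools -o wide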

Create a pod, explicitly requesting an address from pool2

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: pingtest-pool2
    annotations:
      cni.projectcalico.org/ipv4pools: "[\"pool2\"]"
  spec:
    containers:
    - args:
      - sleep
      - infinity
      image: busybox
      imagePullPolicy: Always
      name: pingtest
  EOF

Verify it has an IP address from pool2

  kubectl get pod pingtest-pool2 -o wide

Result

  NAME             READY   STATUS    RESTARTS   AGE   IP              NODE              NOMINATED NODE   READINESS GATES
  pingtest-pool2   1/1     Running   0          75s   192.168.219.0   ip-172-31-45-29   <none>           <none>
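
The assigned IP (192.168.219.0 in this example) falls inside pool2's 192.168.192.0/19 CIDR, confirming the annotation was honored. If you want to cross-check the allocation through Calico IPAM as well, a sketch (assuming your calicoctl version supports the --ip flag) is:

  calicoctl ipam show --ip=192.168.219.0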

From one of the original pingtest pods, ping the new pod's IP address. For example

  ping 192.168.219.0 -c 4

Result

  PING 192.168.219.0 (192.168.219.0): 56 data bytes
  64 bytes from 192.168.219.0: seq=0 ttl=62 time=0.524 ms
  64 bytes from 192.168.219.0: seq=1 ttl=62 time=0.459 ms
  64 bytes from 192.168.219.0: seq=2 ttl=62 time=0.505 ms
  64 bytes from 192.168.219.0: seq=3 ttl=62 time=0.492 ms

  --- 192.168.219.0 ping statistics ---
  4 packets transmitted, 4 packets received, 0% packet loss
  round-trip min/avg/max = 0.459/0.495/0.524 ms

Clean up

  kubectl delete deployments.apps pingtest
  kubectl delete pod pingtest-pool2
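
Optionally, if you want to leave the cluster as you found it, you can also return pool2 to its disabled state (skip this if a later lab expects pool2 to remain enabled). One way is a sketch using calicoctl patch:

  calicoctl patch ippool pool2 --patch '{"spec": {"disabled": true}}'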

Next

Test network policy