Kube-OVN provides a kubectl plugin to help diagnose container networking. With it you can tcpdump the traffic of a specific pod, trace a packet through the OVN logical network, or query the ovn-nb/ovn-sb databases.

Prerequisite

To use kubectl plugins, kubectl 1.12 or later is recommended. Run `kubectl version` to check your client version.
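For scripted setups, the version check can be sketched with a small shell helper. The `version_ge` function and the `kubectl version --client --short` parsing below are illustrative, not part of kubectl-ko:

```shell
#!/bin/sh
# Sketch: verify the local kubectl client is new enough for plugin support.
# version_ge A B succeeds when dot-separated version A >= version B.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Hypothetical usage on a machine with kubectl installed:
# client=$(kubectl version --client --short | sed 's/.*v//')
# version_ge "$client" 1.12.0 || echo "kubectl too old for plugins" >&2
version_ge 1.16.2 1.12.0 && echo "plugin support available"
# prints "plugin support available"
```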

Install

  1. Get the kubectl-ko file

     ```bash
     wget https://raw.githubusercontent.com/alauda/kube-ovn/master/dist/images/kubectl-ko
     ```

  2. Move the file to a directory in `$PATH`

     ```bash
     mv kubectl-ko /usr/local/bin/kubectl-ko
     ```

  3. Make kubectl-ko executable

     ```bash
     chmod +x /usr/local/bin/kubectl-ko
     ```

  4. Check that the plugin is ready

     ```bash
     [root@kube-ovn01 ~]# kubectl plugin list
     The following compatible plugins are available:

     /usr/local/bin/kubectl-ko
     ```
Usage

```bash
kubectl ko {subcommand} [option...]

Available Subcommands:
  nbctl [ovn-nbctl options ...]                       invoke ovn-nbctl
  sbctl [ovn-sbctl options ...]                       invoke ovn-sbctl
  vsctl {nodeName} [ovs-vsctl options ...]            invoke ovs-vsctl on the selected node
  tcpdump {namespace/podname} [tcpdump options ...]   capture pod traffic
  trace {namespace/podname} {target ip address} {icmp|tcp|udp} [target tcp or udp port]
  diagnose {all|node} [nodename]                      diagnose connectivity of all nodes or a specific node
```
Show ovn-sb overview

```bash
[root@node2 ~]# kubectl ko sbctl show
Chassis "36f129a9-276f-4d96-964b-7d3703001b81"
    hostname: "node1.cluster.local"
    Encap geneve
        ip: "10.0.129.96"
        options: {csum="true"}
    Port_Binding "tiller-deploy-849b7c6496-5l9r6.kube-system"
    Port_Binding "kube-ovn-pinger-5mq4g.kube-ovn"
    Port_Binding "nginx-6b4b85b77b-rk9tq.acl"
    Port_Binding "node-node1"
    Port_Binding "piquant-magpie-nginx-ingress-default-backend-84776f949b-jthhh.kube-system"
    Port_Binding "ds1-l6n7p.default"
Chassis "9ced77f4-dae4-4e0b-b3fe-15dd82104e67"
    hostname: "node2.cluster.local"
    Encap geneve
        ip: "10.0.128.15"
        options: {csum="true"}
    Port_Binding "ds1-wqpdz.default"
    Port_Binding "node-node2"
    Port_Binding "kube-ovn-pinger-8xhhv.kube-ovn"
Chassis "dc922a96-97d4-418d-a45f-8989d2b6dc91"
    hostname: "node3.cluster.local"
    Encap geneve
        ip: "10.0.128.35"
        options: {csum="true"}
    Port_Binding "ds1-dflpx.default"
    Port_Binding "coredns-585c7897d4-59xkc.kube-system"
    Port_Binding "node-node3"
    Port_Binding "kube-ovn-pinger-gc8l6.kube-ovn"
    Port_Binding "coredns-585c7897d4-7dglw.kube-system"
```
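Because `sbctl show` prints plain text, it is easy to post-process with ordinary shell tools. For example, a small helper (illustrative, not part of kubectl-ko) that finds which node hosts a given pod port:

```shell
#!/bin/sh
# Sketch: map a Port_Binding name to its chassis hostname in `sbctl show` output.
# chassis_of PORT reads the `kubectl ko sbctl show` output on stdin.
chassis_of() {
    awk -v port="\"$1\"" '
        $1 == "hostname:"                   { host = $2 }   # remember current chassis
        $1 == "Port_Binding" && $2 == port  {                # port bound under it
            gsub(/"/, "", host); print host
        }'
}

# Example with output in the shape shown above (normally piped from
# `kubectl ko sbctl show`):
printf 'Chassis "36f1"\n    hostname: "node1.cluster.local"\n    Port_Binding "ds1-l6n7p.default"\n' \
    | chassis_of ds1-l6n7p.default
# prints "node1.cluster.local"
```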
Dump pod ICMP traffic

```bash
[root@node2 ~]# kubectl ko tcpdump default/ds1-l6n7p icmp
+ kubectl exec -it kube-ovn-cni-wlg4s -n kube-ovn -- tcpdump -nn -i d7176fe7b4e0_h icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on d7176fe7b4e0_h, link-type EN10MB (Ethernet), capture size 262144 bytes
06:52:36.619688 IP 100.64.0.3 > 10.16.0.4: ICMP echo request, id 2, seq 1, length 64
06:52:36.619746 IP 10.16.0.4 > 100.64.0.3: ICMP echo reply, id 2, seq 1, length 64
06:52:37.619588 IP 100.64.0.3 > 10.16.0.4: ICMP echo request, id 2, seq 2, length 64
06:52:37.619630 IP 10.16.0.4 > 100.64.0.3: ICMP echo reply, id 2, seq 2, length 64
06:52:38.619933 IP 100.64.0.3 > 10.16.0.4: ICMP echo request, id 2, seq 3, length 64
06:52:38.619973 IP 10.16.0.4 > 100.64.0.3: ICMP echo reply, id 2, seq 3, length 64
```
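As the `kubectl exec` line in the output shows, the plugin takes a `{namespace/podname}` target, finds the kube-ovn-cni pod on the pod's node, and runs tcpdump on the pod's host-side veth interface. The argument handling can be sketched as follows (the helper name is illustrative):

```shell
#!/bin/sh
# Sketch: split the {namespace/podname} argument the way the plugin expects.
split_target() {
    ns=${1%%/*}    # text before the first "/"
    pod=${1#*/}    # text after the first "/"
}

split_target default/ds1-l6n7p
echo "namespace=$ns pod=$pod"
# prints "namespace=default pod=ds1-l6n7p"

# Hypothetical follow-up on a live cluster: locate the node hosting the pod,
# then exec tcpdump in the kube-ovn-cni daemonset pod on that node, e.g.
# node=$(kubectl -n "$ns" get pod "$pod" -o jsonpath='{.spec.nodeName}')
```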
Show ovn flow from a pod to a destination

```bash
[root@node2 ~]# kubectl ko trace default/ds1-l6n7p 8.8.8.8 icmp
+ kubectl exec ovn-central-5bc494cb5-np9hm -n kube-ovn -- ovn-trace --ct=new ovn-default 'inport == "ds1-l6n7p.default" && ip.ttl == 64 && icmp && eth.src == 0a:00:00:10:00:05 && ip4.src == 10.16.0.4 && eth.dst == 00:00:00:B8:CA:43 && ip4.dst == 8.8.8.8'
# icmp,reg14=0xf,vlan_tci=0x0000,dl_src=0a:00:00:10:00:05,dl_dst=00:00:00:b8:ca:43,nw_src=10.16.0.4,nw_dst=8.8.8.8,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=0,icmp_code=0

ingress(dp="ovn-default", inport="ds1-l6n7p.default")
-----------------------------------------------------
 0. ls_in_port_sec_l2 (ovn-northd.c:4143): inport == "ds1-l6n7p.default" && eth.src == {0a:00:00:10:00:05}, priority 50, uuid 39453393
    next;
 1. ls_in_port_sec_ip (ovn-northd.c:2898): inport == "ds1-l6n7p.default" && eth.src == 0a:00:00:10:00:05 && ip4.src == {10.16.0.4}, priority 90, uuid 81bcd485
    next;
 3. ls_in_pre_acl (ovn-northd.c:3269): ip, priority 100, uuid 7b4f4971
    reg0[0] = 1;
    next;
 5. ls_in_pre_stateful (ovn-northd.c:3396): reg0[0] == 1, priority 100, uuid 36cdd577
    ct_next;

ct_next(ct_state=new|trk)
-------------------------
 6. ls_in_acl (ovn-northd.c:3759): ip && (!ct.est || (ct.est && ct_label.blocked == 1)), priority 1, uuid 7608af5b
    reg0[1] = 1;
    next;
10. ls_in_stateful (ovn-northd.c:3995): reg0[1] == 1, priority 100, uuid 2aba1b90
    ct_commit(ct_label=0/0x1);
    next;
16. ls_in_l2_lkup (ovn-northd.c:4470): eth.dst == 00:00:00:b8:ca:43, priority 50, uuid 5c9c3c9f
    outport = "ovn-default-ovn-cluster";
    output;
....Skip More....
```
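As the first `kubectl exec` line shows, `kubectl ko trace` assembles an ovn-trace microflow match from the pod's port, MAC, and IP plus the target address and protocol. Composing that match string can be sketched like this (a simplification of what the plugin does internally; the helper name is illustrative):

```shell
#!/bin/sh
# Sketch: build the ovn-trace microflow match used in the trace above.
build_match() {   # $1 inport  $2 proto  $3 src-mac  $4 src-ip  $5 dst-mac  $6 dst-ip
    printf 'inport == "%s" && ip.ttl == 64 && %s && eth.src == %s && ip4.src == %s && eth.dst == %s && ip4.dst == %s' \
        "$1" "$2" "$3" "$4" "$5" "$6"
}

match=$(build_match ds1-l6n7p.default icmp 0a:00:00:10:00:05 10.16.0.4 00:00:00:B8:CA:43 8.8.8.8)
echo "$match"

# The plugin then runs the expression inside the ovn-central pod, roughly:
# kubectl exec $OVN_CENTRAL_POD -n kube-ovn -- ovn-trace --ct=new ovn-default "$match"
```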
Diagnose network connectivity

```bash
[root@node2 ~]# kubectl ko diagnose all
### start to diagnose node node1
I1008 07:04:40.475604 26434 ping.go:139] ovs-vswitchd and ovsdb are up
I1008 07:04:40.570824 26434 ping.go:151] ovn_controller is up
I1008 07:04:40.570859 26434 ping.go:35] start to check node connectivity
I1008 07:04:44.586096 26434 ping.go:57] ping node: node1 10.0.129.96, count: 5, loss rate 0.00%, average rtt 0.23ms
I1008 07:04:44.592764 26434 ping.go:57] ping node: node3 10.0.128.35, count: 5, loss rate 0.00%, average rtt 0.63ms
I1008 07:04:44.592791 26434 ping.go:57] ping node: node2 10.0.128.15, count: 5, loss rate 0.00%, average rtt 0.54ms
I1008 07:04:44.592889 26434 ping.go:74] start to check pod connectivity
I1008 07:04:48.669057 26434 ping.go:101] ping pod: kube-ovn-pinger-5mq4g 10.16.0.12, count: 5, loss rate 0.00, average rtt 0.18ms
I1008 07:04:48.769217 26434 ping.go:101] ping pod: kube-ovn-pinger-8xhhv 10.16.0.10, count: 5, loss rate 0.00, average rtt 0.64ms
I1008 07:04:48.769219 26434 ping.go:101] ping pod: kube-ovn-pinger-gc8l6 10.16.0.13, count: 5, loss rate 0.00, average rtt 0.73ms
I1008 07:04:48.769325 26434 ping.go:119] start to check dns connectivity
I1008 07:04:48.777062 26434 ping.go:129] resolve dns kubernetes.default.svc.cluster.local to [10.96.0.1] in 7.71ms
### finish diagnose node node1
### start to diagnose node node2
I1008 07:04:49.231462 16925 ping.go:139] ovs-vswitchd and ovsdb are up
I1008 07:04:49.241636 16925 ping.go:151] ovn_controller is up
I1008 07:04:49.241694 16925 ping.go:35] start to check node connectivity
I1008 07:04:53.254327 16925 ping.go:57] ping node: node2 10.0.128.15, count: 5, loss rate 0.00%, average rtt 0.16ms
I1008 07:04:53.354411 16925 ping.go:57] ping node: node1 10.0.129.96, count: 5, loss rate 0.00%, average rtt 15.65ms
I1008 07:04:53.354464 16925 ping.go:57] ping node: node3 10.0.128.35, count: 5, loss rate 0.00%, average rtt 15.71ms
I1008 07:04:53.354492 16925 ping.go:74] start to check pod connectivity
I1008 07:04:57.382791 16925 ping.go:101] ping pod: kube-ovn-pinger-8xhhv 10.16.0.10, count: 5, loss rate 0.00, average rtt 0.16ms
I1008 07:04:57.483725 16925 ping.go:101] ping pod: kube-ovn-pinger-5mq4g 10.16.0.12, count: 5, loss rate 0.00, average rtt 1.74ms
I1008 07:04:57.483750 16925 ping.go:101] ping pod: kube-ovn-pinger-gc8l6 10.16.0.13, count: 5, loss rate 0.00, average rtt 1.81ms
I1008 07:04:57.483813 16925 ping.go:119] start to check dns connectivity
I1008 07:04:57.490402 16925 ping.go:129] resolve dns kubernetes.default.svc.cluster.local to [10.96.0.1] in 6.56ms
### finish diagnose node node2
### start to diagnose node node3
I1008 07:04:58.094738 21692 ping.go:139] ovs-vswitchd and ovsdb are up
I1008 07:04:58.176064 21692 ping.go:151] ovn_controller is up
I1008 07:04:58.176096 21692 ping.go:35] start to check node connectivity
I1008 07:05:02.193091 21692 ping.go:57] ping node: node3 10.0.128.35, count: 5, loss rate 0.00%, average rtt 0.21ms
I1008 07:05:02.293256 21692 ping.go:57] ping node: node2 10.0.128.15, count: 5, loss rate 0.00%, average rtt 0.58ms
I1008 07:05:02.293256 21692 ping.go:57] ping node: node1 10.0.129.96, count: 5, loss rate 0.00%, average rtt 0.68ms
I1008 07:05:02.293368 21692 ping.go:74] start to check pod connectivity
I1008 07:05:06.314977 21692 ping.go:101] ping pod: kube-ovn-pinger-gc8l6 10.16.0.13, count: 5, loss rate 0.00, average rtt 0.37ms
I1008 07:05:06.415222 21692 ping.go:101] ping pod: kube-ovn-pinger-5mq4g 10.16.0.12, count: 5, loss rate 0.00, average rtt 0.82ms
I1008 07:05:06.415317 21692 ping.go:101] ping pod: kube-ovn-pinger-8xhhv 10.16.0.10, count: 5, loss rate 0.00, average rtt 0.64ms
I1008 07:05:06.415354 21692 ping.go:119] start to check dns connectivity
I1008 07:05:06.420595 21692 ping.go:129] resolve dns kubernetes.default.svc.cluster.local to [10.96.0.1] in 5.21ms
### finish diagnose node node3
```
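The diagnose output is plain text, so it is easy to post-process in scripts, for example to alert on packet loss. A small extraction helper (illustrative, not part of kubectl-ko) might look like:

```shell
#!/bin/sh
# Sketch: extract the loss-rate percentage from a pinger log line.
loss_rate() {
    printf '%s\n' "$1" | sed -n 's/.*loss rate \([0-9.]*\).*/\1/p'
}

line='I1008 07:04:44.586096 26434 ping.go:57] ping node: node1 10.0.129.96, count: 5, loss rate 0.00%, average rtt 0.23ms'
loss_rate "$line"
# prints "0.00"

# Hypothetical use against a live cluster:
# kubectl ko diagnose all | grep 'loss rate' | while read -r l; do
#     [ "$(loss_rate "$l")" = "0.00" ] || echo "LOSS: $l"
# done
```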