Antctl

Antctl is the command-line tool for Antrea. At the moment, antctl supports running in two different modes:

* “controller mode”: when run out-of-cluster or from within the Antrea Controller Pod, antctl can connect to the Antrea Controller and query information from it (e.g. the set of computed NetworkPolicies).
* “agent mode”: when run from within an Antrea Agent Pod, antctl can connect to the Antrea Agent and query information local to that Agent (e.g. the set of computed NetworkPolicies received by that Agent from the Antrea Controller, as opposed to the entire set of computed policies).

Installation

The antctl binary is included in the Antrea Docker image (antrea/antrea-ubuntu), which means that there is no need to install anything to connect to the Antrea Agent. Simply exec into the antrea-agent container for the appropriate antrea-agent Pod and run antctl:

  kubectl exec -it <antrea-agent Pod name> -n kube-system -c antrea-agent bash
  antctl help

Starting with Antrea release v0.5.0, we publish the antctl binaries for different OS / CPU Architecture combinations. Head to the releases page and download the appropriate one for your machine. For example:

On Mac & Linux:

  curl -Lo ./antctl "https://github.com/vmware-tanzu/antrea/releases/download/v0.8.2/antctl-$(uname)-x86_64"
  chmod +x ./antctl
  mv ./antctl /some-dir-in-your-PATH/antctl
  antctl version

For Linux, we also publish binaries for Arm-based systems.
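
As an illustration, the steps for a 64-bit Arm Linux machine are the same as above apart from the asset name; the asset name used below is an assumption, so check the release assets for your Antrea version on the releases page:

  # The asset name below is assumed; verify the exact name on the releases page.
  curl -Lo ./antctl "https://github.com/vmware-tanzu/antrea/releases/download/v0.8.2/antctl-Linux-arm64"
  chmod +x ./antctl
  mv ./antctl /some-dir-in-your-PATH/antctl
  antctl version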

On Windows, using PowerShell:

  Invoke-WebRequest -Uri https://github.com/vmware-tanzu/antrea/releases/download/v0.8.2/antctl-windows-x86_64.exe -Outfile antctl.exe
  Move-Item .\antctl.exe c:\some-dir-in-your-PATH\antctl.exe
  antctl version

Usage

To see the list of available commands and options, run antctl help. The list will be different based on whether you are connecting to the Antrea Controller or Agent.

When running out-of-cluster (“controller mode” only), antctl will look for your kubeconfig file at $HOME/.kube/config by default. You can select a different one by setting the KUBECONFIG environment variable or with --kubeconfig (the latter taking precedence over the former).
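
For example, assuming the kubeconfig for the target cluster lives at a non-default path (the path below is only illustrative), either of the following works:

  # Select the kubeconfig through the environment variable...
  export KUBECONFIG=$HOME/.kube/other-cluster-config
  antctl get controllerinfo
  # ...or through the flag, which takes precedence over KUBECONFIG
  antctl --kubeconfig $HOME/.kube/other-cluster-config get controllerinfo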

The following sub-sections introduce a few commands which are useful for troubleshooting the Antrea system.

Collecting support information

Starting with version 0.7.0, Antrea supports the antctl supportbundle command, which can collect information from the cluster, the Antrea Controller and all Antrea Agents. This information is useful when trying to troubleshoot issues in Kubernetes clusters using Antrea. In particular, when running the command out-of-cluster, all the information can be collected under a single directory, which you can upload and share when reporting issues on GitHub. Simply run the command as follows:

  antctl supportbundle [-d <TARGET_DIR>]

If you do not provide a directory, antctl will create one in the current working directory, using the current timestamp as a suffix. The command also provides additional flags to filter the results: run antctl supportbundle --help for the full list.

The collected support bundle will include the following (more information may be included over time):

* cluster information: description of the different K8s resources in the cluster (Nodes, Deployments, etc.).
* Antrea Controller information: all the available logs (contents will vary based on the verbosity selected when running the controller) and state stored at the controller (e.g. computed NetworkPolicy objects).
* Antrea Agent information: all the available logs from the agent and the OVS daemons, network configuration of the Node (e.g. routes, iptables rules, OVS flows) and state stored at the agent (e.g. computed NetworkPolicy objects received from the controller).

Be aware that the generated support bundle includes a lot of information, including logs, so please review the contents of the directory before sharing it on GitHub and ensure that you do not share anything sensitive.

The antctl supportbundle command can also be run inside a Controller or Agent Pod, in which case only local information will be collected.
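
For example, to collect just the Agent-local information from a given antrea-agent Pod, you can exec into it (as shown in the Installation section) and run the command there:

  kubectl exec -it <antrea-agent Pod name> -n kube-system -c antrea-agent bash
  antctl supportbundle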

controllerinfo and agentinfo commands

The controller command antctl get controllerinfo (or get ci) and the agent command antctl get agentinfo (or get ai) print the runtime information of antrea-controller and antrea-agent respectively.

  antctl get controllerinfo
  antctl get agentinfo

NetworkPolicy commands

Both Antrea Controller and Agent support querying NetworkPolicy objects.

- The antctl get networkpolicy (or get netpol) command can print all NetworkPolicies, a specified NetworkPolicy, or NetworkPolicies in a specified Namespace.
- The get appliedtogroup (or get atg) command can print all NetworkPolicy AppliedToGroups (an AppliedToGroup includes the Pods to which a NetworkPolicy is applied), or a specified AppliedToGroup.
- The get addressgroup (or get ag) command can print all NetworkPolicy AddressGroups (an AddressGroup defines the source or destination addresses of NetworkPolicy rules), or a specified AddressGroup.

Using the json or yaml antctl output format prints more information about a NetworkPolicy, AppliedToGroup, or AddressGroup than the default table output format.

  antctl get networkpolicy [name] [-n namespace] [-o yaml]
  antctl get appliedtogroup [name] [-o yaml]
  antctl get addressgroup [name] [-o yaml]

The Antrea Agent additionally supports printing NetworkPolicies applied to a specified local Pod using this antctl command:

  antctl get networkpolicy -p pod -n namespace

Dumping Pod network interface information

The antctl agent command get podinterface (or get pi) can dump network interface information of all local Pods, a specified local Pod, local Pods in a specified Namespace, or local Pods matching a specified Pod name.

  antctl get podinterface [name] [-n namespace]

Dumping OVS flows

Starting from version 0.6.0, the Antrea Agent supports dumping Antrea OVS flows. The antctl get ovsflows (or get of) command can dump all OVS flows, flows added for a specified Pod, flows added to realize a specified NetworkPolicy, or flows in a specified OVS flow table.

  antctl get ovsflows
  antctl get ovsflows -p pod -n namespace
  antctl get ovsflows --networkpolicy networkpolicy -n namespace
  antctl get ovsflows -T table

An OVS flow table can be specified using the table name or the table number. antctl get ovsflows --help lists all Antrea flow tables. For more information about the Antrea OVS pipeline and flows, please refer to the OVS pipeline doc.
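
For instance, to dump only the flows in table 90 (the table that holds the NetworkPolicy flows in the examples below), you can pass the table number to -T; passing the corresponding table name works the same way (run antctl get ovsflows --help for the exact names):

  # Dump the flows of a single table, selected here by table number
  antctl get ovsflows -T 90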

Example outputs of dumping Pod and NetworkPolicy OVS flows:

  # Dump OVS flows of Pod "coredns-6955765f44-zcbwj"
  $ antctl get of -p coredns-6955765f44-zcbwj -n kube-system
  FLOW
  table=classification, n_packets=513122, n_bytes=42615080, priority=190,in_port="coredns--d0c58e" actions=load:0x2->NXM_NX_REG0[0..15],resubmit(,10)
  table=10, n_packets=513122, n_bytes=42615080, priority=200,ip,in_port="coredns--d0c58e",dl_src=52:bd:c6:e0:eb:c1,nw_src=172.100.1.7 actions=resubmit(,30)
  table=10, n_packets=0, n_bytes=0, priority=200,arp,in_port="coredns--d0c58e",arp_spa=172.100.1.7,arp_sha=52:bd:c6:e0:eb:c1 actions=resubmit(,20)
  table=80, n_packets=556468, n_bytes=166477824, priority=200,dl_dst=52:bd:c6:e0:eb:c1 actions=load:0x5->NXM_NX_REG1[],load:0x1->NXM_NX_REG0[16],resubmit(,90)
  table=70, n_packets=0, n_bytes=0, priority=200,ip,dl_dst=aa:bb:cc:dd:ee:ff,nw_dst=172.100.1.7 actions=set_field:62:39:b4:e8:05:76->eth_src,set_field:52:bd:c6:e0:eb:c1->eth_dst,dec_ttl,resubmit(,80)

  # Get NetworkPolicies applied to Pod "coredns-6955765f44-zcbwj"
  $ antctl get netpol -p coredns-6955765f44-zcbwj -n kube-system
  NAMESPACE    NAME      APPLIED-TO                            RULES
  kube-system  kube-dns  160ea6d7-0234-5d1d-8ea0-b703d0aa3b46  1

  # Dump OVS flows of NetworkPolicy "kube-dns"
  $ antctl get of --networkpolicy kube-dns -n kube-system
  FLOW
  table=90, n_packets=0, n_bytes=0, priority=190,conj_id=1,ip actions=resubmit(,105)
  table=90, n_packets=0, n_bytes=0, priority=200,ip actions=conjunction(1,1/3)
  table=90, n_packets=0, n_bytes=0, priority=200,ip,reg1=0x5 actions=conjunction(2,2/3),conjunction(1,2/3)
  table=90, n_packets=0, n_bytes=0, priority=200,udp,tp_dst=53 actions=conjunction(1,3/3)
  table=90, n_packets=0, n_bytes=0, priority=200,tcp,tp_dst=53 actions=conjunction(1,3/3)
  table=90, n_packets=0, n_bytes=0, priority=200,tcp,tp_dst=9153 actions=conjunction(1,3/3)
  table=100, n_packets=0, n_bytes=0, priority=200,ip,reg1=0x5 actions=drop

OVS packet tracing

Starting from version 0.7.0, the Antrea Agent supports tracing the OVS flows that a specified packet traverses, leveraging the OVS packet tracing tool.

The antctl trace-packet command starts a packet tracing operation. antctl help trace-packet shows the usage of the command. This section lists a few trace-packet command examples.

  # Trace an IP packet between two Pods
  antctl trace-packet -S ns1/pod1 -D ns2/pod2
  # Trace a Service request from a local Pod
  antctl trace-packet -S ns1/pod1 -D ns2/srv2 -f "tcp,tcp_dst=80"
  # Trace the Service reply packet (assuming "ns2/pod2" is the Service backend Pod)
  antctl trace-packet -D ns1/pod1 -S ns2/pod2 -f "tcp,tcp_src=80"
  # Trace an IP packet from a Pod to the gateway port
  antctl trace-packet -S ns1/pod1 -D antrea-gw0
  # Trace a UDP packet from a Pod to an IP address
  antctl trace-packet -S ns1/pod1 -D 10.1.2.3 -f udp,udp_dst=1234
  # Trace a UDP packet from an IP address to a Pod
  antctl trace-packet -D ns1/pod1 -S 10.1.2.3 -f udp,udp_src=1234
  # Trace an ARP request from a local Pod
  antctl trace-packet -p ns1/pod1 -f arp,arp_spa=10.1.2.3,arp_sha=00:11:22:33:44:55,arp_tpa=10.1.2.1,dl_dst=ff:ff:ff:ff:ff:ff

Example output of tracing a UDP (DNS request) packet from a remote Pod to a local (coredns) Pod:

  $ antctl trace-packet -S default/web-client -D kube-system/coredns-6955765f44-zcbwj -f udp,udp_dst=53
  result: |
    Flow: udp,in_port=1,vlan_tci=0x0000,dl_src=aa:bb:cc:dd:ee:ff,dl_dst=aa:bb:cc:dd:ee:ff,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=0,tp_dst=53

    bridge("br-int")
    ----------------
     0. in_port=1, priority 200, cookie 0x5e000000000000
        load:0->NXM_NX_REG0[0..15]
        resubmit(,30)
    30. ip, priority 200, cookie 0x5e000000000000
        ct(table=31,zone=65520)
        drop
         -> A clone of the packet is forked to recirculate. The forked pipeline will be resumed at table 31.
         -> Sets the packet to an untracked state, and clears all the conntrack fields.

    Final flow: unchanged
    Megaflow: recirc_id=0,eth,udp,in_port=1,nw_frag=no,tp_src=0x0/0xfc00
    Datapath actions: ct(zone=65520),recirc(0x53)

    ===============================================================================
    recirc(0x53) - resume conntrack with default ct_state=trk|new (use --ct-next to customize)
    ===============================================================================

    Flow: recirc_id=0x53,ct_state=new|trk,ct_zone=65520,eth,udp,in_port=1,vlan_tci=0x0000,dl_src=aa:bb:cc:dd:ee:ff,dl_dst=aa:bb:cc:dd:ee:ff,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=0,tp_dst=53

    bridge("br-int")
    ----------------
        thaw
            Resuming from table 31
    31. priority 0, cookie 0x5e000000000000
        resubmit(,40)
    40. priority 0, cookie 0x5e000000000000
        resubmit(,50)
    50. priority 0, cookie 0x5e000000000000
        resubmit(,60)
    60. priority 0, cookie 0x5e000000000000
        resubmit(,70)
    70. ip,dl_dst=aa:bb:cc:dd:ee:ff,nw_dst=172.100.1.7, priority 200, cookie 0x5e030000000000
        set_field:62:39:b4:e8:05:76->eth_src
        set_field:52:bd:c6:e0:eb:c1->eth_dst
        dec_ttl
        resubmit(,80)
    80. dl_dst=52:bd:c6:e0:eb:c1, priority 200, cookie 0x5e030000000000
        load:0x5->NXM_NX_REG1[]
        load:0x1->NXM_NX_REG0[16]
        resubmit(,90)
    90. conj_id=2,ip, priority 190, cookie 0x5e050000000000
        resubmit(,105)
    105. ct_state=+new+trk,ip, priority 190, cookie 0x5e000000000000
        ct(commit,table=110,zone=65520)
        drop
         -> A clone of the packet is forked to recirculate. The forked pipeline will be resumed at table 110.
         -> Sets the packet to an untracked state, and clears all the conntrack fields.

    Final flow: recirc_id=0x53,eth,udp,reg0=0x10000,reg1=0x5,in_port=1,vlan_tci=0x0000,dl_src=62:39:b4:e8:05:76,dl_dst=52:bd:c6:e0:eb:c1,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=63,tp_src=0,tp_dst=53
    Megaflow: recirc_id=0x53,ct_state=+new-est-inv+trk,ct_mark=0,eth,udp,in_port=1,dl_src=aa:bb:cc:dd:ee:ff,dl_dst=aa:bb:cc:dd:ee:ff,nw_src=192.0.0.0/2,nw_dst=172.100.1.7,nw_ttl=64,nw_frag=no,tp_dst=53
    Datapath actions: set(eth(src=62:39:b4:e8:05:76,dst=52:bd:c6:e0:eb:c1)),set(ipv4(ttl=63)),ct(commit,zone=65520),recirc(0x54)

    ===============================================================================
    recirc(0x54) - resume conntrack with default ct_state=trk|new (use --ct-next to customize)
    ===============================================================================

    Flow: recirc_id=0x54,ct_state=new|trk,ct_zone=65520,eth,udp,reg0=0x10000,reg1=0x5,in_port=1,vlan_tci=0x0000,dl_src=62:39:b4:e8:05:76,dl_dst=52:bd:c6:e0:eb:c1,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=63,tp_src=0,tp_dst=53

    bridge("br-int")
    ----------------
        thaw
            Resuming from table 110
    110. ip,reg0=0x10000/0x10000, priority 200, cookie 0x5e000000000000
        output:NXM_NX_REG1[]
         -> output port is 5

    Final flow: unchanged
    Megaflow: recirc_id=0x54,eth,ip,in_port=1,nw_frag=no
    Datapath actions: 3