Network Flow Visibility in Antrea

Overview

Antrea is a Kubernetes network plugin that provides network connectivity and security features for Pod workloads. Given the scale and dynamism of Kubernetes workloads in a cluster, Network Flow Visibility helps with the management and configuration of Kubernetes resources such as NetworkPolicies, Services and Pods, and thereby provides opportunities to enhance the performance and security of Pod workloads.

For visualizing the network flows, Antrea monitors the flows in the Linux conntrack module. These flows are converted into flow records, and the flow records are post-processed before being sent to the configured external flow collector. The high-level design is shown below:

Antrea Flow Visibility Design

Flow Exporter

In Antrea, the basic building block for Network Flow Visibility is the Flow Exporter. The Flow Exporter operates within the Antrea Agent; it builds and maintains a connection store by periodically polling and dumping flows from the conntrack module. Connections from the connection store are exported to the Flow Aggregator Service using the IPFIX protocol; for this purpose, we use the IPFIX exporter process from the go-ipfix library.

Configuration

To enable the Flow Exporter feature in the Antrea Agent, the following config parameters have to be set in the Antrea Agent ConfigMap:

```yaml
antrea-agent.conf: |
  # FeatureGates is a map of feature names to bools that enable or disable experimental features.
  featureGates:
    # Enable flowexporter which exports polled conntrack connections as IPFIX flow records from each agent to a configured collector.
    FlowExporter: true

  # Provide the IPFIX collector address as a string with format <HOST>:[<PORT>][:<PROTO>].
  # HOST can either be the DNS name, IP, or Service name of the Flow Collector. If
  # using an IP, it can be either IPv4 or IPv6. However, IPv6 address should be
  # wrapped with []. When the collector is running in-cluster as a Service, set
  # <HOST> to <Service namespace>/<Service name>. For example,
  # "flow-aggregator/flow-aggregator" can be provided to connect to the Antrea
  # Flow Aggregator Service.
  # If PORT is empty, we default to 4739, the standard IPFIX port.
  # If no PROTO is given, we consider "tls" as default. We support "tls", "tcp" and
  # "udp" protocols. "tls" is used for securing communication between flow exporter and
  # flow aggregator.
  #flowCollectorAddr: "flow-aggregator/flow-aggregator:4739:tls"

  # Provide flow poll interval as a duration string. This determines how often the
  # flow exporter dumps connections from the conntrack module. Flow poll interval
  # should be greater than or equal to 1s (one second).
  # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  #flowPollInterval: "5s"

  # Provide the active flow export timeout, which is the timeout after which a flow
  # record is sent to the collector for active flows. Thus, for flows with a continuous
  # stream of packets, a flow record will be exported to the collector once the elapsed
  # time since the last export event is equal to the value of this timeout.
  # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  #activeFlowExportTimeout: "60s"

  # Provide the idle flow export timeout, which is the timeout after which a flow
  # record is sent to the collector for idle flows. A flow is considered idle if no
  # packet matching this flow has been observed since the last export event.
  # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  #idleFlowExportTimeout: "15s"
```

Please note that the default value for flowCollectorAddr is "flow-aggregator/flow-aggregator:4739:tls", which enables the Flow Exporter to connect to the Flow Aggregator Service, assuming it is running in the same K8s cluster with the Name and Namespace set to flow-aggregator. If you deploy the Flow Aggregator Service with a different Name and Namespace, then set flowCollectorAddr appropriately.
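For example, if the Flow Aggregator Service were deployed with the hypothetical Name my-flow-aggregator in the hypothetical Namespace monitoring, the parameter would be set as follows:

```yaml
# Hypothetical Service Namespace/Name, used for illustration only.
flowCollectorAddr: "monitoring/my-flow-aggregator:4739:tls"
```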

Please note that the default values for flowPollInterval, activeFlowExportTimeout, and idleFlowExportTimeout parameters are set to 5s, 60s, and 15s, respectively. TLS communication between the Flow Exporter and the Flow Aggregator is enabled by default. Please modify them as per your requirements.
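Once the ConfigMap is updated, the antrea-agent Pods must reload the configuration. Below is a minimal sketch, assuming the default resource names (the ConfigMap antrea-config and the DaemonSet antrea-agent in the kube-system Namespace; in some Antrea versions the ConfigMap name carries a hash suffix):

```bash
# Edit the Antrea ConfigMap to set the Flow Exporter parameters.
kubectl -n kube-system edit configmap antrea-config

# Restart the antrea-agent DaemonSet so the new configuration takes effect.
kubectl -n kube-system rollout restart daemonset/antrea-agent
```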

IPFIX Information Elements (IEs) in a Flow Record

The IPFIX IEs in each exported flow record are defined in the IANA-assigned IE registry, the Reverse IANA-assigned IE registry and the Antrea IE registry. The reverse IEs are used to provide bi-directional information about the flow. The Enterprise ID is 0 for the IANA-assigned IE registry, 29305 for the reverse IANA IE registry, and 56506 for the Antrea IE registry. All the IEs used by the Antrea Flow Exporter are listed below:

IEs from IANA-assigned IE Registry

| IPFIX Information Element | Field ID | Type            |
|---------------------------|----------|-----------------|
| flowStartSeconds          | 150      | dateTimeSeconds |
| flowEndSeconds            | 151      | dateTimeSeconds |
| flowEndReason             | 136      | unsigned8       |
| sourceIPv4Address         | 8        | ipv4Address     |
| destinationIPv4Address    | 12       | ipv4Address     |
| sourceIPv6Address         | 27       | ipv6Address     |
| destinationIPv6Address    | 28       | ipv6Address     |
| sourceTransportPort       | 7        | unsigned16      |
| destinationTransportPort  | 11       | unsigned16      |
| protocolIdentifier        | 4        | unsigned8       |
| packetTotalCount          | 86       | unsigned64      |
| octetTotalCount           | 85       | unsigned64      |
| packetDeltaCount          | 2        | unsigned64      |
| octetDeltaCount           | 1        | unsigned64      |

IEs from Reverse IANA-assigned IE Registry

| IPFIX Information Element | Field ID | Type       |
|---------------------------|----------|------------|
| reversePacketTotalCount   | 86       | unsigned64 |
| reverseOctetTotalCount    | 85       | unsigned64 |
| reversePacketDeltaCount   | 2        | unsigned64 |
| reverseOctetDeltaCount    | 1        | unsigned64 |

IEs from Antrea IE Registry

| IPFIX Information Element | Field ID | Type | Description |
|---------------------------|----------|------|-------------|
| sourcePodNamespace | 100 | string | |
| sourcePodName | 101 | string | |
| destinationPodNamespace | 102 | string | |
| destinationPodName | 103 | string | |
| sourceNodeName | 104 | string | |
| destinationNodeName | 105 | string | |
| destinationClusterIPv4 | 106 | ipv4Address | |
| destinationClusterIPv6 | 107 | ipv6Address | |
| destinationServicePort | 108 | unsigned16 | |
| destinationServicePortName | 109 | string | |
| ingressNetworkPolicyName | 110 | string | Name of the ingress network policy applied to the destination Pod for this flow. |
| ingressNetworkPolicyNamespace | 111 | string | Namespace of the ingress network policy applied to the destination Pod for this flow. |
| ingressNetworkPolicyType | 115 | unsigned8 | 1 stands for Kubernetes Network Policy. 2 stands for Antrea Network Policy. 3 stands for Antrea Cluster Network Policy. |
| ingressNetworkPolicyRuleName | 141 | string | Name of the ingress network policy rule applied to the destination Pod for this flow. |
| egressNetworkPolicyName | 112 | string | Name of the egress network policy applied to the source Pod for this flow. |
| egressNetworkPolicyNamespace | 113 | string | Namespace of the egress network policy applied to the source Pod for this flow. |
| egressNetworkPolicyType | 118 | unsigned8 | |
| egressNetworkPolicyRuleName | 142 | string | Name of the egress network policy rule applied to the source Pod for this flow. |
| ingressNetworkPolicyRuleAction | 139 | unsigned8 | 1 stands for Allow. 2 stands for Drop. 3 stands for Reject. |
| egressNetworkPolicyRuleAction | 140 | unsigned8 | |
| tcpState | 136 | string | The state of the TCP connection. The states are: LISTEN, SYN-SENT, SYN-RECEIVED, ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT, and CLOSED. |
| flowType | 137 | unsigned8 | 1 stands for Intra-Node. 2 stands for Inter-Node. 3 stands for To External. 4 stands for From External. |

Supported Capabilities

Types of Flows and Associated Information

Currently, the Flow Exporter feature provides visibility for Pod-to-Pod, Pod-to-Service and Pod-to-External network flows, along with the associated statistics such as data throughput (bits per second), packet throughput (packets per second), cumulative byte count and cumulative packet count. Pod-to-Service flow visibility is supported only when Antrea Proxy is enabled, which is the case by default starting with Antrea v0.11. In the future, we will enable support for External-to-Service flows.

Kubernetes information such as Node name, Pod name, Pod Namespace, Service name, NetworkPolicy name and NetworkPolicy Namespace, is added to the flow records. Network Policy Rule Action (Allow, Reject, Drop) is also supported for both Antrea-native NetworkPolicies and K8s NetworkPolicies. For K8s NetworkPolicies, connections dropped due to isolated Pod behavior will be assigned the Drop action. For flow records that are exported from any given Antrea Agent, the Flow Exporter only provides the information of Kubernetes entities that are local to the Antrea Agent. In other words, flow records are only complete for intra-Node flows, but incomplete for inter-Node flows. It is the responsibility of the Flow Aggregator to correlate flows from the source and destination Nodes and produce complete flow records.

Both Flow Exporter and Flow Aggregator are supported in IPv4 clusters, IPv6 clusters and dual-stack clusters.

Connection Metrics

We support the following connection metrics as Prometheus metrics, exposed through the Antrea Agent apiserver endpoint:

- antrea_agent_conntrack_total_connection_count
- antrea_agent_conntrack_antrea_connection_count
- antrea_agent_denied_connection_count
- antrea_agent_conntrack_max_connection_count
- antrea_agent_flow_collector_reconnection_count
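As a brief illustration, these metrics can be queried from inside an antrea-agent Pod. This is a sketch only: the Pod label selector and the default agent apiserver port (10350) are assumptions here, the endpoint may require an authorization token depending on your setup, and curl must be available in the container image:

```bash
# Look up one antrea-agent Pod (label selector is an assumption; adjust as needed).
AGENT_POD=$(kubectl -n kube-system get pods -l component=antrea-agent \
  -o jsonpath='{.items[0].metadata.name}')

# Query the metrics endpoint on the agent apiserver (port 10350 assumed).
kubectl -n kube-system exec "$AGENT_POD" -c antrea-agent -- \
  curl -sk https://127.0.0.1:10350/metrics | grep antrea_agent_
```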

Flow Aggregator

Flow Aggregator is deployed as a Kubernetes Service. The main functionality of the Flow Aggregator is to store, correlate and aggregate the flow records received from the Flow Exporter of each Antrea Agent. More details on the functionality are provided in the Supported Capabilities section below.

Flow Aggregator is implemented as an IPFIX mediator, which consists of an IPFIX Collector Process, an IPFIX Intermediate Process and an IPFIX Exporter Process. We use the go-ipfix library to implement the Flow Aggregator.

Deployment

To deploy a released version of Flow Aggregator Service, pick a deployment manifest from the list of releases. For any given release <TAG> (e.g. v0.12.0), you can deploy Flow Aggregator as follows:

```bash
kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/flow-aggregator.yml
```

To deploy the latest version of Flow Aggregator Service (built from the main branch), use the checked-in deployment yaml:

```bash
kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/flow-aggregator.yml
```

Configuration

The following configuration parameters have to be provided through the Flow Aggregator ConfigMap. The Flow Aggregator needs to be configured with at least one of the supported Flow Collectors: flowCollector is mandatory for the go-ipfix collector, and clickHouse is mandatory for the Grafana Flow Collector. We provide example values for these parameters in the following snippet.

- If you have deployed the go-ipfix collector, then please set flowCollector.enable to true and set flowCollector.address in the format <IPFIX collector Cluster IP>:<port>:<tcp|udp>.
- If you have deployed the Grafana Flow Collector, then please enable the collector by setting clickHouse.enable to true. If it is deployed following the deployment steps, the ClickHouse server is already exposed via a K8s Service, and no further configuration is required. If a different FQDN or IP is desired, set clickHouse.databaseURL using the following format: tcp://<ClickHouse server FQDN or IP>:<ClickHouse TCP port>.
```yaml
flow-aggregator.conf: |
  # Provide the active flow record timeout as a duration string. This determines
  # how often the flow aggregator exports the active flow records to the flow
  # collector. Thus, for flows with a continuous stream of packets, a flow record
  # will be exported to the collector once the elapsed time since the last export
  # event in the flow aggregator is equal to the value of this timeout.
  # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  activeFlowRecordTimeout: 60s

  # Provide the inactive flow record timeout as a duration string. This determines
  # how often the flow aggregator exports the inactive flow records to the flow
  # collector. A flow record is considered to be inactive if no matching record
  # has been received by the flow aggregator in the specified interval.
  # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  inactiveFlowRecordTimeout: 90s

  # Provide the transport protocol for the flow aggregator collecting process, which is tls, tcp or udp.
  aggregatorTransportProtocol: "tls"

  # Provide an extra DNS name or IP address of flow aggregator for generating TLS certificate.
  flowAggregatorAddress: ""

  # recordContents enables configuring some fields in the flow records. Fields can
  # be excluded to reduce record size, but some features or external tooling may
  # depend on these fields.
  recordContents:
    # Determine whether source and destination Pod labels will be included in the flow records.
    podLabels: false

  # apiServer contains APIServer related configuration options.
  apiServer:
    # The port for the flow-aggregator APIServer to serve on.
    apiPort: 10348
    # Comma-separated list of Cipher Suites. If omitted, the default Go Cipher Suites will be used.
    # https://golang.org/pkg/crypto/tls/#pkg-constants
    # Note that TLS1.3 Cipher Suites cannot be added to the list. But the apiserver will always
    # prefer TLS1.3 Cipher Suites whenever possible.
    tlsCipherSuites: ""
    # TLS min version from: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13.
    tlsMinVersion: ""

  # flowCollector contains external IPFIX or JSON collector related configuration options.
  flowCollector:
    # Enable is the switch to enable exporting flow records to external flow collector.
    enable: false
    # Provide the flow collector address as string with format <IP>:<port>[:<proto>], where proto is tcp or udp.
    # If no L4 transport proto is given, we consider tcp as default.
    address: ""
    # Provide the 32-bit Observation Domain ID which will uniquely identify this instance of the flow
    # aggregator to an external flow collector. If omitted, an Observation Domain ID will be generated
    # from the persistent cluster UUID generated by Antrea. Failing that (e.g. because the cluster UUID
    # is not available), a value will be randomly generated, which may vary across restarts of the flow
    # aggregator.
    #observationDomainID:
    # Provide format for records sent to the configured flow collector.
    # Supported formats are IPFIX and JSON.
    recordFormat: "IPFIX"

  # clickHouse contains ClickHouse related configuration options.
  clickHouse:
    # Enable is the switch to enable exporting flow records to ClickHouse.
    enable: false
    # Database is the name of database where Antrea "flows" table is created.
    database: "default"
    # DatabaseURL is the url to the database. TCP protocol is required.
    databaseURL: "tcp://clickhouse-clickhouse.flow-visibility.svc:9000"
    # Debug enables debug logs from ClickHouse sql driver.
    debug: false
    # Compress enables lz4 compression when committing flow records.
    compress: true
    # CommitInterval is the periodical interval between batch commit of flow records to DB.
    # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
    # The minimum interval is 1s based on ClickHouse documentation for best performance.
    commitInterval: "8s"
```
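As an example, here is a minimal sketch of a flow-aggregator.conf that enables export to an external go-ipfix collector; the address below is purely illustrative, so substitute your collector's ClusterIP or FQDN, port and protocol:

```yaml
flow-aggregator.conf: |
  flowCollector:
    enable: true
    # Illustrative address only; use your collector's <IP>:<port>[:<proto>].
    address: "10.96.0.50:4739:tcp"
  clickHouse:
    enable: false
```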

Please note that the default values for the activeFlowRecordTimeout, inactiveFlowRecordTimeout and aggregatorTransportProtocol parameters are 60s, 90s and tls, respectively. Please make sure that aggregatorTransportProtocol and the protocol of flowCollectorAddr in antrea-agent.conf are both set to tls to guarantee that secure communication works properly. The protocol of flowCollectorAddr and aggregatorTransportProtocol must always match, so TLS must either be enabled on both sides or disabled on both sides. Please modify the parameters as per your requirements.
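For instance, the following settings (one from each ConfigMap) keep the two sides in agreement, with TLS enabled on both:

```yaml
# In the Antrea Agent ConfigMap (antrea-agent.conf):
flowCollectorAddr: "flow-aggregator/flow-aggregator:4739:tls"

# In the Flow Aggregator ConfigMap (flow-aggregator.conf):
aggregatorTransportProtocol: "tls"
```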

Please note that the default value for recordContents.podLabels is false, which indicates that source and destination Pod labels will not be included in the flow records exported to flowCollector and clickHouse. If you would like to include them, you can set the value to true.

Please note that the default value for apiServer.apiPort is 10348, which is the port used to expose the Flow Aggregator’s APIServer. Please modify the parameters as per your requirements.

Please note that the default value for clickHouse.commitInterval is 8s, which is based on experiment results to achieve the best ClickHouse write performance and data retention. Based on the ClickHouse recommendation for best performance, this interval is required to be no shorter than 1s. Also note that the Flow Aggregator has a cache limit of ~500k records for the ClickHouse-Grafana collector. If clickHouse.commitInterval is set to too large a value, there is a risk of losing records.

IPFIX Information Elements (IEs) in an Aggregated Flow Record

In addition to the IPFIX information elements listed in the previous section, the Flow Aggregator adds the following fields to the flow records.

IEs from Antrea IE Registry

| IPFIX Information Element | Field ID | Type | Description |
|---------------------------|----------|------|-------------|
| packetTotalCountFromSourceNode | 120 | unsigned64 | The cumulative number of packets for this flow as reported by the source Node, since the flow started. |
| octetTotalCountFromSourceNode | 121 | unsigned64 | The cumulative number of octets for this flow as reported by the source Node, since the flow started. |
| packetDeltaCountFromSourceNode | 122 | unsigned64 | The number of packets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
| octetDeltaCountFromSourceNode | 123 | unsigned64 | The number of octets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
| reversePacketTotalCountFromSourceNode | 124 | unsigned64 | The cumulative number of reverse packets for this flow as reported by the source Node, since the flow started. |
| reverseOctetTotalCountFromSourceNode | 125 | unsigned64 | The cumulative number of reverse octets for this flow as reported by the source Node, since the flow started. |
| reversePacketDeltaCountFromSourceNode | 126 | unsigned64 | The number of reverse packets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
| reverseOctetDeltaCountFromSourceNode | 127 | unsigned64 | The number of reverse octets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
| packetTotalCountFromDestinationNode | 128 | unsigned64 | The cumulative number of packets for this flow as reported by the destination Node, since the flow started. |
| octetTotalCountFromDestinationNode | 129 | unsigned64 | The cumulative number of octets for this flow as reported by the destination Node, since the flow started. |
| packetDeltaCountFromDestinationNode | 130 | unsigned64 | The number of packets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
| octetDeltaCountFromDestinationNode | 131 | unsigned64 | The number of octets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
| reversePacketTotalCountFromDestinationNode | 132 | unsigned64 | The cumulative number of reverse packets for this flow as reported by the destination Node, since the flow started. |
| reverseOctetTotalCountFromDestinationNode | 133 | unsigned64 | The cumulative number of reverse octets for this flow as reported by the destination Node, since the flow started. |
| reversePacketDeltaCountFromDestinationNode | 134 | unsigned64 | The number of reverse packets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
| reverseOctetDeltaCountFromDestinationNode | 135 | unsigned64 | The number of reverse octets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
| sourcePodLabels | 143 | string | |
| destinationPodLabels | 144 | string | |
| throughput | 145 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point. The unit is bits per second. |
| reverseThroughput | 146 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point. The unit is bits per second. |
| throughputFromSourceNode | 147 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point, based on the records sent from the source Node. The unit is bits per second. |
| throughputFromDestinationNode | 148 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point, based on the records sent from the destination Node. The unit is bits per second. |
| reverseThroughputFromSourceNode | 149 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point, based on the records sent from the source Node. The unit is bits per second. |
| reverseThroughputFromDestinationNode | 150 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point, based on the records sent from the destination Node. The unit is bits per second. |
| flowEndSecondsFromSourceNode | 151 | unsigned32 | The absolute timestamp of the last packet of this flow, based on the records sent from the source Node. The unit is seconds. |
| flowEndSecondsFromDestinationNode | 152 | unsigned32 | The absolute timestamp of the last packet of this flow, based on the records sent from the destination Node. The unit is seconds. |

Supported Capabilities

Storage of Flow Records

Flow Aggregator stores the flow records received from Antrea Agents in a hash map, where the flow key is the 5-tuple of a network connection. The 5-tuple consists of Source IP, Destination IP, Source Port, Destination Port and Transport Protocol. Therefore, Flow Aggregator maintains one flow record for any given connection, and this flow record gets updated until the connection in the Kubernetes cluster becomes invalid.

Correlation of Flow Records

In the case of inter-Node flows, there are two flow records: one from the source Node, where the flow originates, and one from the destination Node, where the destination Pod resides. Both flow records contain incomplete information, as explained in the Types of Flows and Associated Information section above. Flow Aggregator provides support for the correlation of the flow records from the source Node and the destination Node, and it exports a single flow record with complete information for both inter-Node and intra-Node flows.

Aggregation of Flow Records

Flow Aggregator aggregates the flow records that belong to a single connection. As part of aggregation, fields such as flow timestamps and flow statistics are updated. For the purpose of updating flow statistics fields, Flow Aggregator introduces new fields into the Antrea IE registry, corresponding to the source Node and the destination Node, so that flow statistics from different Nodes can be preserved.

Antctl Support

antctl can access the Flow Aggregator API to dump flow records and print metrics about flow record processing. Refer to the antctl documentation for more information.
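Below is a brief sketch of a typical session (command names follow the antctl documentation; run antctl help inside the Pod to confirm which commands your Antrea version supports, and note that the Pod name below is a placeholder):

```bash
# Open a shell inside the Flow Aggregator Pod.
kubectl exec -it -n flow-aggregator <flow-aggregator-pod-name> -- bash

# Dump the flow records stored by the Flow Aggregator.
antctl get flowrecords

# Print metrics about flow record processing.
antctl get recordmetrics
```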

Quick Deployment

If you would like to quickly try the Network Flow Visibility feature, you can deploy Antrea, the Flow Aggregator Service, and the Grafana Flow Collector on the Vagrant setup.

Image-building Steps

Build the required images in the antrea repository by running the following make commands:

```bash
make
make flow-aggregator-image
```

Deployment Steps

Given any external IPFIX flow collector, you can deploy Antrea and the Flow Aggregator Service on a default Vagrant setup by running the following commands:

```bash
./infra/vagrant/provision.sh
./infra/vagrant/push_antrea.sh --flow-collector <externalFlowCollectorAddress>
```
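For example, assuming a hypothetical external collector reachable at 192.168.77.100 over UDP (and assuming the address uses the same <HOST>:[<PORT>][:<PROTO>] format as flowCollectorAddr):

```bash
./infra/vagrant/push_antrea.sh --flow-collector 192.168.77.100:4739:udp
```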

If you would like to deploy the Grafana Flow Collector, you can run the following command:

```bash
./infra/vagrant/provision.sh
./infra/vagrant/push_antrea.sh --flow-collector Grafana
```

Flow Collectors

Here we list two choices for the external flow collector: the go-ipfix collector and the Grafana Flow Collector. For each collector, we describe how to deploy it and how to output or visualize the collected flow records.

Go-ipfix Collector

Deployment Steps

The go-ipfix collector can be built from the go-ipfix library. It is used to collect, decode and log the IPFIX records.

- To deploy a released version of the go-ipfix collector, please choose one deployment manifest from the list of releases (supported after v0.5.2). For any given release <TAG> (e.g. v0.5.2), you can deploy the collector as follows:

  ```bash
  kubectl apply -f https://github.com/vmware/go-ipfix/releases/download/<TAG>/ipfix-collector.yaml
  ```

- To deploy the latest version of the go-ipfix collector (built from the main branch), use the checked-in deployment manifest:

  ```bash
  kubectl apply -f https://raw.githubusercontent.com/vmware/go-ipfix/main/build/yamls/ipfix-collector.yaml
  ```
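For example, to deploy the v0.5.2 release of the collector:

```bash
kubectl apply -f https://github.com/vmware/go-ipfix/releases/download/v0.5.2/ipfix-collector.yaml
```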

The go-ipfix collector also supports customizing its port and protocol parameters. Please follow the go-ipfix documentation to configure those parameters if needed.

Output Flow Records

To output the flow records collected by the go-ipfix collector, use the command below:

```bash
kubectl logs <ipfix-collector-pod-name> -n ipfix
```

Grafana Flow Collector (migrated)

Starting with Antrea v1.8, support for the Grafana Flow Collector has been migrated to Theia.

The Grafana Flow Collector was added in Antrea v1.6.0. In Antrea v1.7.0, we started moving the network observability and analytics functionalities of Antrea to Project Theia, including the Grafana Flow Collector. Going forward, further development of the Grafana Flow Collector will happen in the Theia repo. For the up-to-date version of the Grafana Flow Collector and other Theia features, please refer to the Theia documentation.

ELK Flow Collector (removed)

Starting with Antrea v1.7, support for the ELK Flow Collector has been removed. Please consider using the Grafana Flow Collector instead, which is actively maintained.