OVN-Kubernetes architecture

Introduction to OVN-Kubernetes architecture

The following diagram shows the OVN-Kubernetes architecture.


Figure 1. OVN-Kubernetes architecture

The key components are:

  • Cloud Management System (CMS) - A platform-specific client for OVN that provides a CMS-specific plugin for OVN integration. The plugin translates the cloud management system’s concept of the logical network configuration, stored in the CMS configuration database in a CMS-specific format, into an intermediate representation understood by OVN.

  • OVN Northbound database (nbdb) - Stores the logical network configuration passed by the CMS plugin.

  • OVN Southbound database (sbdb) - Stores the physical and logical network configuration state for the Open vSwitch (OVS) system on each node, including the tables that bind them together.

  • ovn-northd - The intermediary client between the nbdb and the sbdb. It translates the logical network configuration, expressed in conventional network concepts and taken from the nbdb, into logical data path flows in the sbdb below it. The container name is northd and it runs in the ovnkube-master pods.

  • ovn-controller - The OVN agent that interacts with OVS and hypervisors to supply any information or updates that the sbdb needs. The ovn-controller reads logical flows from the sbdb, translates them into OpenFlow flows, and sends them to the node’s OVS daemon. The container name is ovn-controller and it runs in the ovnkube-node pods.

The OVN northbound database holds the logical network configuration passed down to it by the cloud management system (CMS). It contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. The ovn-northd (northd container) connects to the OVN northbound database and the OVN southbound database. It translates the logical network configuration, expressed in conventional network concepts and taken from the OVN northbound database, into logical data path flows in the OVN southbound database.

The OVN southbound database has physical and logical representations of the network and binding tables that link them together. Every node in the cluster is represented in the southbound database, and you can see the ports that are connected to it. The southbound database also contains all the logical flows. The logical flows are shared with the ovn-controller process that runs on each node, and the ovn-controller turns them into OpenFlow rules to program Open vSwitch.

The Kubernetes control plane nodes each contain an ovnkube-master pod which hosts containers for the OVN northbound and southbound databases. All OVN northbound databases form a Raft cluster and all southbound databases form a separate Raft cluster. At any given time a single ovnkube-master is the leader and the other ovnkube-master pods are followers.
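
Because each control plane node hosts members of both Raft clusters, you can see the database endpoints that the cluster members expose by inspecting the ovnkube-db endpoints. This is a minimal sketch, assuming the openshift-ovn-kubernetes namespace and the ovnkube-db endpoints object shown in the output later in this section; the addresses in your cluster will differ, and typically port 9641 serves the northbound database and port 9642 serves the southbound database:

    $ oc get endpoints ovnkube-db -n openshift-ovn-kubernetes

Each control plane node appears once per database port, which reflects the three-member northbound and southbound Raft clusters.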

Listing all resources in the OVN-Kubernetes project

Finding the resources and containers that run in the OVN-Kubernetes project is important to help you understand the OVN-Kubernetes networking implementation.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • The OpenShift CLI (oc) installed.

Procedure

  1. Run the following command to get all resources, endpoints, and ConfigMaps in the OVN-Kubernetes project:

    $ oc get all,ep,cm -n openshift-ovn-kubernetes

    Example output

    NAME READY STATUS RESTARTS AGE
    pod/ovnkube-master-9g7zt 6/6 Running 1 (48m ago) 57m
    pod/ovnkube-master-lqs4v 6/6 Running 0 57m
    pod/ovnkube-master-vxhtq 6/6 Running 0 57m
    pod/ovnkube-node-9k9kc 5/5 Running 0 57m
    pod/ovnkube-node-jg52r 5/5 Running 0 51m
    pod/ovnkube-node-k8wf7 5/5 Running 0 57m
    pod/ovnkube-node-tlwk6 5/5 Running 0 47m
    pod/ovnkube-node-xsvnk 5/5 Running 0 57m

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/ovn-kubernetes-master ClusterIP None <none> 9102/TCP 57m
    service/ovn-kubernetes-node ClusterIP None <none> 9103/TCP,9105/TCP 57m
    service/ovnkube-db ClusterIP None <none> 9641/TCP,9642/TCP 57m

    NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
    daemonset.apps/ovnkube-master 3 3 3 3 3 beta.kubernetes.io/os=linux,node-role.kubernetes.io/master= 57m
    daemonset.apps/ovnkube-node 5 5 5 5 5 beta.kubernetes.io/os=linux 57m

    NAME ENDPOINTS AGE
    endpoints/ovn-kubernetes-master 10.0.132.11:9102,10.0.151.18:9102,10.0.192.45:9102 57m
    endpoints/ovn-kubernetes-node 10.0.132.11:9105,10.0.143.72:9105,10.0.151.18:9105 + 7 more... 57m
    endpoints/ovnkube-db 10.0.132.11:9642,10.0.151.18:9642,10.0.192.45:9642 + 3 more... 57m

    NAME DATA AGE
    configmap/control-plane-status 1 55m
    configmap/kube-root-ca.crt 1 57m
    configmap/openshift-service-ca.crt 1 57m
    configmap/ovn-ca 1 57m
    configmap/ovn-kubernetes-master 0 55m
    configmap/ovnkube-config 1 57m
    configmap/signer-ca 1 57m

    There are three ovnkube-master pods that run on the control plane nodes, and two daemon sets that deploy the ovnkube-master and ovnkube-node pods. There is one ovnkube-node pod for each node in the cluster; this example shows five ovnkube-node pods, so the cluster has five nodes. The ovnkube-config ConfigMap has the OKD OVN-Kubernetes configuration used by ovnkube-master and ovnkube-node. The ovn-kubernetes-master ConfigMap has the information about the current ovnkube-master leader.

  2. List all the containers in the ovnkube-master pods by running the following command:

    $ oc get pods ovnkube-master-9g7zt \
      -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes

    Expected output

    northd nbdb kube-rbac-proxy sbdb ovnkube-master ovn-dbchecker

    The ovnkube-master pod is made up of several containers. It is responsible for hosting the northbound database (nbdb container) and the southbound database (sbdb container), watching for cluster events for pods, egress IPs, namespaces, services, endpoints, egress firewalls, and network policies and writing them to the northbound database (ovnkube-master container), and managing pod subnet allocation to nodes.

  3. List all the containers in the ovnkube-node pods by running the following command:

    $ oc get pods ovnkube-node-jg52r \
      -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes

    Expected output

    ovn-controller ovn-acl-logging kube-rbac-proxy kube-rbac-proxy-ovn-metrics ovnkube-node

    The ovnkube-node pod has a container (ovn-controller) that resides on each OKD node. Each node’s ovn-controller connects northbound to the OVN southbound database to learn about the OVN configuration. The ovn-controller connects southbound to ovs-vswitchd as an OpenFlow controller, for control over network traffic, and to the local ovsdb-server to allow it to monitor and control the Open vSwitch configuration. A short sketch for inspecting these containers and the ovnkube-config ConfigMap follows this procedure.
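
As a follow-up, the following sketch shows two ways to look inside these components: dumping the ovnkube-config ConfigMap and tailing the logs of an individual container. The pod names are taken from the example output above and will differ in your cluster:

    $ oc get configmap ovnkube-config -n openshift-ovn-kubernetes -o yaml

    $ oc logs ovnkube-master-9g7zt -c ovnkube-master -n openshift-ovn-kubernetes --tail=20

    $ oc logs ovnkube-node-jg52r -c ovn-controller -n openshift-ovn-kubernetes --tail=20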

Listing the OVN-Kubernetes northbound database contents

To understand logical flow rules you need to examine the northbound database and see which objects are there and how they are translated into logical flow rules. The up-to-date information is present on the OVN Raft leader, and this procedure describes how to find the Raft leader and then query it to list the OVN northbound database contents.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • The OpenShift CLI (oc) installed.

Procedure

  1. Find the OVN Raft leader for the northbound database.

    The Raft leader stores the most up-to-date information.

    1. List the pods by running the following command:

      $ oc get po -n openshift-ovn-kubernetes

      Example output

      NAME READY STATUS RESTARTS AGE
      ovnkube-master-7j97q 6/6 Running 2 (148m ago) 149m
      ovnkube-master-gt4ms 6/6 Running 1 (140m ago) 147m
      ovnkube-master-mk6p6 6/6 Running 0 148m
      ovnkube-node-8qvtr 5/5 Running 0 149m
      ovnkube-node-fqdc9 5/5 Running 0 149m
      ovnkube-node-tlfwv 5/5 Running 0 149m
      ovnkube-node-wlwkn 5/5 Running 0 142m
    2. Choose one of the master pods at random and run the following command:

      $ oc exec -n openshift-ovn-kubernetes ovnkube-master-7j97q \
        -- /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl \
        --timeout=3 cluster/status OVN_Northbound

      Example output

      Defaulted container "northd" out of: northd, nbdb, kube-rbac-proxy, sbdb, ovnkube-master, ovn-dbchecker
      1c57
      Name: OVN_Northbound
      Cluster ID: c48a (c48aa5c0-a704-4c77-a066-24fe99d9b338)
      Server ID: 1c57 (1c57b6fc-2849-49b7-8679-fbf18bafe339)
      Address: ssl:10.0.147.219:9643
      Status: cluster member
      Role: follower (1)
      Term: 5
      Leader: 2b4f (2)
      Vote: unknown
      Election timer: 10000
      Log: [2, 3018]
      Entries not yet committed: 0
      Entries not yet applied: 0
      Connections: ->0000 ->0000 <-8844 <-2b4f
      Disconnections: 0
      Servers:
      1c57 (1c57 at ssl:10.0.147.219:9643) (self)
      8844 (8844 at ssl:10.0.163.212:9643) last msg 8928047 ms ago
      2b4f (2b4f at ssl:10.0.242.240:9643) last msg 620 ms ago (3)
      (1) This pod is identified as a follower.
      (2) The leader is identified as 2b4f.
      (3) 2b4f is on IP address 10.0.242.240.
    3. Find the ovnkube-master pod running on IP address 10.0.242.240 by running the following command:

      $ oc get po -o wide -n openshift-ovn-kubernetes | grep 10.0.242.240 | grep -v ovnkube-node

      Example output

      ovnkube-master-gt4ms 6/6 Running 1 (143m ago) 150m 10.0.242.240 ip-10-0-242-240.ec2.internal <none> <none>

      The ovnkube-master-gt4ms pod runs on IP address 10.0.242.240.

  2. Run the following command to show all the objects in the northbound database:

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
      -c northd -- ovn-nbctl show

    The output is too long to list here. The list includes the NAT rules, logical switches, load balancers, and so on.

    Run the following command to display the options available with the command ovn-nbctl:

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 \
      -c northd -- ovn-nbctl --help

    You can narrow down and focus on specific components by using some of the following commands:

  3. Run the following command to show the list of logical routers:

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
      -c northd -- ovn-nbctl lr-list

    Example output

    f971f1f3-5112-402f-9d1e-48f1d091ff04 (GR_ip-10-0-145-205.ec2.internal)
    69c992d8-a4cf-429e-81a3-5361209ffe44 (GR_ip-10-0-147-219.ec2.internal)
    7d164271-af9e-4283-b84a-48f2a44851cd (GR_ip-10-0-163-212.ec2.internal)
    111052e3-c395-408b-97b2-8dd0a20a29a5 (GR_ip-10-0-165-9.ec2.internal)
    ed50ce33-df5d-48e8-8862-2df6a59169a0 (GR_ip-10-0-209-170.ec2.internal)
    f44e2a96-8d1e-4a4d-abae-ed8728ac6851 (GR_ip-10-0-242-240.ec2.internal)
    ef3d0057-e557-4b1a-b3c6-fcc3463790b0 (ovn_cluster_router)

    From this output you can see that there is a gateway router for each node, plus an ovn_cluster_router.

  4. Run the following command to show the list of logical switches:

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
      -c northd -- ovn-nbctl ls-list

    Example output

    82808c5c-b3bc-414a-bb59-8fec4b07eb14 (ext_ip-10-0-145-205.ec2.internal)
    3d22444f-0272-4c51-afc6-de9e03db3291 (ext_ip-10-0-147-219.ec2.internal)
    bf73b9df-59ab-4c58-a456-ce8205b34ac5 (ext_ip-10-0-163-212.ec2.internal)
    bee1e8d0-ec87-45eb-b98b-63f9ec213e5e (ext_ip-10-0-165-9.ec2.internal)
    812f08f2-6476-4abf-9a78-635f8516f95e (ext_ip-10-0-209-170.ec2.internal)
    f65e710b-32f9-482b-8eab-8d96a44799c1 (ext_ip-10-0-242-240.ec2.internal)
    84dad700-afb8-4129-86f9-923a1ddeace9 (ip-10-0-145-205.ec2.internal)
    1b7b448b-e36c-4ca3-9f38-4a2cf6814bfd (ip-10-0-147-219.ec2.internal)
    d92d1f56-2606-4f23-8b6a-4396a78951de (ip-10-0-163-212.ec2.internal)
    6864a6b2-de15-4de3-92d8-f95014b6f28f (ip-10-0-165-9.ec2.internal)
    c26bf618-4d7e-4afd-804f-1a2cbc96ec6d (ip-10-0-209-170.ec2.internal)
    ab9a4526-44ed-4f82-ae1c-e20da04947d9 (ip-10-0-242-240.ec2.internal)
    a8588aba-21da-4276-ba0f-9d68e88911f0 (join)

    From this output you can see that there is an ext switch for each node, plus a switch named after each node and a join switch.

  5. Run the following command to show the list of load balancers:

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
      -c northd -- ovn-nbctl lb-list

    Example output

    UUID LB PROTO VIP IPs
    f0fb50f9-4968-4b55-908c-616bae4db0a2 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,10.0.163.212:6443,169.254.169.2:6443
    0dc42012-4f5b-432e-ae01-2cc4bfe81b00 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,169.254.169.2:6443,10.0.242.240:6443
    f7fff5d5-5eff-4a40-98b1-3a4ba8f7f69c Service_default/ tcp 172.30.0.1:443 169.254.169.2:6443,10.0.163.212:6443,10.0.242.240:6443
    12fe57a0-50a4-4a1b-ac10-5f288badee07 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,10.0.163.212:6443,10.0.242.240:6443
    3f137fbf-0b78-4875-ba44-fbf89f254cf7 Service_openshif tcp 172.30.23.153:443 10.130.0.14:8443
    174199fe-0562-4141-b410-12094db922a7 Service_openshif tcp 172.30.69.51:50051 10.130.0.84:50051
    5ee2d4bd-c9e2-4d16-a6df-f54cd17c9ac3 Service_openshif tcp 172.30.143.87:9001 10.0.145.205:9001,10.0.147.219:9001,10.0.163.212:9001,10.0.165.9:9001,10.0.209.170:9001,10.0.242.240:9001
    a056ae3d-83f8-45bc-9c80-ef89bce7b162 Service_openshif tcp 172.30.164.74:443 10.0.147.219:6443,10.0.163.212:6443,10.0.242.240:6443
    bac51f3d-9a6f-4f5e-ac02-28fd343a332a Service_openshif tcp 172.30.0.10:53 10.131.0.6:5353
    tcp 172.30.0.10:9154 10.131.0.6:9154
    48105bbc-51d7-4178-b975-417433f9c20a Service_openshif tcp 172.30.26.159:2379 10.0.147.219:2379,169.254.169.2:2379,10.0.242.240:2379
    tcp 172.30.26.159:9979 10.0.147.219:9979,169.254.169.2:9979,10.0.242.240:9979
    7de2b8fc-342a-415f-ac13-1a493f4e39c0 Service_openshif tcp 172.30.53.219:443 10.128.0.7:8443
    tcp 172.30.53.219:9192 10.128.0.7:9192
    2cef36bc-d720-4afb-8d95-9350eff1d27a Service_openshif tcp 172.30.81.66:443 10.128.0.23:8443
    365cb6fb-e15e-45a4-a55b-21868b3cf513 Service_openshif tcp 172.30.96.51:50051 10.130.0.19:50051
    41691cbb-ec55-4cdb-8431-afce679c5e8d Service_openshif tcp 172.30.98.218:9099 169.254.169.2:9099
    82df10ba-8143-400b-977a-8f5f416a4541 Service_openshif tcp 172.30.26.159:2379 10.0.147.219:2379,10.0.163.212:2379,169.254.169.2:2379
    tcp 172.30.26.159:9979 10.0.147.219:9979,10.0.163.212:9979,169.254.169.2:9979
    debe7f3a-39a8-490e-bc0a-ebbfafdffb16 Service_openshif tcp 172.30.23.244:443 10.128.0.48:8443,10.129.0.27:8443,10.130.0.45:8443
    8a749239-02d9-4dc2-8737-716528e0da7b Service_openshif tcp 172.30.124.255:8443 10.128.0.14:8443
    880c7c78-c790-403d-a3cb-9f06592717a3 Service_openshif tcp 172.30.0.10:53 10.130.0.20:5353
    tcp 172.30.0.10:9154 10.130.0.20:9154
    d2f39078-6751-4311-a161-815bbaf7f9c7 Service_openshif tcp 172.30.26.159:2379 169.254.169.2:2379,10.0.163.212:2379,10.0.242.240:2379
    tcp 172.30.26.159:9979 169.254.169.2:9979,10.0.163.212:9979,10.0.242.240:9979
    30948278-602b-455c-934a-28e64c46de12 Service_openshif tcp 172.30.157.35:9443 10.130.0.43:9443
    2cc7e376-7c02-4a82-89e8-dfa1e23fb003 Service_openshif tcp 172.30.159.212:17698 10.128.0.48:17698,10.129.0.27:17698,10.130.0.45:17698
    e7d22d35-61c2-40c2-bc30-265cff8ed18d Service_openshif tcp 172.30.143.87:9001 10.0.145.205:9001,10.0.147.219:9001,10.0.163.212:9001,10.0.165.9:9001,10.0.209.170:9001,169.254.169.2:9001
    75164e75-e0c5-40fb-9636-bfdbf4223a02 Service_openshif tcp 172.30.150.68:1936 10.129.4.8:1936,10.131.0.10:1936
    tcp 172.30.150.68:443 10.129.4.8:443,10.131.0.10:443
    tcp 172.30.150.68:80 10.129.4.8:80,10.131.0.10:80
    7bc4ee74-dccf-47e9-9149-b011f09aff39 Service_openshif tcp 172.30.164.74:443 10.0.147.219:6443,10.0.163.212:6443,169.254.169.2:6443
    0db59e74-1cc6-470c-bf44-57c520e0aa8f Service_openshif tcp 10.0.163.212:31460
    tcp 10.0.163.212:32361
    c300e134-018c-49af-9f84-9deb1d0715f8 Service_openshif tcp 172.30.42.244:50051 10.130.0.47:50051
    5e352773-429b-4881-afb3-a13b7ba8b081 Service_openshif tcp 172.30.244.66:443 10.129.0.8:8443,10.130.0.8:8443
    54b82d32-1939-4465-a87d-f26321442a7a Service_openshif tcp 172.30.12.9:8443 10.128.0.35:8443

    From this truncated output you can see that there are many OVN-Kubernetes load balancers. Load balancers in OVN-Kubernetes are representations of services; a sketch that correlates one of these load balancers with its Kubernetes service follows this procedure.
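
The virtual IPs (VIPs) in the lb-list output are service cluster IPs. A minimal sketch, reusing the leader pod name and the 172.30.0.1:443 VIP from the example output above (both will differ in your cluster), that matches a load balancer VIP to the service it represents:

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
      -c northd -- ovn-nbctl lb-list | grep '172.30.0.1:443'

    $ oc get svc --all-namespaces | grep '172.30.0.1 '

The trailing space in the second grep keeps the match to the exact cluster IP rather than other addresses that share the prefix.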

Command line arguments for ovn-nbctl to examine northbound database contents

The following table describes the command line arguments that can be used with ovn-nbctl to examine the contents of the northbound database.

Table 1. Command line arguments to examine northbound database contents
Argument                             Description
ovn-nbctl show                       An overview of the northbound database contents.
ovn-nbctl show <switch_or_router>    Show the details associated with the specified switch or router.
ovn-nbctl lr-list                    Show the logical routers.
ovn-nbctl lrp-list <router>          Show the ports for the specified router, using the router information from ovn-nbctl lr-list.
ovn-nbctl lr-nat-list <router>       Show network address translation details for the specified router.
ovn-nbctl ls-list                    Show the logical switches.
ovn-nbctl lsp-list <switch>          Show the ports for the specified switch, using the switch information from ovn-nbctl ls-list.
ovn-nbctl lsp-get-type <port>        Get the type for the specified logical port.
ovn-nbctl lb-list                    Show the load balancers.
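
Any of these arguments can be passed through oc exec to the northd container on the Raft leader pod. The following sketch reuses the leader pod, router, and switch names from the earlier example output; they are placeholders for the names in your cluster:

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
      -c northd -- ovn-nbctl lrp-list GR_ip-10-0-145-205.ec2.internal

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
      -c northd -- ovn-nbctl lr-nat-list GR_ip-10-0-145-205.ec2.internal

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
      -c northd -- ovn-nbctl lsp-list ip-10-0-145-205.ec2.internal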

Listing the OVN-Kubernetes southbound database contents

Logical flow rules are stored in the southbound database, which is a representation of your infrastructure. The up-to-date information is present on the OVN Raft leader, and this procedure describes how to find the Raft leader and query it to list the OVN southbound database contents.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • The OpenShift CLI (oc) installed.

Procedure

  1. Find the OVN Raft leader for the southbound database.

    The Raft leader stores the most up-to-date information.

    1. List the pods by running the following command:

      $ oc get po -n openshift-ovn-kubernetes

      Example output

      NAME READY STATUS RESTARTS AGE
      ovnkube-master-7j97q 6/6 Running 2 (134m ago) 135m
      ovnkube-master-gt4ms 6/6 Running 1 (126m ago) 133m
      ovnkube-master-mk6p6 6/6 Running 0 134m
      ovnkube-node-8qvtr 5/5 Running 0 135m
      ovnkube-node-bqztb 5/5 Running 0 117m
      ovnkube-node-fqdc9 5/5 Running 0 135m
      ovnkube-node-tlfwv 5/5 Running 0 135m
      ovnkube-node-wlwkn 5/5 Running 0 128m
    2. Choose one of the master pods at random and run the following command to find the OVN southbound Raft leader:

      $ oc exec -n openshift-ovn-kubernetes ovnkube-master-7j97q \
        -- /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl \
        --timeout=3 cluster/status OVN_Southbound

      Example output

      Defaulted container "northd" out of: northd, nbdb, kube-rbac-proxy, sbdb, ovnkube-master, ovn-dbchecker
      1930
      Name: OVN_Southbound
      Cluster ID: f772 (f77273c0-7986-42dd-bd3c-a9f18e25701f)
      Server ID: 1930 (1930f4b7-314b-406f-9dcb-b81fe2729ae1)
      Address: ssl:10.0.147.219:9644
      Status: cluster member
      Role: follower (1)
      Term: 3
      Leader: 7081 (2)
      Vote: unknown
      Election timer: 16000
      Log: [2, 2423]
      Entries not yet committed: 0
      Entries not yet applied: 0
      Connections: ->0000 ->7145 <-7081 <-7145
      Disconnections: 0
      Servers:
      7081 (7081 at ssl:10.0.163.212:9644) last msg 59 ms ago (3)
      1930 (1930 at ssl:10.0.147.219:9644) (self)
      7145 (7145 at ssl:10.0.242.240:9644) last msg 7871735 ms ago
      (1) This pod is identified as a follower.
      (2) The leader is identified as 7081.
      (3) 7081 is on IP address 10.0.163.212.
    3. Find the ovnkube-master pod running on IP address 10.0.163.212 by running the following command:

      $ oc get po -o wide -n openshift-ovn-kubernetes | grep 10.0.163.212 | grep -v ovnkube-node

      Example output

      ovnkube-master-mk6p6 6/6 Running 0 136m 10.0.163.212 ip-10-0-163-212.ec2.internal <none> <none>

      The ovnkube-master-mk6p6 pod runs on IP address 10.0.163.212.

  2. Run the following command to show all the information stored in the southbound database:

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 \
      -c northd -- ovn-sbctl show

    Example output

    Chassis "8ca57b28-9834-45f0-99b0-96486c22e1be"
        hostname: ip-10-0-156-16.ec2.internal
        Encap geneve
            ip: "10.0.156.16"
            options: {csum="true"}
        Port_Binding k8s-ip-10-0-156-16.ec2.internal
        Port_Binding etor-GR_ip-10-0-156-16.ec2.internal
        Port_Binding jtor-GR_ip-10-0-156-16.ec2.internal
        Port_Binding openshift-ingress-canary_ingress-canary-hsblx
        Port_Binding rtoj-GR_ip-10-0-156-16.ec2.internal
        Port_Binding openshift-monitoring_prometheus-adapter-658fc5967-9l46x
        Port_Binding rtoe-GR_ip-10-0-156-16.ec2.internal
        Port_Binding openshift-multus_network-metrics-daemon-77nvz
        Port_Binding openshift-ingress_router-default-64fd8c67c7-df598
        Port_Binding openshift-dns_dns-default-ttpcq
        Port_Binding openshift-monitoring_alertmanager-main-0
        Port_Binding openshift-e2e-loki_loki-promtail-g2pbh
        Port_Binding openshift-network-diagnostics_network-check-target-m6tn4
        Port_Binding openshift-monitoring_thanos-querier-75b5cf8dcb-qf8qj
        Port_Binding cr-rtos-ip-10-0-156-16.ec2.internal
        Port_Binding openshift-image-registry_image-registry-7b7bc44566-mp9b8

    This detailed output shows the chassis and the ports that are attached to the chassis, which in this case are all of the router ports and anything that runs as host networking. Pods communicate out to the wider network by using source network address translation (SNAT): a pod’s IP address is translated into the IP address of the node that the pod is running on, and the traffic is then sent out into the network.

    In addition to the chassis information, the southbound database holds all the logical flows, and those logical flows are then sent to the ovn-controller running on each of the nodes. The ovn-controller translates the logical flows into OpenFlow rules and ultimately programs Open vSwitch so that your pods can follow those OpenFlow rules and make it out onto the network. A sketch that samples the logical flows follows this procedure.

    Run the following command to display the options available with the command ovn-sbctl:

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 \
      -c northd -- ovn-sbctl --help
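
To see the logical flows that the ovn-controller consumes, you can dump them from the same leader pod. The full output is very large, so the following sketch, which reuses the pod name from the example above, samples only the beginning of it:

    $ oc exec -n openshift-ovn-kubernetes ovnkube-master-mk6p6 \
      -c northd -- ovn-sbctl dump-flows | head -20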

Command line arguments for ovn-sbctl to examine southbound database contents

The following table describes the command line arguments that can be used with ovn-sbctl to examine the contents of the southbound database.

Table 2. Command line arguments to examine southbound database contents
Argument                              Description
ovn-sbctl show                        Overview of the southbound database contents.
ovn-sbctl list Port_Binding <port>    List the contents of the southbound database for the specified port.
ovn-sbctl dump-flows                  List the logical flows.
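
For example, to inspect the binding for one of the pod ports shown in the earlier ovn-sbctl show output, a sketch might look like the following. The leader pod and the logical port name are taken from the example output and will differ in your cluster:

    $ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 \
      -c northd -- ovn-sbctl list Port_Binding openshift-dns_dns-default-ttpcq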

OVN-Kubernetes logical architecture

OVN is a network virtualization solution. It creates logical switches and routers. These switches and routers are interconnected to create any network topology. When you run ovnkube-trace with the log level set to 2 or 5, the OVN-Kubernetes logical components are exposed. The following diagram shows how the routers and switches are connected in OKD.


Figure 2. OVN-Kubernetes router and switch components

The key components involved in packet processing are:

Gateway routers

Gateway routers, sometimes called L3 gateway routers, are typically used between the distributed routers and the physical network. Gateway routers, including their logical patch ports, are bound to a physical location (not distributed), or chassis. The patch ports on this router are known as l3gateway ports in the ovn-southbound database (ovn-sbdb).

Distributed logical routers

Distributed logical routers and the logical switches behind them, to which virtual machines and containers attach, effectively reside on each hypervisor.

Join local switch

Join local switches are used to connect the distributed router and gateway routers. A join switch reduces the number of IP addresses needed on the distributed router.

Logical switches with patch ports

Logical switches with patch ports are used to virtualize the network stack. They connect remote logical ports through tunnels.

Logical switches with localnet ports

Logical switches with localnet ports are used to connect OVN to the physical network. They connect remote logical ports by bridging the packets to directly connected physical L2 segments using localnet ports.

Patch ports

Patch ports represent connectivity between logical switches and logical routers and between peer logical routers. A single connection has a pair of patch ports at each such point of connectivity, one on each side.

l3gateway ports

l3gateway ports are the port binding entries in the ovn-sbdb for logical patch ports used in the gateway routers. They are called l3gateway ports rather than patch ports to convey that these ports are bound to a chassis, just like the gateway router itself.

localnet ports

localnet ports are present on the bridged logical switches that allow a connection to a locally accessible network from each ovn-controller instance. This helps model the direct connectivity to the physical network from the logical switches. A logical switch can have only a single localnet port attached to it.

Installing network-tools on local host

Install network-tools on your local host to make a collection of tools available for debugging OKD cluster network issues.

Procedure

  1. Clone the network-tools repository onto your workstation with the following command:

    $ git clone git@github.com:openshift/network-tools.git
  2. Change into the directory for the repository you just cloned:

    $ cd network-tools
  3. Optional: List all available commands:

    $ ./debug-scripts/network-tools -h
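
After installation, and while logged in to a cluster as a user with cluster-admin privileges, you can run OVN database commands directly through the tool. As a minimal sketch, the following runs the same ovn-nbctl show overview used earlier in this section against the Raft leader:

    $ ./debug-scripts/network-tools ovn-db-run-command ovn-nbctl show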

Running network-tools

Get information about the logical switches and routers by running network-tools.

Prerequisites

  • You installed the OpenShift CLI (oc).

  • You are logged in to the cluster as a user with cluster-admin privileges.

  • You have installed network-tools on local host.

Procedure

  1. List the routers by running the following command:

    $ ./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-list

    Example output

    Leader pod is ovnkube-master-vslqm
    5351ddd1-f181-4e77-afc6-b48b0a9df953 (GR_helix13.lab.eng.tlv2.redhat.com)
    ccf9349e-1948-4df8-954e-39fb0c2d4d06 (GR_helix14.lab.eng.tlv2.redhat.com)
    e426b918-75a8-4220-9e76-20b7758f92b7 (GR_hlxcl7-master-0.hlxcl7.lab.eng.tlv2.redhat.com)
    dded77c8-0cc3-4b99-8420-56cd2ae6a840 (GR_hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com)
    4f6747e6-e7ba-4e0c-8dcd-94c8efa51798 (GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com)
    52232654-336e-4952-98b9-0b8601e370b4 (ovn_cluster_router)
  2. List the localnet ports by running the following command:

    $ ./debug-scripts/network-tools ovn-db-run-command \
      ovn-sbctl find Port_Binding type=localnet

    Example output

    Leader pod is ovnkube-master-vslqm
    _uuid : 3de79191-cca8-4c28-be5a-a228f0f9ebfc
    additional_chassis : []
    additional_encap : []
    chassis : []
    datapath : 3f1a4928-7ff5-471f-9092-fe5f5c67d15c
    encap : []
    external_ids : {}
    gateway_chassis : []
    ha_chassis_group : []
    logical_port : br-ex_helix13.lab.eng.tlv2.redhat.com
    mac : [unknown]
    nat_addresses : []
    options : {network_name=physnet}
    parent_port : []
    port_security : []
    requested_additional_chassis: []
    requested_chassis : []
    tag : []
    tunnel_key : 2
    type : localnet
    up : false
    virtual_parent : []

    _uuid : dbe21daf-9594-4849-b8f0-5efbfa09a455
    additional_chassis : []
    additional_encap : []
    chassis : []
    datapath : db2a6067-fe7c-4d11-95a7-ff2321329e11
    encap : []
    external_ids : {}
    gateway_chassis : []
    ha_chassis_group : []
    logical_port : br-ex_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com
    mac : [unknown]
    nat_addresses : []
    options : {network_name=physnet}
    parent_port : []
    port_security : []
    requested_additional_chassis: []
    requested_chassis : []
    tag : []
    tunnel_key : 2
    type : localnet
    up : false
    virtual_parent : []

    [...]
  3. List the l3gateway ports by running the following command:

    $ ./debug-scripts/network-tools ovn-db-run-command \
      ovn-sbctl find Port_Binding type=l3gateway

    Example output

    Leader pod is ovnkube-master-vslqm
    _uuid : 9314dc80-39e1-4af7-9cc0-ae8a9708ed59
    additional_chassis : []
    additional_encap : []
    chassis : 336a923d-99e8-4e71-89a6-12564fde5760
    datapath : db2a6067-fe7c-4d11-95a7-ff2321329e11
    encap : []
    external_ids : {}
    gateway_chassis : []
    ha_chassis_group : []
    logical_port : etor-GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com
    mac : ["52:54:00:3e:95:d3"]
    nat_addresses : ["52:54:00:3e:95:d3 10.46.56.77"]
    options : {l3gateway-chassis="7eb1f1c3-87c2-4f68-8e89-60f5ca810971", peer=rtoe-GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com}
    parent_port : []
    port_security : []
    requested_additional_chassis: []
    requested_chassis : []
    tag : []
    tunnel_key : 1
    type : l3gateway
    up : true
    virtual_parent : []

    _uuid : ad7eb303-b411-4e9f-8d36-d07f1f268e27
    additional_chassis : []
    additional_encap : []
    chassis : f41453b8-29c5-4f39-b86b-e82cf344bce4
    datapath : 082e7a60-d9c7-464b-b6ec-117d3426645a
    encap : []
    external_ids : {}
    gateway_chassis : []
    ha_chassis_group : []
    logical_port : etor-GR_helix14.lab.eng.tlv2.redhat.com
    mac : ["34:48:ed:f3:e2:2c"]
    nat_addresses : ["34:48:ed:f3:e2:2c 10.46.56.14"]
    options : {l3gateway-chassis="2e8abe3a-cb94-4593-9037-f5f9596325e2", peer=rtoe-GR_helix14.lab.eng.tlv2.redhat.com}
    parent_port : []
    port_security : []
    requested_additional_chassis: []
    requested_chassis : []
    tag : []
    tunnel_key : 1
    type : l3gateway
    up : true
    virtual_parent : []

    [...]
  4. List the patch ports by running the following command:

    $ ./debug-scripts/network-tools ovn-db-run-command \
      ovn-sbctl find Port_Binding type=patch

    Example output

    Leader pod is ovnkube-master-vslqm
    _uuid : c48b1380-ff26-4965-a644-6bd5b5946c61
    additional_chassis : []
    additional_encap : []
    chassis : []
    datapath : 72734d65-fae1-4bd9-a1ee-1bf4e085a060
    encap : []
    external_ids : {}
    gateway_chassis : []
    ha_chassis_group : []
    logical_port : jtor-ovn_cluster_router
    mac : [router]
    nat_addresses : []
    options : {peer=rtoj-ovn_cluster_router}
    parent_port : []
    port_security : []
    requested_additional_chassis: []
    requested_chassis : []
    tag : []
    tunnel_key : 4
    type : patch
    up : false
    virtual_parent : []

    _uuid : 5df51302-f3cd-415b-a059-ac24389938f7
    additional_chassis : []
    additional_encap : []
    chassis : []
    datapath : 0551c90f-e891-4909-8e9e-acc7909e06d0
    encap : []
    external_ids : {}
    gateway_chassis : []
    ha_chassis_group : []
    logical_port : rtos-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com
    mac : ["0a:58:0a:82:00:01 10.130.0.1/23"]
    nat_addresses : []
    options : {chassis-redirect-port=cr-rtos-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com, peer=stor-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com}
    parent_port : []
    port_security : []
    requested_additional_chassis: []
    requested_chassis : []
    tag : []
    tunnel_key : 4
    type : patch
    up : false
    virtual_parent : []

    [...]
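
You can use the same tool to look up the binding for a single logical port, which tells you which chassis (node) currently hosts a given pod port. Pod ports are named <namespace>_<pod-name>; the port name below is taken from the earlier southbound database output in this section and is only an example, so substitute a pod port from your own cluster:

    $ ./debug-scripts/network-tools ovn-db-run-command \
      ovn-sbctl find Port_Binding logical_port=openshift-dns_dns-default-ttpcq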

Additional resources