Troubleshooting OVN-Kubernetes

OVN-Kubernetes has many sources of built-in health checks and logs.

Monitoring OVN-Kubernetes health by using readiness probes

The ovnkube-master and ovnkube-node pods have containers configured with readiness probes.

Prerequisites

  • Access to the OpenShift CLI (oc).

  • You have access to the cluster with cluster-admin privileges.

  • You have installed jq.

Procedure

  1. Review the details of the ovnkube-master readiness probe by running the following command:

    $ oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master \
    -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'

    The readiness probe for the northbound and southbound database containers in the ovnkube-master pod checks the health of the Raft cluster that hosts the databases. You can also query the Raft cluster status directly, as shown in the first sketch after this procedure.

  2. Review the details of the ovnkube-node readiness probe by running the following command:

    $ oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node \
    -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'

    The ovnkube-node container in the ovnkube-node pod has a readiness probe that verifies the presence of the ovn-kubernetes CNI configuration file. If the file is absent, the pod is not running or is not ready to accept requests to configure pods. You can check for the file manually, as shown in the second sketch after this procedure.

  3. Show all events, including probe failures, for the namespace by using the following command:

    $ oc get events -n openshift-ovn-kubernetes
  4. Show the events for a specific pod by running the following command:

    $ oc describe pod ovnkube-master-tp2z8 -n openshift-ovn-kubernetes
  5. Show the messages and statuses from the Cluster Network Operator by running the following command:

    $ oc get co/network -o json | jq '.status.conditions[]'
  6. Show the ready status of each container in ovnkube-master pods by running the following script:

    $ for p in $(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes \
    -o jsonpath='{range .items[*]}{" "}{.metadata.name}{end}'); do echo === $p ===; \
    oc get pods -n openshift-ovn-kubernetes $p -o json | jq '.status.containerStatuses[] | .name, .ready'; \
    done

    All container statuses should report true. Failure of a readiness probe sets the status to false.
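
The Raft cluster health that the database readiness probes evaluate can also be queried directly with ovn-appctl. The following is a minimal sketch, assuming the container names and control-socket paths of a typical OVN-Kubernetes deployment; replace <ovnkube_master_pod> with one of your ovnkube-master pod names:

    $ oc exec -n openshift-ovn-kubernetes <ovnkube_master_pod> -c nbdb -- \
    ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound

The output lists the cluster members and their Raft roles. For the southbound database, use the sbdb container, the /var/run/ovn/ovnsb_db.ctl socket, and OVN_Southbound.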
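
Similarly, you can check by hand for the CNI configuration file whose presence the ovnkube-node readiness probe verifies. This sketch uses a debug pod on the node; the exact file name, typically 10-ovn-kubernetes.conf, can vary by version:

    $ oc debug node/<node_name> -- chroot /host ls -l /etc/cni/net.d/

If OVN-Kubernetes is ready on the node, the listing includes the ovn-kubernetes CNI configuration file.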

Viewing OVN-Kubernetes alerts in the console

The Alerting UI provides detailed information about alerts and their governing alerting rules and silences.

Prerequisites

  • You have access to the cluster as a developer or as a user with view permissions for the project whose metrics you want to view.

Procedure

  1. In the Administrator perspective, select Observe → Alerting. The three main pages in the Alerting UI in this perspective are the Alerts, Silences, and Alerting Rules pages.

  2. View the rules for OVN-Kubernetes alerts by selecting Observe → Alerting → Alerting Rules.

Viewing OVN-Kubernetes alerts in the CLI

You can get information about alerts and their governing alerting rules and silences from the command line.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • The OpenShift CLI (oc) installed.

  • You have installed jq.

Procedure

  1. View active or firing alerts by running the following commands.

    1. Set the alert manager route environment variable by running the following command:

      $ ALERT_MANAGER=$(oc get route alertmanager-main -n openshift-monitoring \
      -o jsonpath='{@.spec.host}')
    2. Issue a curl request to the alert manager route API, supplying the correct authorization details and requesting specific fields, by running the following command:

      $ curl -s -k -H "Authorization: Bearer \
      $(oc create token prometheus-k8s -n openshift-monitoring)" \
      https://$ALERT_MANAGER/api/v1/alerts \
      | jq '.data[] | "\(.labels.severity) \(.labels.alertname) \(.labels.pod) \(.labels.container) \(.labels.endpoint) \(.labels.instance)"'
  2. View alerting rules by running the following command:

    $ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- \
    curl -s 'http://localhost:9090/api/v1/rules' \
    | jq '.data.groups[].rules[] | select(((.name|contains("ovn")) or (.name|contains("OVN")) or (.name|contains("Ovn")) or (.name|contains("North")) or (.name|contains("South"))) and .type=="alerting")'
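
If you only need the names of the matching rules, a shorter variant of the same query works. This is a sketch; the case-insensitive grep pattern is illustrative and runs on the client side:

    $ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- \
    curl -s 'http://localhost:9090/api/v1/rules' \
    | jq -r '.data.groups[].rules[] | select(.type=="alerting") | .name' | grep -iE 'ovn|north|south'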

Viewing the OVN-Kubernetes logs using the CLI

You can view the logs for each of the containers in the ovnkube-master and ovnkube-node pods by using the OpenShift CLI (oc).

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • Access to the OpenShift CLI (oc).

  • You have installed jq.

Procedure

  1. View the log for a specific pod:

    $ oc logs -f <pod_name> -c <container_name> -n <namespace>

    where:

    -f

    Optional: Specifies that the output follows what is being written into the logs.

    <pod_name>

    Specifies the name of the pod.

    <container_name>

    Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name.

    <namespace>

    Specifies the namespace that the pod is running in.

    For example:

    $ oc logs ovnkube-master-7h4q7 -n openshift-ovn-kubernetes

    $ oc logs -f ovnkube-master-7h4q7 -n openshift-ovn-kubernetes -c ovn-dbchecker

    The contents of log files are printed out.

  2. Examine the most recent entries in all the containers in the ovnkube-master pods:

    $ for p in $(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes \
    -o jsonpath='{range .items[*]}{" "}{.metadata.name}{end}'); \
    do echo === $p ===; for container in $(oc get pods -n openshift-ovn-kubernetes $p \
    -o json | jq -r '.status.containerStatuses[] | .name'); do echo ---$container---; \
    oc logs -c $container $p -n openshift-ovn-kubernetes --tail=5; done; done
  3. View the last 5 lines of every log in every container in the ovnkube-master pods by using the following command:

    $ oc logs -l app=ovnkube-master -n openshift-ovn-kubernetes --all-containers --tail 5
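
If a container has restarted, its current log might not contain the original failure. The --previous flag of oc logs retrieves the log of the prior container instance, when one exists. This sketch reuses the example pod name from the earlier step:

    $ oc logs --previous ovnkube-master-7h4q7 -n openshift-ovn-kubernetes -c ovnkube-master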

Viewing the OVN-Kubernetes logs using the web console

You can view the logs for each of the containers in the ovnkube-master and ovnkube-node pods in the web console.

Prerequisites

  • Access to the OpenShift CLI (oc).

Procedure

  1. In the OKD console, navigate to Workloads → Pods or navigate to the pod through the resource you want to investigate.

  2. Select the openshift-ovn-kubernetes project from the drop-down menu.

  3. Click the name of the pod you want to investigate.

  4. Click Logs. By default for the ovnkube-master pod, the logs associated with the northd container are displayed.

  5. Use the drop-down menu to select logs for each container in turn.

Changing the OVN-Kubernetes log levels

The default log level for OVN-Kubernetes is 2. To debug OVN-Kubernetes, set the log level to 5. Follow this procedure to increase the log level of OVN-Kubernetes components to help you debug an issue.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.

  • You have access to the OpenShift Container Platform web console.

Procedure

  1. Run the following command to get detailed information for all pods in the OVN-Kubernetes project:

    $ oc get po -o wide -n openshift-ovn-kubernetes

    Example output

    NAME                   READY   STATUS    RESTARTS      AGE   IP             NODE                           NOMINATED NODE   READINESS GATES
    ovnkube-master-84nc9   6/6     Running   0             50m   10.0.134.156   ip-10-0-134-156.ec2.internal   <none>           <none>
    ovnkube-master-gmlqv   6/6     Running   0             50m   10.0.209.180   ip-10-0-209-180.ec2.internal   <none>           <none>
    ovnkube-master-nhts2   6/6     Running   1 (48m ago)   50m   10.0.147.31    ip-10-0-147-31.ec2.internal    <none>           <none>
    ovnkube-node-2cbh8     5/5     Running   0             43m   10.0.217.114   ip-10-0-217-114.ec2.internal   <none>           <none>
    ovnkube-node-6fvzl     5/5     Running   0             50m   10.0.147.31    ip-10-0-147-31.ec2.internal    <none>           <none>
    ovnkube-node-f4lzz     5/5     Running   0             24m   10.0.146.76    ip-10-0-146-76.ec2.internal    <none>           <none>
    ovnkube-node-jf67d     5/5     Running   0             50m   10.0.209.180   ip-10-0-209-180.ec2.internal   <none>           <none>
    ovnkube-node-np9mf     5/5     Running   0             40m   10.0.165.191   ip-10-0-165-191.ec2.internal   <none>           <none>
    ovnkube-node-qjldg     5/5     Running   0             50m   10.0.134.156   ip-10-0-134-156.ec2.internal   <none>           <none>
  2. Create a ConfigMap file similar to the following example and use a filename such as env-overrides.yaml:

    Example ConfigMap file

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: env-overrides
      namespace: openshift-ovn-kubernetes
    data:
      ip-10-0-217-114.ec2.internal: | (1)
        # This sets the log level for the ovn-kubernetes node process:
        OVN_KUBE_LOG_LEVEL=5
        # You might also/instead want to enable debug logging for ovn-controller:
        OVN_LOG_LEVEL=dbg
      ip-10-0-209-180.ec2.internal: |
        # This sets the log level for the ovn-kubernetes node process:
        OVN_KUBE_LOG_LEVEL=5
        # You might also/instead want to enable debug logging for ovn-controller:
        OVN_LOG_LEVEL=dbg
      _master: | (2)
        # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker:
        OVN_KUBE_LOG_LEVEL=5
        # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters:
        OVN_LOG_LEVEL=dbg

    (1) Specify the name of the node you want to set the debug log level on.
    (2) Specify _master to set the log levels of the ovnkube-master components.
  3. Apply the ConfigMap file by using the following command:

    $ oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml

    Example output

    configmap/env-overrides created
  4. Restart the ovnkube pods to apply the new log level by using the following commands:

    $ oc delete pod -n openshift-ovn-kubernetes \
    --field-selector spec.nodeName=ip-10-0-217-114.ec2.internal -l app=ovnkube-node

    $ oc delete pod -n openshift-ovn-kubernetes \
    --field-selector spec.nodeName=ip-10-0-209-180.ec2.internal -l app=ovnkube-node

    $ oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-master
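
Optionally, confirm that the override is in place and check the fresh logs for the increased verbosity. This is a sketch; the ConfigMap is read at pod startup, so run these commands after the pods have restarted:

    $ oc get configmap env-overrides -n openshift-ovn-kubernetes -o yaml

    $ oc logs -l app=ovnkube-node -n openshift-ovn-kubernetes -c ovnkube-node --tail=20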

Checking the OVN-Kubernetes pod network connectivity

The connectivity check controller, in OKD 4.10 and later, orchestrates connection verification checks in your cluster. These include the Kubernetes API, the OpenShift API, and individual nodes. The results of the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel.

Prerequisites

  • Access to the OpenShift CLI (oc).

  • Access to the cluster as a user with the cluster-admin role.

  • You have installed jq.

Procedure

  1. To list the current PodNetworkConnectivityCheck objects, enter the following command (to inspect a single object in full, see the sketch after this procedure):

    $ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics
  2. View the most recent success for each connection object by using the following command:

    $ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
    -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]'
  3. View the most recent failures for each connection object by using the following command:

    $ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
    -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]'
  4. View the most recent outages for each connection object by using the following command:

    $ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
    -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]'

    The connectivity check controller also logs metrics from these checks into Prometheus.

  5. View all the metrics by running the following command:

    $ oc exec prometheus-k8s-0 -n openshift-monitoring -- \
    promtool query instant http://localhost:9090 \
    '{component="openshift-network-diagnostics"}'
  6. View the latency between the source pod and the OpenShift API service for the last 5 minutes. The latency metrics are included in the output of the same query:

    $ oc exec prometheus-k8s-0 -n openshift-monitoring -- \
    promtool query instant http://localhost:9090 \
    '{component="openshift-network-diagnostics"}'
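
To inspect a single check in full, including the conditions, successes, failures, and outages that the controller records, print the object as YAML. This is a sketch; replace <check_name> with one of the names returned by the list command in step 1:

    $ oc get podnetworkconnectivitycheck <check_name> \
    -n openshift-network-diagnostics -o yaml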
