Gathering data about your cluster

You can use the following tools to get debugging information about your OKD cluster.

About the must-gather tool

The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:

  • Resource definitions

  • Service logs

By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.
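For example, the following command collects the default data set and requires no additional arguments:

  1. $ oc adm must-gather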

Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:

  • To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section.

    For example:

    1. $ oc adm must-gather \
    2. --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.0
  • To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section.

    For example:

    1. $ oc adm must-gather -- /usr/bin/gather_audit_logs

    Audit logs are not collected as part of the default set of information to reduce the size of the files.

When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory.

For example:

  1. NAMESPACE NAME READY STATUS RESTARTS AGE
  2. ...
  3. openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s
  4. openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s
  5. ...
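
When the command finishes, you can locate the output directory in your working directory. For example, on a Linux or macOS shell (a minimal check; the exact directory suffix varies per run):

  1. $ ls -d must-gather.local.*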

Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option.

For example:

  1. $ oc adm must-gather --run-namespace <namespace> \
  2. --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.0

Gathering data about specific features

You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.

Table 1. Available must-gather images

  Image                                                     Purpose

  quay.io/kubevirt/must-gather                              Data collection for KubeVirt.
  quay.io/openshift-knative/must-gather                     Data collection for Knative.
  docker.io/maistra/istio-must-gather                       Data collection for service mesh.
  quay.io/konveyor/must-gather                              Data collection for migration-related information.
  quay.io/ocs-dev/ocs-must-gather                           Data collection for OpenShift Data Foundation.
  quay.io/openshift/origin-cluster-logging-operator         Data collection for OpenShift Logging.
  quay.io/openshift/origin-local-storage-mustgather         Data collection for Local Storage Operator.
  quay.io/openshift/origin-secrets-store-csi-mustgather     Data collection for the Secrets Store CSI Driver Operator.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • The OKD CLI (oc) is installed.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.

  2. Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt:

    1. $ oc adm must-gather \
    2. --image-stream=openshift/must-gather \ (1)
    3. --image=quay.io/kubevirt/must-gather (2)
    1The default OKD must-gather image
    2The must-gather image for KubeVirt

Gathering network logs

You can gather network logs on all nodes in a cluster.

Procedure

  1. Run the oc adm must-gather command with -- gather_network_logs:

    1. $ oc adm must-gather -- gather_network_logs

By default, the must-gather tool collects the OVN nbdb and sbdb databases from all of the nodes in the cluster. Add the -- gather_network_logs option to include additional logs that contain OVN-Kubernetes transactions for the OVN nbdb database.

  2. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    1. $ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 (1)
    1Replace must-gather.local.472290403699006248 with the actual directory name.
  3. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

Querying bootstrap node journal logs

If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.

Prerequisites

  • You have SSH access to your bootstrap node.

  • You have the fully qualified domain name of the bootstrap node.

Procedure

  1. Query bootkube.service journald unit logs from a bootstrap node during OKD installation. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    1. $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

    The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

  2. Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    1. $ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
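
    If you want to retain these logs, you can redirect the output to a local file. For example (the file name here is only illustrative):

    1. $ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done' > bootstrap_container_logs.txt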

Querying cluster node journal logs

You can gather journald unit logs and other logs within /var/log on individual cluster nodes.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • Your API service is still functional.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

Procedure

  1. Query kubelet journald unit logs from OKD cluster nodes. The following example queries control plane nodes only:

    1. $ oc adm node-logs --role=master -u kubelet (1)
    1Replace kubelet as appropriate to query other unit logs.
  2. Collect logs from specific subdirectories under /var/log/ on cluster nodes.

    1. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:

      1. $ oc adm node-logs --role=master --path=openshift-apiserver
    2. Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:

      1. $ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
    3. If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:

      1. $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log

      OKD 4.14 cluster nodes running Fedora CoreOS (FCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OKD API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

Collecting a host network trace

Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time.

You can use a combination of the oc adm must-gather command and the quay.io/openshift/origin-network-tools:latest container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues.

The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine.

The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command. However, you can run any command in the container image that is specified in the --image argument to gather troubleshooting information from multiple nodes at the same time.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Run a packet capture from the host network on some nodes by running the following command:

    1. $ oc adm must-gather \
    2. --dest-dir /tmp/captures \ (1)
    3. --source-dir '/tmp/tcpdump/' \ (2)
    4. --image quay.io/openshift/origin-network-tools:latest \ (3)
    5. --node-selector 'node-role.kubernetes.io/worker' \ (4)
    6. --host-network=true \ (5)
    7. --timeout 30s \ (6)
    8. -- \
    9. tcpdump -i any \ (7)
    10. -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300
    1The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory.
    2When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod.
    3The --image argument specifies a container image that includes the tcpdump command.
    4The --node-selector argument and example value specify to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes.
    5The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node.
    6The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes.
    7The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name.
  2. Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets.

  3. Review the packet capture files that oc adm must-gather transferred from the pods to your client machine:

    1. tmp/captures
    2. ├── event-filter.html
    3. ├── ip-10-0-192-217-ec2-internal (1)
    4. └── quay.io/openshift/origin-network-tools:latest...
    5. └── 2022-01-13T19:31:31.pcap
    6. ├── ip-10-0-201-178-ec2-internal (1)
    7. └── quay.io/openshift/origin-network-tools:latest...
    8. └── 2022-01-13T19:31:30.pcap
    9. ├── ip-...
    10. └── timestamp
    1The packet captures are stored in directories that identify the hostname, container, and file name. If you did not specify the --node-selector argument, then the directory level for the hostname is not present.

About toolbox

toolbox is a tool that starts a container on a Fedora CoreOS (FCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run your favorite debugging or admin tools.
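
For example, a typical flow is to open a debug session on a node, change the root directory to the host file system, and then start the toolbox container, as described in the following procedures:

  1. $ oc debug node/<node_name>
  2. # chroot /host
  3. # toolbox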

Installing packages to a toolbox container

By default, running the toolbox command starts a container with the quay.io/fedora/fedora:36 image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages.

Prerequisites

  • You have accessed a node with the oc debug node/<node_name> command.

Procedure

  1. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    1. # chroot /host
  2. Start the toolbox container:

    1. # toolbox
  3. Install the additional package, such as wget:

    1. # dnf install -y <package_name>
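
    The following example installs wget:

    1. # dnf install -y wget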

Starting an alternative image with toolbox

By default, running the toolbox command starts a container with the quay.io/fedora/fedora:36 image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run.

Prerequisites

  • You have accessed a node with the oc debug node/<node_name> command.

Procedure

  1. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    1. # chroot /host
  2. Create a .toolboxrc file in the home directory for the root user ID:

    1. # vi ~/.toolboxrc
    1. REGISTRY=quay.io (1)
    2. IMAGE=fedora/fedora:33-x86_64 (2)
    3. TOOLBOX_NAME=toolbox-fedora-33 (3)
    1Optional: Specify an alternative container registry.
    2Specify an alternative image to start.
    3Optional: Specify an alternative name for the toolbox container.
  3. Start a toolbox container with the alternative image:

    1. # toolbox

    If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container to avoid issues with sosreport plugins.