Using the CRI-O Container Engine

CRI-O is an open source, community-driven container engine. Its primary goal is to replace the Docker service as the container engine for Kubernetes implementations, such as OKD.

This guide describes how to install CRI-O during OKD installation and how to add a CRI-O node to an existing OKD cluster. It also provides information on how to configure and troubleshoot your CRI-O engine.

Understanding CRI-O

The CRI-O container engine provides a stable, secure, and performant platform for running Open Container Initiative (OCI) compatible runtimes. You can use the CRI-O container engine to launch containers and pods by engaging OCI-compliant runtimes like runc, the default OCI runtime, or Kata Containers. CRI-O’s purpose is to be the container engine that implements the Kubernetes Container Runtime Interface (CRI) for OKD and Kubernetes, replacing the Docker service.

CRI-O offers a streamlined container engine, while other container features are implemented as a separate set of innovative, independent commands. This approach allows container management features to develop at their own pace, without impeding CRI-O’s primary goal of being a container engine for Kubernetes-based installations.

CRI-O’s stability comes from being developed, tested, and released in tandem with Kubernetes major and minor releases, and from following OCI standards. For example, CRI-O 1.11 aligns with Kubernetes 1.11. The scope of CRI-O is tied to the Container Runtime Interface (CRI). CRI extracted and standardized exactly what a Kubernetes service (kubelet) needed from its container engine. The CRI team did this to help stabilize Kubernetes container engine requirements as multiple container engines began to be developed.

There is little need for direct command-line contact with CRI-O. However, a set of container-related command-line tools is available to provide full access to CRI-O for testing and monitoring, and to provide features you expect with Docker that CRI-O does not offer. These tools replace and extend what is available with the docker command and service. Tools include:

  • crictl - For troubleshooting and working directly with CRI-O container engines

  • runc - For running container images

  • podman - For managing pods and container images (run, stop, start, ps, attach, exec, etc.) outside of the container engine

  • buildah - For building, pushing and signing container images

  • skopeo - For copying, inspecting, deleting, and signing images

Some Docker features are included in other tools instead of in CRI-O. For example, podman offers exact command-line compatibility with many docker command features and extends those features to managing pods as well. No container engine is needed to run containers or pods with podman.

Features for building, pushing, and signing container images, which are also not required in a container engine, are available in the buildah command. For more information about these command alternatives to docker, see Finding, Running and Building Containers without Docker.
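
For example, here is a minimal sketch of running a container with podman and inspecting a remote image with skopeo, with no container engine involved. The busybox image is used purely for illustration:

  $ sudo podman run --rm docker.io/library/busybox echo "hello from podman"
  hello from podman
  $ sudo skopeo inspect docker://docker.io/library/busybox
  {
      "Name": "docker.io/library/busybox",
      ...
  }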

Getting CRI-O

CRI-O is not supported as a stand-alone container engine. You must use CRI-O as a container engine for a Kubernetes installation, such as OKD. To run containers without Kubernetes or OKD, use podman.

To set up a CRI-O container engine to use with an OKD cluster, you can:

  • Install CRI-O along with a new OKD cluster or

  • Add a node to an existing cluster and identify CRI-O as the container engine for that node. Both CRI-O and Docker nodes can exist on the same cluster.

The following section describes how to install CRI-O with a new OKD cluster.

Installing CRI-O with a new OKD cluster

You can choose CRI-O as the container engine for your OKD nodes on a per-node basis at install time. Here are a few things you should know about enabling the CRI-O container engine when you install OKD:

  • Previously, using CRI-O on your nodes required that the Docker container engine be available as well. As of OKD 3.10, the Docker container engine is no longer required in all cases, so you can have CRI-O-only nodes in your OKD cluster. However, nodes that perform build and push operations still need to have the Docker container engine installed along with CRI-O.

  • Enabling CRI-O using a CRI-O container is no longer supported. An rpm-based installation of CRI-O is required.

The following procedure assumes you are installing OKD using Ansible inventory files, such as those described in Configuring Your Inventory File.

Do not set /var/lib/docker as a separate mount point for an OKD node using CRI-O as its container engine. When deploying a CRI-O node, the installer tries to make /var/lib/docker a symbolic link to /var/lib/containers. That action will fail because it won’t be able to remove the existing /var/lib/docker to create the symbolic link.
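
For example, before deploying a CRI-O node you might confirm that /var/lib/docker is not a separate mount point. The check below is a simple sketch using findmnt:

  $ findmnt /var/lib/docker || echo "/var/lib/docker is not a separate mount point"
  /var/lib/docker is not a separate mount point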

  1. With the OKD Ansible playbooks installed, edit the appropriate inventory file to enable CRI-O.

  2. Locate the CRI-O settings in your selected inventory file. To have the CRI-O container engine installed on your nodes during OKD installation, locate the [OSEv3:vars] section of the Ansible inventory file. A section of CRI-O settings might include the following:

    [OSEv3:vars]
    ...
    # Install and run cri-o.
    #openshift_use_crio=False
    #openshift_use_crio_only=False
    # The following two variables are used when openshift_use_crio is True
    # and cleans up after builds that pass through docker. When openshift_use_crio is True
    # these variables are set to the defaults shown. You may override them here.
    # NOTE: You will still need to tag crio nodes with your given label(s)!
    # Enable docker garbage collection when using cri-o
    #openshift_crio_enable_docker_gc=True
    # Node Selectors to run the garbage collection
    #openshift_crio_docker_gc_node_selector={'runtime': 'cri-o'}
  3. Enable CRI-O settings. You can decide to enable either CRI-O alone or CRI-O alongside Docker. The following settings allow both CRI-O and Docker as your node container engines and enable Docker garbage collection on nodes with overlay2 storage:

    To be able to build containers on CRI-O nodes, you must have the Docker container engine installed. If you want CRI-O-only nodes, designate other nodes to perform container builds.

    [OSEv3:vars]
    ...
    openshift_use_crio=True
    openshift_use_crio_only=False
    openshift_crio_enable_docker_gc=True
  4. Set the openshift_node_group_name for each node to a configmap that configures the kubelet for the CRI-O runtime. There is a corresponding CRI-O configmap for each of the default node groups. Defining Node Groups and Host Mappings covers node groups and mappings in detail.

    [nodes]
    ocp-crio01 openshift_node_group_name='node-config-all-in-one-crio'
    ocp-docker01 openshift_node_group_name='node-config-all-in-one'

This will automatically install the necessary CRI-O packages.

The resulting OKD configuration runs the CRI-O container engine on the selected nodes of your OKD installation. Use the oc command to check the status of the nodes and identify the nodes running CRI-O:

  $ oc get nodes -o wide
  NAME           STATUS    ROLES                  AGE   ...   CONTAINER-RUNTIME
  ocp-crio01     Ready     compute,infra,master   16d   ...   cri-o://1.11.5
  ocp-docker01   Ready     compute,infra,master   16d   ...   docker://1.13.1

Adding CRI-O nodes to an OKD cluster

OKD does not support upgrading nodes directly from the Docker container engine to CRI-O. To move an existing OKD cluster to CRI-O, do the following:

  • Scale up a node that is configured to use the CRI-O container engine

  • Check that the CRI-O node performs as expected

  • Add more CRI-O nodes as needed

  • Scale down Docker nodes as the cluster stabilizes

To see what actions are taken when you create a node with the CRI-O container engine, refer to Upgrading to CRI-O with Ansible.
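
As a rough sketch, scaling up one CRI-O node with the openshift-ansible playbooks typically involves adding the new host to a [new_nodes] group with a CRI-O node group and then running the node scaleup playbook. The host name, node group, inventory path, and playbook path below are illustrative and assume an rpm-based openshift-ansible installation with openshift_use_crio=True set in [OSEv3:vars] and new_nodes listed under [OSEv3:children]; see Upgrading to CRI-O with Ansible for the authoritative procedure.

  # In your inventory file, add the new host under [new_nodes]:
  [new_nodes]
  ocp-crio02 openshift_node_group_name='node-config-compute-crio'
  # Then run the node scaleup playbook:
  $ ansible-playbook -i /etc/ansible/hosts \
      /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml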

If you are upgrading your entire OKD cluster to OKD 3.10 or later, and a containerized version of CRI-O is running on a node, the CRI-O container will be removed from that node and the CRI-O rpm will be installed. The CRI-O service will be run as a systemd service from then on. See BZ#1618425 for details.

Configuring CRI-O

Because CRI-O is intended to be deployed, upgraded and managed by OKD, you should only change CRI-O configuration files through OKD or for the purposes of testing or troubleshooting CRI-O. On a running OKD node, most CRI-O configuration settings are kept in the /etc/crio/crio.conf file.

Settings in a crio.conf file define how storage, the listening socket, runtime features, and networking are configured for CRI-O. Here’s an example of the default crio.conf file (look in the file itself to see comments describing these settings):

  [crio]
  root = "/var/lib/containers/storage"
  runroot = "/var/run/containers/storage"
  storage_driver = "overlay"
  storage_option = [
    "overlay.override_kernel_check=1",
  ]
  [crio.api]
  listen = "/var/run/crio/crio.sock"
  stream_address = ""
  stream_port = "10010"
  file_locking = true
  [crio.runtime]
  runtime = "/usr/bin/runc"
  runtime_untrusted_workload = ""
  default_workload_trust = "trusted"
  no_pivot = false
  conmon = "/usr/libexec/crio/conmon"
  conmon_env = [
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  ]
  selinux = true
  seccomp_profile = "/etc/crio/seccomp.json"
  apparmor_profile = "crio-default"
  cgroup_manager = "systemd"
  hooks_dir_path = "/usr/share/containers/oci/hooks.d"
  default_mounts = [
    "/usr/share/rhel/secrets:/run/secrets",
  ]
  pids_limit = 1024
  enable_shared_pid_namespace = false
  log_size_max = 52428800
  [crio.image]
  default_transport = "docker://"
  pause_image = "docker.io/openshift/origin-pod:v3.11"
  pause_command = "/usr/bin/pod"
  signature_policy = ""
  image_volumes = "mkdir"
  insecure_registries = [
    ""
  ]
  registries = [
    "docker.io"
  ]
  [crio.network]
  network_dir = "/etc/cni/net.d/"
  plugin_dir = "/opt/cni/bin"

The following sections describe how different CRI-O configurations might be used in the crio.conf file.

Configuring CRI-O storage

OverlayFS2 is the recommended (and default) storage driver for OKD, whether you use CRI-O or Docker as your container engine. See Choosing a graph driver for details on available storage drivers.

Although devicemapper is a supported storage facility for CRI-O, the CRI-O garbage collection feature does not yet work with devicemapper, so devicemapper is not recommended for production use. Also, see BZ1625394 and BZ1623944 for other devicemapper issues that apply to how both CRI-O and podman use container storage.

Things you should know about CRI-O storage include that it:

  • Holds images by storing the root filesystem of each container, along with any layers that go with it.

  • Incorporates the same storage layer that is used with the Docker service.

  • Uses container-storage-setup to manage the container storage area.

  • Uses configuration information from the /etc/containers/storage.conf and /etc/crio/crio.conf files.

  • Stores data in /var/lib/containers by default. That directory is used by both CRI-O and tools for running containers (such as podman).

Although they use the same storage directory, the container engine and the container tools manage their containers separately.

  • Can store both Docker version 1 and version 2 schemas.

For information on using container-storage-setup to configure storage for CRI-O, see Using container-storage-setup.
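
For reference, a minimal /etc/containers/storage.conf on a CRI-O node looks something like the following sketch; the values shown here are common defaults and might not match your installation:

  [storage]
  driver = "overlay"
  runroot = "/var/run/containers/storage"
  graphroot = "/var/lib/containers/storage"
  [storage.options]
  override_kernel_check = "true"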

Configuring CRI-O networking

CRI-O supports networking facilities that are compatible with the Container Network Interface (CNI). Supported networking features include loopback, flannel, and openshift-sdn, which are implemented as network plugins.

By default, OKD uses openshift-sdn networking. The following settings in the crio.conf file define where CNI network configuration files are stored (/etc/cni/net.d/) and where CNI plugin binaries are stored (/opt/cni/bin/):

  [crio.network]
  network_dir = "/etc/cni/net.d/"
  plugin_dir = "/opt/cni/bin/"
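
For example, to confirm which CNI configuration and plugin binaries a node is actually using, you can list the directories referenced above. The file and plugin names below are illustrative of an openshift-sdn node and might differ on your cluster:

  $ ls /etc/cni/net.d/
  80-openshift-network.conf
  $ cat /etc/cni/net.d/80-openshift-network.conf
  {
    "cniVersion": "0.2.0",
    "name": "openshift.1",
    "type": "openshift-sdn"
  }
  $ ls /opt/cni/bin/
  host-local  loopback  openshift-sdn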

To understand the networking features needed by CRI-O in OKD, refer to both Kubernetes and OKD networking requirements.

Troubleshooting CRI-O

To check the health of your CRI-O container engine and troubleshoot problems, you can use the crictl command, along with some well-known Linux and OKD commands. As with any OKD container engine, you can use commands such as oc and kubectl to investigate the pods in CRI-O as well.

For example, to list pods, run the following:

  $ sudo oc get pods -o wide
  NAME                       READY   STATUS    RESTARTS   AGE   IP                NODE    NOMINATED NODE
  docker-registry-1-fb2g8    1/1     Running   1          5d    10.128.0.4        hostA   <none>
  registry-console-1-vktl6   1/1     Running   0          5d    10.128.0.6        hostA   <none>
  router-1-hjfm7             1/1     Running   0          5d    192.168.122.188   hostA   <none>

To ensure that a pod is running in CRI-O, use the describe option and grep for cri-o:

  $ sudo oc describe pods registry-console-1-vktl6 | grep cri-o
  Container ID: cri-o://9a9209dc0608ce80f62bb4d7f7df61bcf8dd2abd77ef53075dee0542548238b7

To query and debug a CRI-O container runtime, run the crictl command to communicate directly with CRI-O. The CRI-O instance that crictl uses is identified in the crictl.yaml file.

  # cat /etc/crictl.yaml
  runtime-endpoint: /var/run/crio/crio.sock

By default, the crictl.yaml file causes crictl to point to the CRI-O socket on the local system. To see options available with crictl, run crictl with no arguments. To get help with a particular option, add --help. For example:

  $ sudo crictl ps --help
  NAME:
     crictl ps - List containers
  USAGE:
     crictl ps [command options] [arguments...]
  OPTIONS:
     --all, -a       Show all containers
     --id value      Filter by container id
     --label value   Filter by key=value label
     ...

Checking CRI-O’s general health

Log into a node in your OKD cluster that is running CRI-O and run the following commands to check the general health of the CRI-O container engine:

Check that the CRI-O related packages are installed. That includes the crio (CRI-O daemon and config files) and cri-tools (crictl command) packages:

  # rpm -qa | grep ^cri-
  cri-o-1.11.6-1.rhaos3.11.git2d0f8c7.el7.x86_64
  cri-tools-1.11.1-1.rhaos3.11.gitedabfb5.el7_5.x86_64

Check that the crio service is running:

  # systemctl status -l crio
  crio.service - Open Container Initiative Daemon
     Loaded: loaded (/usr/lib/systemd/system/crio.service; enabled; vendor preset: disabled)
     Active: active (running) since Tue 2018-10-16 15:15:49 UTC; 3h 30min ago
       Docs: https://github.com/kubernetes-sigs/cri-o
   Main PID: 889 (crio)
      Tasks: 14
     Memory: 2.3G
     CGroup: /system.slice/crio.service
             └─889 /usr/bin/crio
  Oct 16 15:15:48 hostA systemd[1]: Starting Open Container Initiative Daemon...
  Oct 16 15:15:49 hostA systemd[1]: Started Open Container Initiative Daemon.
  Oct 16 18:30:55 hostA crio[889]: time="2018-10-16 18:30:55.128074704Z" level=error

Inspecting CRI-O logs

Because the CRI-O container engine is implemented as a systemd service, you can use the standard journalctl command to inspect log messages for CRI-O.

Checking crio and origin-node logs

To check the journal for information from the crio service, use the -u option. In this example, you can see that the service is running, but a pod failed to start:

  $ sudo journalctl -u crio
  -- Logs begin at Tue 2018-10-16 15:01:31 UTC, end at Tue 2018-10-16 19:10:52 UTC. --
  Oct 16 15:05:42 hostA systemd[1]: Starting Open Container Initiative Daemon...
  Oct 16 15:05:42 hostA systemd[1]: Started Open Container Initiative Daemon.
  Oct 16 15:06:35 hostA systemd[1]: Stopping Open Container Initiative Daemon...
  Oct 16 15:06:35 hostA crio[4863]: time="2018-10-16 15:06:35.018523314Z" level=error msg="Failed to start streaming server: http: Server closed"
  Oct 16 15:06:35 hostA systemd[1]: Starting Open Container Initiative Daemon...
  Oct 16 15:06:35 hostA systemd[1]: Started Open Container Initiative Daemon.
  Oct 16 15:10:27 hostA crio[6874]: time="2018-10-16 15:10:26.900411457Z" level=error msg="Failed to start streaming server: http: Server closed"
  Oct 16 15:10:26 hostA systemd[1]: Stopping Open Container Initiative Daemon...
  Oct 16 15:10:27 hostA systemd[1]: Stopped Open Container Initiative Daemon.
  -- Reboot --
  Oct 16 15:15:48 hostA systemd[1]: Starting Open Container Initiative Daemon...
  Oct 16 15:15:49 hostA systemd[1]: Started Open Container Initiative Daemon.
  Oct 16 18:30:55 hostA crio[889]: time="2018-10-16 18:30:55.128074704Z" level=error msg="Error adding network: CNI request failed with status 400: 'pods "

You can also check the origin-node service for CRI-O related messages. For example:

  $ sudo journalctl -u origin-node | grep -i cri-o
  Oct 16 15:26:30 hostA origin-node[10624]: I1016 15:26:30.120889 10624
    kuberuntime_manager.go:186] Container runtime cri-o initialized,
    version: 1.11.6, apiVersion: v1alpha1
  Oct 16 15:26:30 hostA origin-node[10624]: I1016 15:26:30.177213 10624
    factory.go:157] Registering CRI-O factory
  Oct 16 15:27:27 hostA origin-node[11107]: I1016 15:27:27.449197 11107
    kuberuntime_manager.go:186] Container runtime cri-o initialized,
    version: 1.11.6, apiVersion: v1alpha1
  Oct 16 15:27:27 hostA origin-node[11107]: I1016 15:27:27.507030 11107
    factory.go:157] Registering CRI-O factory
  Oct 16 19:27:56 hostA origin-node[8326]: I1016 19:27:56.224770 8326
    kuberuntime_manager.go:186] Container runtime cri-o initialized,
    version: 1.11.6, apiVersion: v1alpha1
  Oct 16 19:27:56 hostA origin-node[8326]: I1016 19:27:56.282138 8326
    factory.go:157] Registering CRI-O factory
  Oct 16 19:27:57 hostA origin-node[8326]: I1016 19:27:57.783304 8326
    status_manager.go:375] Status Manager: adding pod:
    "db1f45e3-d157-11e8-8645-42010a8e0002", with status: ('\x01', {Running ...
    docker.io/openshift/origin-node:v3.11 docker.io/openshift/origin-node@sha256:6f9b0fbdd...
    cri-o://c94cc6
    2c27d021d61e8b7c1a82703d51db5847e74f5e57c667432f90c07013e4}] Burstable}) to
    podStatusChannel

To further investigate what is happening with one of the pods listed (such as the last one, shown as cri-o://c94cc6), use the crictl logs command:

  $ sudo crictl logs c94cc6
  /etc/openvswitch/conf.db does not exist ... (warning).
  Creating empty database /etc/openvswitch/conf.db [ OK ]
  Starting ovsdb-server [ OK ]
  Configuring Open vSwitch system IDs [ OK ]
  Inserting openvswitch module [ OK ]
  Starting ovs-vswitchd [ OK ]
  Enabling remote OVSDB managers [ OK ]

Turning on debugging for CRI-O

To get more details from the logging facility for CRI-O, you can temporarily set the log level to debug as follows:

  1. Edit the /usr/lib/systemd/system/crio.service file and add --log-level=debug to the ExecStart= line so it appears as follows:

    ExecStart=/usr/bin/crio --log-level=debug \
              $CRIO_STORAGE_OPTIONS \
              $CRIO_NETWORK_OPTIONS
  2. Reload the configuration file and restart the service as follows:

    # systemctl daemon-reload
    # systemctl restart crio
  3. Run the journalctl command again. You should begin to see lots of debug messages, representing the processing going on with your CRI-O service:

    # journalctl -u crio
    Oct 18 08:41:31 mynode01-crio crio[21998]:
    time="2018-10-18 08:41:31.839702058-04:00" level=debug
    msg="ListContainersRequest &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:
    ,LabelSelector:map[string]string{},},}"
    Oct 18 08:41:31 mynode01-crio crio[21998]: time="2018-10-18
    08:41:31.839928476-04:00" level=debug msg="no filters were applied,
    returning full container list"
    Oct 18 08:41:31 mynode01-crio crio[21998]: time="2018-10-18 08:41:31.841814536-04:00"
    level=debug msg="ListContainersResponse: &ListContainersResponse{Containers:
    [&Container{Id:e1934cc46696ff821bc35154f281764e80ac1122563ffd95aa92d01477225603,
    PodSandboxId:d904d45e6e46110a044758f20047805d8832b6859e10dc903c104cf757894e8d,
    Metadata:&ContainerMetadata{Name:c,Attempt:0,},Image:&ImageSpec{
    Image:e72de76ca8d5410497ae3171b6b059e7c7d11e4d1f3225df8d05812f29e205b7,},
    ImageRef:docker.io/openshift/origin-template-service-broker@sha256:fd539 ...
  4. Remove the --log-level=debug option when you are done investigating, to reduce the number of messages generated. Then rerun the two systemctl commands:

    # systemctl daemon-reload
    # systemctl restart crio

Troubleshooting CRI-O pods and containers

With the crictl command, you interface directly with the CRI-O container engine to check on and manipulate the containers, images, and pods associated with that container engine. The runc container runtime is another way to interact with CRI-O. If you want to run containers outside of the CRI-O container engine, for example to run support-tools on a node, you can use the podman command.

See Crictl vs. Podman for descriptions of those two commands and how they differ.

To begin, you can check the general status of the CRI-O service using the crictl info and crictl version commands:

  $ sudo crictl info
  {
    "status": {
      "conditions": [
        {
          "type": "RuntimeReady",
          "status": true,
          "reason": "",
          "message": ""
        },
        {
          "type": "NetworkReady",
          "status": true,
          "reason": "",
          "message": ""
        }
      ]
    }
  }
  $ sudo crictl version
  Version:  0.1.0
  RuntimeName:  cri-o
  RuntimeVersion:  1.11.6
  RuntimeApiVersion:  v1alpha1

Listing images, pods, and containers

The crictl command provides options for investigating the components in your CRI-O environment. Here are examples of some of the uses of crictl for listing information about images, pods, and containers.

To see the images that have been pulled to the local CRI-O node, run the crictl images command:

  $ sudo crictl images
  IMAGE                                           TAG       IMAGE ID        SIZE
  docker.io/openshift/oauth-proxy                 v1.1.0    90c45954eb03e   242MB
  docker.io/openshift/origin-haproxy-router       v3.11     13f40ad4d2e21   410MB
  docker.io/openshift/origin-node                 v3.11     93d2aeddcd6db   1.17GB
  docker.io/openshift/origin-pod                  v3.11     89ceff8fb1907   263MB
  docker.io/openshift/prometheus-alertmanager     v0.15.2   68bbd00063784   242MB
  docker.io/openshift/prometheus-node-exporter    v0.16.0   f9f775bf6d0ef   225MB
  quay.io/coreos/cluster-monitoring-operator      v0.1.1    4488a207a5bca   531MB
  quay.io/coreos/configmap-reload                 v0.0.1    3129a2ca29d75   4.79MB
  quay.io/coreos/kube-rbac-proxy                  v0.3.1    992ac1a5e7c79   40.4MB
  quay.io/coreos/kube-state-metrics               v1.3.1    a9c8f313b7aad   22.2MB

To see the pods that are currently active in the CRI-O environment, run crictl pods:

  $ sudo crictl pods
  POD ID          CREATED       STATE   NAME                     NAMESPACE              ATTEMPT
  09997515d7729   5 hours ago   Ready   kube-state-metrics-...   openshift-monitoring   0
  958b0789e0552   5 hours ago   Ready   node-exporter-rkbzp      openshift-monitoring   0
  4ec0498dacec8   5 hours ago   Ready   alertmanager-main-0      openshift-monitoring   0
  2873b697df1d2   5 hours ago   Ready   cluster-monitoring-...   openshift-monitoring   0
  b9e221481fb7e   5 hours ago   Ready   router-1-968t4           default                0
  f02ce4a4b4186   5 hours ago   Ready   sdn-c45cm                openshift-sdn          0
  bdf5b1dcc0a08   5 hours ago   Ready   ovs-kdvzs                openshift-sdn          0
  49dbc57455c8f   5 hours ago   Ready   sync-hgfvb               openshift-node         0

To see containers that are currently running, run the crictl ps command:

  $ sudo crictl ps
  CONTAINER ID    IMAGE                                   CREATED       STATE     NAME                   ATTEMPT
  376eb13e3cb37   quay.io/coreos/kube-state-metrics...    4 hours ago   Running   kube-state-metrics     0
  72d61c3d393b5   992ac1a5e7c79d627321dc7877f741a00...    4 hours ago   Running   kube-rbac-proxy-self   0
  5fa8c93484055   992ac1a5e7c79d627321dc7877f741a00...    4 hours ago   Running   kube-rbac-proxy-main   0
  a2d35508fc0ee   quay.io/coreos/kube-rbac-proxy...       4 hours ago   Running   kube-rbac-proxy        0
  9adda43f3595f   docker.io/openshift/prometheus-no...    4 hours ago   Running   node-exporter          0
  7f4ce5b25cfdb   docker.io/openshift/oauth-proxy...      4 hours ago   Running   alertmanager-proxy     0
  85418badbf6ae   quay.io/coreos/configmap-reload...      4 hours ago   Running   config-reloader        0
  756f20138381c   docker.io/openshift/prometheus-al...    4 hours ago   Running   alertmanager           0
  5e6d8ff4852ba   quay.io/coreos/cluster-monitoring...    4 hours ago   Running   cluster-monitoring-    0
  1c96cfcfa10a7   docker.io/openshift/origin-haprox...    5 hours ago   Running   route                  0
  8f90bb4cded60   docker.io/openshift/origin-node...      5 hours ago   Running   sdn                    0
  59e5fb8514262   docker.io/openshift/origin-node...      5 hours ago   Running   openvswitch            0
  73323a2c26abe   docker.io/openshift/origin-node...      5 hours ago   Running   sync                   0

To see both running containers as well as containers that are stopped or exited, run crictl ps -a:

  $ sudo crictl ps -a

If your CRI-O service is stopped or malfunctioning, you can still list the containers that were run in CRI-O by using the runc command. This example searches for a container first while CRI-O is running and then after it has been stopped, showing that you can investigate that container with runc even when CRI-O is stopped:

  $ crictl ps | grep d36a99a9a40ec
  d36a99a9a40ec   062cd20609d3895658e54e5f367b9d70f42db4f86ca14bae7309512c7e0777fd
                  11 hours ago   CONTAINER_RUNNING   sync   2
  $ sudo systemctl stop crio
  $ sudo crictl ps | grep d36a99a9a40ec
  2018/10/25 11:22:16 grpc: addrConn.resetTransport failed to create client transport:
  connection error: desc = "transport: dial unix /var/run/crio/crio.sock: connect:
  no such file or directory"; Reconnecting to {/var/run/crio/crio.sock <nil>}
  FATA[0000] listing containers failed: rpc error: code = Unavailable desc = grpc:
  the connection is unavailable
  $ sudo runc list | grep d36a99a9a40ec
  d36a99a9a40ecc4c830f10ed2d5bb3ce1c6deadcb1a4879ff342e315051a71ed   19477   running
  /run/containers/storage/overlay-containers/d36a99a9a40ecc4c830f10ed2d5bb3ce1c6deadcb1a4879ff342e315051a71ed/userdata
  2018-10-25T04:44:29.47950187Z   root
  $ ls /run/containers/storage/overlay-containers/d36*/userdata/
  attach  config.json  ctl  pidfile  run
  $ less /run/containers/storage/overlay-containers/d36*/userdata/config.json
  {
    "ociVersion": "1.0.0",
    "process": {
      "user": {
        "uid": 0,
        "gid": 0
      },
      "args": [
        "/bin/bash",
        "-c",
        "#!/bin/bash\nset -euo pipefail\n\n# set by the node
  image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p);
  exit 0' TERM\n\n# track the current state of the ...
  $ sudo systemctl start crio

As you can see, even with the CRI-O service off, runc shows the existence of the container and its location in the file system, in case you want to look into it further.

Investigating images, pods, and containers

To find out details about what is happening inside of images, pods or containers for your CRI-O environment, there are several crictl options you can use.

With a container ID in hand (from the output of crictl ps), you can exec a command inside that container. For example, to see the name and release of the operating system inside of a container, run:

  $ crictl exec 756f20138381c cat /etc/redhat-release
  CentOS Linux release 7.5.1804 (Core)

To see a list of processes running inside of a container, run:

  $ crictl exec -t e47b3a837aa30 ps -ef
  UID        PID     PPID   C   STIME   TTY     TIME       CMD
  1000130+   1       0      0   Oct17   ?       00:38:14   /usr/bin/origin-web-console --au
  1000130+   15894   0      0   15:38   pts/0   00:00:00   ps -ef
  1000130+   17518   1      0   Oct23   ?       00:00:00   [curl] <defunct>

As an alternative, you can “exec” into a container using the runc command:

  $ sudo runc exec -t e47b3a837aa3023c748c4c31a090266f014afba641a8ab9cfca31b065b4f2ddd ps -ef
  UID        PID     PPID   C   STIME   TTY     TIME       CMD
  1000130+   1       0      0   Oct17   ?       00:38:16   /usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webc
  1000130+   16541   0      0   15:48   pts/0   00:00:00   ps -ef
  1000130+   17518   1      0   Oct23   ?       00:00:00   [curl] <defunct>

If there is no ps command inside the container, runc has the ps option, which has the same effect of showing the processes running in the container:

  $ sudo runc ps e47b3a837aa3023c748c4c31a090266f014afba641a8ab9cfca31b065b4f2ddd

Note that runc requires the full container ID, while crictl only needs a few unique characters from the beginning.
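
If you need the full ID for runc, one way to get it is from crictl inspect, which accepts the short ID and prints the full ID in its JSON output. The container ID below is illustrative, and the full 64-character ID is elided:

  $ sudo crictl inspect 756f20138381c | grep '"id"'
    "id": "756f20138381c...",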

With a pod sandbox ID in hand (output from crictl pods), run crictl inspectp to display information about that pod sandbox:

  $ sudo crictl pods | grep 5a60ac777aaa0
  5a60ac777aaa0   8 days ago   SANDBOX_READY   registry-console-1-vktl6   default   0
  $ sudo crictl inspectp 5a60ac777aaa0
  {
    "status": {
      "id": "5a60ac777aaa055f14b998a9f2ced3e146b3cddbe270154abb75decd583bf879",
      "metadata": {
        "attempt": 0,
        "name": "registry-console-1-vktl6",
        "namespace": "default",
        "uid": "6af860cc-d20b-11e8-b094-525400535ba1"
      },
      "state": "SANDBOX_READY",
      "createdAt": "2018-10-17T08:53:22.828511516-04:00",
      "network": {
        "ip": "10.128.0.6"

To see status information about an image that is available to CRI-O on the local system, run crictl inspecti:

  $ sudo crictl inspecti ff5dd2137a4ff
  {
    "status": {
      "id": "ff5dd2137a4ffd5ccb9837d5a0aa0a5d10729f9c186df02e54e58748a32d08b0",
      "repoTags": [
        "quay.io/coreos/etcd:v3.2.22"
      ],
      "repoDigests": [
        "quay.io/coreos/etcd@sha256:43fbc8a457aa0cb887da63d74a48659e13947cb74b96a53ba8f47abb6172a948"
      ],
      "size": "37547599",
      "username": ""
    }
  }

Additional resources