Understanding logging architecture

The logging subsystem consists of these logical components:

  • Collector - Reads container log data from each node and forwards log data to configured outputs.

  • Store - Stores log data for analysis; the default output for the forwarder.

  • Visualization - Graphical interface for searching, querying, and viewing stored logs.

These components are managed by Operators and Custom Resource (CR) YAML files.
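
For illustration, a minimal ClusterLogging custom resource can declare all three components. The following is a sketch only; the component types and sizing values shown (Fluentd collector, a three-node Elasticsearch store, Kibana) are examples, and the exact fields depend on your environment and logging version:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance                  # the ClusterLogging CR is conventionally named "instance"
      namespace: openshift-logging
    spec:
      managementState: Managed
      collection:                     # collector component
        logs:
          type: fluentd
          fluentd: {}
      logStore:                       # store component
        type: elasticsearch
        elasticsearch:
          nodeCount: 3                # example value
          redundancyPolicy: SingleRedundancy
      visualization:                  # visualization component
        type: kibana
        kibana:
          replicas: 1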

The logging subsystem for Red Hat OpenShift collects container logs and node logs. These are categorized into the following types:

  • application - Container logs generated by non-infrastructure containers.

  • infrastructure - Container logs from the kube-* and openshift-* namespaces, and node logs from journald.

  • audit - Logs generated by auditd, kube-apiserver, openshift-apiserver, and OVN, if enabled.

The logging collector is a daemon set that deploys pods to each OKD node. System and infrastructure logs are journald log messages generated by the operating system, the container runtime, and OKD.

Container logs are generated by containers running in pods on the cluster. Each container generates a separate log stream. The collector reads the logs from these sources and forwards them internally or externally, as configured in the ClusterLogForwarder custom resource.
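
As an illustration, a ClusterLogForwarder CR can route the built-in log types to the internal log store. This is a minimal sketch; the pipeline name is arbitrary:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      pipelines:
        - name: forward-to-internal    # arbitrary pipeline name
          inputRefs:                   # built-in log types: application, infrastructure, audit
            - application
            - infrastructure
          outputRefs:
            - default                  # "default" refers to the internal log store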

Support considerations for logging

Logging is provided as an installable component with a release cycle distinct from the core OKD release cycle. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

The supported way of configuring the logging subsystem for Red Hat OpenShift is by using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because the Operators reconcile any differences. The Operators revert everything to the defined state by default and by design.

If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed.
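
For reference, the management state is set in the ClusterLogging CR. The following sketch shows only the relevant field:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      managementState: Unmanaged    # the Operator stops reconciling changes; this state is unsupported and receives no updates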

The following modifications are explicitly not supported:

  • Deploying logging to namespaces not specified in the documentation.

  • Installing custom Elasticsearch, Kibana, Fluentd, or Loki instances on OKD.

  • Changes to the Kibana Custom Resource (CR) or Elasticsearch CR.

  • Changes to secrets or config maps not specified in the documentation.

The logging subsystem for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.

The logging subsystem for Red Hat OpenShift is not:

  • A high-scale log collection system

  • Security Information and Event Management (SIEM) compliant

  • A solution for historical or long-term log retention or storage

  • A guaranteed log sink

  • Secure storage - audit logs are not stored by default

Table 1. Logging 5.7 outputs

  Output                Protocol              Tested with
  Cloudwatch            REST over HTTP(S)
  Elasticsearch v6                            v6.8.1
  Elasticsearch v7                            v7.12.2, 7.17.7
  Elasticsearch v8                            v8.4.3
  Fluent Forward        Fluentd forward v1    Fluentd 1.14.6, Logstash 7.10.1
  Google Cloud Logging
  HTTP                  HTTP 1.1              Fluentd 1.14.6, Vector 0.21
  Kafka                 Kafka 0.11            Kafka 2.4.1, 2.7.0, 3.3.1
  Loki                  REST over HTTP(S)     Loki 2.3.0, 2.7
  Splunk                HEC                   v8.2.9, 9.0.0
  Syslog                RFC3164, RFC5424      Rsyslog 8.37.0-9.el7
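
As an example of using one of these outputs, a ClusterLogForwarder CR can send audit logs to an external Loki endpoint. This is a sketch only; the output name and URL are placeholders, and a production configuration typically also references a secret for TLS or credentials:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputs:
        - name: loki-external                  # placeholder output name
          type: loki
          url: https://loki.example.com:3100   # placeholder endpoint
      pipelines:
        - name: forward-audit
          inputRefs:
            - audit
          outputRefs:
            - loki-external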