Forwarding logs to third-party systems

By default, cluster logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store, because the internal store does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Log Forwarding API.

To send logs to other log aggregators, you use the OKD Log Forwarding API. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. You can send different types of logs to various systems, so different individuals can access each type. You can also enable TLS support to send logs securely, as required by your organization.

To send audit logs to the internal log store, use the Log Forwarding API as described in Forward audit logs to the log store.

When you forward logs externally, the Cluster Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.

Alternatively, you can create a config map to use the Fluentd forward protocol or the syslog protocol to send logs to external systems. However, these methods for forwarding logs are deprecated in OKD and will be removed in a future release.

You cannot use the config map methods and the Log Forwarding API in the same cluster.

About forwarding logs to third-party systems

Forwarding cluster logs to external third-party systems requires a combination of outputs and pipelines specified in a ClusterLogForwarder custom resource (CR) to send logs to specific endpoints inside and outside of your OKD cluster. You can also use inputs to forward the application logs associated with a specific project to an endpoint.

  • An output is the destination for log data that you define, or where you want the logs sent. An output can be one of the following types:

    • elasticsearch. An external Elasticsearch 6 (all releases) instance. The elasticsearch output can use a TLS connection.

    • fluentdForward. An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocol. The fluentdForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret. Shared-key authentication can be used with or without TLS.

    • syslog. An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection.

    • kafka. A Kafka broker. The kafka output can use a TCP or TLS connection.

    • default. The internal OKD Elasticsearch instance. You are not required to configure the default output. If you do configure a default output, you receive an error message because the default output is reserved for the Cluster Logging Operator.

    If the output URL scheme requires TLS (HTTPS, TLS, or UDPS), then TLS server-side authentication is enabled. To also enable client authentication, the output must name a secret in the openshift-logging project. The secret must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent. For one way to create such a secret, see the sketch after this list.

  • A pipeline defines simple routing from one log type to one or more outputs, or which logs you want to send. A log type is one of the following:

    • application. Container logs generated by user applications running in the cluster, except infrastructure container applications.

    • infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects, and journal logs sourced from the node file system.

    • audit. Logs generated by auditd, the node audit system, and the audit logs from the Kubernetes API server and the OpenShift API server.

    You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers, or label the logs by type. Labels that are added to objects are also forwarded with the log message.

  • An input forwards the application logs associated with a specific project to a pipeline.

In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs using an outputRef parameter.
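
If an output requires TLS client authentication, the secret it names must exist in the openshift-logging project before you create the ClusterLogForwarder CR. The following command is a minimal sketch of one way to create such a secret, assuming the client certificate, key, and CA bundle files already exist on disk (client.crt, client.key, and ca.crt are illustrative names):

  $ oc create secret generic es-secret \
      --from-file=tls.crt=client.crt \
      --from-file=tls.key=client.key \
      --from-file=ca-bundle.crt=ca.crt \
      -n openshift-logging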

Note the following:

  • If a ClusterLogForwarder object exists, logs are not forwarded to the default Elasticsearch instance, unless there is a pipeline with the default output.

  • By default, cluster logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store, because the internal store does not provide secure storage. If this default configuration meets your needs, do not configure the Log Forwarding API.

  • If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped.

  • You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.

  • The internal OKD Elasticsearch instance does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. OKD cluster logging does not comply with those regulations.

  • You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration.

The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-project project to the internal Elasticsearch instance.

Sample log forwarding outputs and pipelines

  apiVersion: "logging.openshift.io/v1"
  kind: ClusterLogForwarder
  metadata:
    name: instance (1)
    namespace: openshift-logging (2)
  spec:
    outputs:
    - name: elasticsearch-secure (3)
      type: "elasticsearch"
      url: https://elasticsearch.secure.com:9200
      secret:
        name: elasticsearch
    - name: elasticsearch-insecure (4)
      type: "elasticsearch"
      url: http://elasticsearch.insecure.com:9200
    - name: kafka-app (5)
      type: "kafka"
      url: tls://kafka.secure.com:9093/app-topic
    inputs: (6)
    - name: my-app-logs
      application:
        namespaces:
        - my-project
    pipelines:
    - name: audit-logs (7)
      inputRefs:
      - audit
      outputRefs:
      - elasticsearch-secure
      - default
      labels:
        secure: "true" (8)
        datacenter: "east"
    - name: infrastructure-logs (9)
      inputRefs:
      - infrastructure
      outputRefs:
      - elasticsearch-insecure
      labels:
        datacenter: "west"
    - name: my-app (10)
      inputRefs:
      - my-app-logs
      outputRefs:
      - default
    - inputRefs: (11)
      - application
      outputRefs:
      - kafka-app
      labels:
        datacenter: "south"
(1) The name of the ClusterLogForwarder CR must be instance.
(2) The namespace for the ClusterLogForwarder CR must be openshift-logging.
(3) Configuration for a secure Elasticsearch output using a secret with a secure URL:
  • A name to describe the output.
  • The type of output: elasticsearch.
  • The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
  • The secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project.
(4) Configuration for an insecure Elasticsearch output:
  • A name to describe the output.
  • The type of output: elasticsearch.
  • The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
(5) Configuration for a Kafka output that uses client-authenticated TLS communication over a secure URL:
  • A name to describe the output.
  • The type of output: kafka.
  • The URL and port of the Kafka broker as a valid absolute URL, including the prefix.
(6) Configuration for an input to filter application logs from the my-project project.
(7) Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance:
  • Optional: A name to describe the pipeline.
  • The inputRefs is the log type, in this example audit.
  • The outputRefs is the name of the output to use, in this example elasticsearch-secure to forward to the secure Elasticsearch instance and default to forward to the internal Elasticsearch instance.
  • Optional: Labels to add to the logs.
(8) Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
(9) Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
(10) Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance:
  • Optional: A name to describe the pipeline.
  • The inputRefs is a specific input: my-app-logs.
  • The outputRefs is default.
  • Optional: String. One or more labels to add to the logs.
(11) Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
  • The inputRefs is the log type, in this example application.
  • The outputRefs is the name of the output to use.
  • Optional: String. One or more labels to add to the logs.

Fluentd log handling when the external log aggregator is unavailable

If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OKD rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.
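
While the aggregator is unreachable, one way to check whether Fluentd is reporting connection or buffer errors is to inspect the collector pod logs. A minimal sketch, assuming the default openshift-logging namespace:

  $ oc logs --selector logging-infra=fluentd --tail=50 -n openshift-logging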

Forwarding logs to an external Elasticsearch instance

You can optionally forward logs to an external Elasticsearch instance in addition to, or instead of, the internal OKD Elasticsearch instance. You are responsible for configuring the external log aggregator to receive log data from OKD.

To configure log forwarding to an external Elasticsearch instance, create a ClusterLogForwarder custom resource (CR) with an output to that instance and a pipeline that uses the output. The external Elasticsearch output can use an HTTP (insecure) or HTTPS (secure HTTP) connection.

To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance. You do not need to create a default output. If you do configure a default output, you receive an error message because the default output is reserved for the Cluster Logging Operator.

If you want to forward logs to only the internal OKD Elasticsearch instance, you do not need to create a ClusterLogForwarder CR.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure

  1. Create a ClusterLogForwarder CR YAML file similar to the following:

    apiVersion: "logging.openshift.io/v1"
    kind: ClusterLogForwarder
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      outputs:
      - name: elasticsearch-insecure (3)
        type: "elasticsearch" (4)
        url: http://elasticsearch.insecure.com:9200 (5)
      - name: elasticsearch-secure
        type: "elasticsearch"
        url: https://elasticsearch.secure.com:9200
        secret:
          name: es-secret (6)
      pipelines:
      - name: application-logs (7)
        inputRefs: (8)
        - application
        - audit
        outputRefs:
        - elasticsearch-secure (9)
        - default (10)
        labels:
          myLabel: "myValue" (11)
      - name: infrastructure-audit-logs (12)
        inputRefs:
        - infrastructure
        outputRefs:
        - elasticsearch-insecure
        labels:
          logs: "audit-infra"
    (1) The name of the ClusterLogForwarder CR must be instance.
    (2) The namespace for the ClusterLogForwarder CR must be openshift-logging.
    (3) Specify a name for the output.
    (4) Specify the elasticsearch type.
    (5) Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    (6) If using an https prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent.
    (7) Optional: Specify a name for the pipeline.
    (8) Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.
    (9) Specify the output to use with that pipeline for forwarding the logs.
    (10) Optional: Specify the default output to send the logs to the internal Elasticsearch instance.
    (11) Optional: String. One or more labels to add to the logs.
    (12) Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
      • Optional: A name to describe the pipeline.
      • The inputRefs is the log type to forward using that pipeline: application, infrastructure, or audit.
      • The outputRefs is the name of the output to use.
      • Optional: String. One or more labels to add to the logs.

  2. Create the CR object:

    $ oc create -f <file-name>.yaml

The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.

  $ oc delete pod --selector logging-infra=fluentd
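
To verify that new Fluentd pods are running after the redeployment, you can list them by the same label selector:

  $ oc get pods --selector logging-infra=fluentd -n openshift-logging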

Forwarding logs using the Fluentd forward protocol

You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that you have configured to accept the protocol. You can do this in addition to, or instead of, using the default Elasticsearch log store. You must also configure the external log aggregator to receive log data from OKD.

To configure log forwarding using the forward protocol, create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection.
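
The procedure below references a secret named fluentd-secret for the TLS connection. As a minimal sketch, assuming the certificate files client.crt, client.key, and ca.crt (illustrative names) exist on disk and that you also want the optional shared-key authentication described earlier, the secret might be created as follows:

  $ oc create secret generic fluentd-secret \
      --from-file=tls.crt=client.crt \
      --from-file=tls.key=client.key \
      --from-file=ca-bundle.crt=ca.crt \
      --from-literal=shared_key=<key> \
      -n openshift-logging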

Alternately, you can use a config map to forward logs using the forward protocols. However, this method is deprecated in OKD and will be removed in a future release.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure

  1. Create a ClusterLogForwarder CR YAML file similar to the following:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      outputs:
      - name: fluentd-server-secure (3)
        type: fluentdForward (4)
        url: 'tls://fluentdserver.security.example.com:24224' (5)
        secret: (6)
          name: fluentd-secret
      - name: fluentd-server-insecure
        type: fluentdForward
        url: 'tcp://fluentdserver.home.example.com:24224'
      pipelines:
      - name: forward-to-fluentd-secure (7)
        inputRefs: (8)
        - application
        - audit
        outputRefs:
        - fluentd-server-secure (9)
        - default (10)
        labels:
          clusterId: "C1234" (11)
      - name: forward-to-fluentd-insecure (12)
        inputRefs:
        - infrastructure
        outputRefs:
        - fluentd-server-insecure
        labels:
          clusterId: "C1234"
    (1) The name of the ClusterLogForwarder CR must be instance.
    (2) The namespace for the ClusterLogForwarder CR must be openshift-logging.
    (3) Specify a name for the output.
    (4) Specify the fluentdForward type.
    (5) Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    (6) If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent.
    (7) Optional: Specify a name for the pipeline.
    (8) Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.
    (9) Specify the output to use with that pipeline for forwarding the logs.
    (10) Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
    (11) Optional: String. One or more labels to add to the logs.
    (12) Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
      • Optional: A name to describe the pipeline.
      • The inputRefs is the log type to forward using that pipeline: application, infrastructure, or audit.
      • The outputRefs is the name of the output to use.
      • Optional: String. One or more labels to add to the logs.

  2. Create the CR object:

    $ oc create -f <file-name>.yaml

The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.

  $ oc delete pod --selector logging-infra=fluentd

Forwarding logs using the syslog protocol

You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OKD.

To configure log forwarding using the syslog protocol, create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.

Alternately, you can use a config map to forward logs using the syslog RFC3164 protocols. However, this method is deprecated in OKD and will be removed in a future release.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure

  1. Create a ClusterLogForwarder CR YAML file similar to the following:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      outputs:
      - name: rsyslog-east (3)
        type: syslog (4)
        syslog: (5)
          facility: local0
          rfc: RFC3164
          payloadKey: message
          severity: informational
        url: 'tls://rsyslogserver.east.example.com:514' (6)
        secret: (7)
          name: syslog-secret
      - name: rsyslog-west
        type: syslog
        syslog:
          appName: myapp
          facility: user
          msgID: mymsg
          procID: myproc
          rfc: RFC5424
          severity: debug
        url: 'udp://rsyslogserver.west.example.com:514'
      pipelines:
      - name: syslog-east (8)
        inputRefs: (9)
        - audit
        - application
        outputRefs: (10)
        - rsyslog-east
        - default (11)
        labels:
          secure: "true" (12)
          syslog: "east"
      - name: syslog-west (13)
        inputRefs:
        - infrastructure
        outputRefs:
        - rsyslog-west
        - default
        labels:
          syslog: "west"
    (1) The name of the ClusterLogForwarder CR must be instance.
    (2) The namespace for the ClusterLogForwarder CR must be openshift-logging.
    (3) Specify a name for the output.
    (4) Specify the syslog type.
    (5) Optional: Specify the syslog parameters, listed below.
    (6) Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure), or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    (7) If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent.
    (8) Optional: Specify a name for the pipeline.
    (9) Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.
    (10) Specify the output to use with that pipeline for forwarding the logs.
    (11) Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
    (12) Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
    (13) Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
      • Optional: A name to describe the pipeline.
      • The inputRefs is the log type to forward using that pipeline: application, infrastructure, or audit.
      • The outputRefs is the name of the output to use.
      • Optional: String. One or more labels to add to the logs.

  2. Create the CR object:

    $ oc create -f <file-name>.yaml

The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.

  $ oc delete pod --selector logging-infra=fluentd

Syslog parameters

You can configure the following parameters for the syslog outputs. For more information, see the syslog RFC3164 or RFC5424 specifications.

  • facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:

    • 0 or kern for kernel messages

    • 1 or user for user-level messages, the default.

    • 2 or mail for the mail system

    • 3 or daemon for system daemons

    • 4 or auth for security/authentication messages

    • 5 or syslog for messages generated internally by syslogd

    • 6 or lpr for line printer subsystem

    • 7 or news for the network news subsystem

    • 8 or uucp for the UUCP subsystem

    • 9 or cron for the clock daemon

    • 10 or authpriv for security authentication messages

    • 11 or ftp for the FTP daemon

    • 12 or ntp for the NTP subsystem

    • 13 or security for the syslog audit log

    • 14 or console for the syslog alert log

    • 15 or solaris-cron for the scheduling daemon

    • 16-23 or local0 through local7 for locally used facilities

  • Optional. payloadKey: The record field to use as payload for the syslog message.

    Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog.

  • rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.

  • severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:

    • 0 or Emergency for messages indicating the system is unusable

    • 1 or Alert for messages indicating action must be taken immediately

    • 2 or Critical for messages indicating critical conditions

    • 3 or Error for messages indicating error conditions

    • 4 or Warning for messages indicating warning conditions

    • 5 or Notice for messages indicating normal but significant conditions

    • 6 or Informational for messages indicating informational messages

    • 7 or Debug for messages indicating debug-level messages, the default

  • tag: Specifies a record field to use as the tag on the syslog message.

  • trimPrefix: Remove the specified prefix from the tag.
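
To illustrate how these parameters fit together, the following output stanza is a sketch only; the server URL and the tag and trimPrefix values are assumptions chosen for illustration, not values taken from the examples above:

  spec:
    outputs:
    - name: rsyslog-example
      type: syslog
      syslog:
        facility: local0          # keyword form; the decimal value 16 is equivalent
        severity: Informational
        rfc: RFC3164
        tag: tag                  # record field whose value becomes the syslog tag (assumption)
        trimPrefix: kubernetes    # prefix removed from that tag value (assumption)
      url: 'tcp://rsyslogserver.example.com:514'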

Additional RFC5424 syslog parameters

The following parameters apply to RFC5424:

  • appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424.

  • msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424.

  • procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424.

Forwarding logs to a Kafka broker

You can forward logs to an external Kafka broker in addition to, or instead of, the default Elasticsearch log store.

To configure log forwarding to an external Kafka instance, create a ClusterLogForwarder custom resource (CR) with an output to that instance and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection.

Procedure

  1. Create a ClusterLogForwarder CR YAML file similar to the following:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      outputs:
      - name: app-logs (3)
        type: kafka (4)
        url: tls://kafka.example.devlab.com:9093/app-topic (5)
        secret:
          name: kafka-secret (6)
      - name: infra-logs
        type: kafka
        url: tcp://kafka.devlab2.example.com:9093/infra-topic (7)
      - name: audit-logs
        type: kafka
        url: tls://kafka.qelab.example.com:9093/audit-topic
        secret:
          name: kafka-secret-qe
      pipelines:
      - name: app-topic (8)
        inputRefs: (9)
        - application
        outputRefs: (10)
        - app-logs
        labels:
          logType: "application" (11)
      - name: infra-topic (12)
        inputRefs:
        - infrastructure
        outputRefs:
        - infra-logs
        labels:
          logType: "infra"
      - name: audit-topic
        inputRefs:
        - audit
        outputRefs:
        - audit-logs
        - default (13)
        labels:
          logType: "audit"
    (1) The name of the ClusterLogForwarder CR must be instance.
    (2) The namespace for the ClusterLogForwarder CR must be openshift-logging.
    (3) Specify a name for the output.
    (4) Specify the kafka type.
    (5) Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    (6) If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent.
    (7) Optional: To send an insecure output, use a tcp prefix in the URL and omit the secret key and its name from this output.
    (8) Optional: Specify a name for the pipeline.
    (9) Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.
    (10) Specify the output to use with that pipeline for forwarding the logs.
    (11) Optional: String. One or more labels to add to the logs.
    (12) Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
      • Optional: A name to describe the pipeline.
      • The inputRefs is the log type to forward using that pipeline: application, infrastructure, or audit.
      • The outputRefs is the name of the output to use.
      • Optional: String. One or more labels to add to the logs.
    (13) Optional: Specify default to forward logs to the internal Elasticsearch instance.
  2. Optional: To forward a single output to multiple Kafka brokers, specify an array of brokers as shown in this example:

    ...
    spec:
      outputs:
      - name: app-logs
        type: kafka
        secret:
          name: kafka-secret-dev
        kafka: (1)
          brokers: (2)
          - tls://kafka-broker1.example.com:9093/
          - tls://kafka-broker2.example.com:9093/
          topic: app-topic (3)
    ...
    (1) Specify a kafka key that has brokers and topic keys.
    (2) Use the brokers key to specify an array of one or more brokers.
    (3) Use the topic key to specify the target topic that receives the logs.
  3. Create the CR object:

    $ oc create -f <file-name>.yaml

The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.

  $ oc delete pod --selector logging-infra=fluentd

Forwarding application logs from specific projects

You can use the Cluster Log Forwarder to send a copy of the application logs from specific projects to an external log aggregator. You can do this in addition to, or instead of, using the default Elasticsearch log store. You must also configure the external log aggregator to receive log data from OKD.

To configure forwarding application logs from a project, create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure

  1. Create a ClusterLogForwarder CR YAML file similar to the following:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      outputs:
      - name: fluentd-server-secure (3)
        type: fluentdForward (4)
        url: 'tls://fluentdserver.security.example.com:24224' (5)
        secret: (6)
          name: fluentd-secret
      - name: fluentd-server-insecure
        type: fluentdForward
        url: 'tcp://fluentdserver.home.example.com:24224'
      inputs: (7)
      - name: my-app-logs
        application:
          namespaces:
          - my-project
      pipelines:
      - name: forward-to-fluentd-insecure (8)
        inputRefs: (9)
        - my-app-logs
        outputRefs: (10)
        - fluentd-server-insecure
        labels: (11)
          project: "my-project"
      - name: forward-to-fluentd-secure (12)
        inputRefs:
        - application
        - audit
        - infrastructure
        outputRefs:
        - fluentd-server-secure
        - default
        labels:
          clusterId: "C1234"
    (1) The name of the ClusterLogForwarder CR must be instance.
    (2) The namespace for the ClusterLogForwarder CR must be openshift-logging.
    (3) Specify a name for the output.
    (4) Specify the output type: elasticsearch, fluentdForward, syslog, or kafka.
    (5) Specify the URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    (6) If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent.
    (7) Configuration for an input to filter application logs from the specified projects.
    (8) Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance.
    (9) The my-app-logs input.
    (10) The name of the output to use.
    (11) Optional: String. One or more labels to add to the logs.
    (12) Configuration for a pipeline to send logs to other log aggregators:
      • Optional: Specify a name for the pipeline.
      • Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.
      • Specify the output to use with that pipeline for forwarding the logs.
      • Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
      • Optional: String. One or more labels to add to the logs.

  2. Create the CR object:

    $ oc create -f <file-name>.yaml

Forwarding logs using the legacy Fluentd method

You can use the Fluentd forward protocol to send logs to destinations outside of your OKD cluster by creating a configuration file and config map. You are responsible for configuring the external log aggregator to receive log data from OKD.

This method for forwarding logs is deprecated in OKD and will be removed in a future release.

The forward protocols are provided with the Fluentd image as of v1.4.0.

To send logs using the Fluentd forward protocol, create a configuration file called secure-forward.conf that points to an external log aggregator. Then, use that file to create a config map called secure-forward in the openshift-logging project, which OKD uses when forwarding the logs.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Sample Fluentd configuration file

  <store>
    @type forward
    <security>
      self_hostname ${hostname}
      shared_key "fluent-receiver"
    </security>
    transport tls
    tls_verify_hostname false
    tls_cert_path '/etc/ocp-forward/ca-bundle.crt'
    <buffer>
      @type file
      path '/var/lib/fluentd/secureforwardlegacy'
      queued_chunks_limit_size "1024"
      chunk_limit_size "1m"
      flush_interval "5s"
      flush_at_shutdown "false"
      flush_thread_count "2"
      retry_max_interval "300"
      retry_forever true
      overflow_action "#{ENV['BUFFER_QUEUE_FULL_ACTION'] || 'throw_exception'}"
    </buffer>
    <server>
      host fluent-receiver.example.com
      port 24224
    </server>
  </store>

Procedure

To configure OKD to forward logs using the legacy Fluentd method:

  1. Create a configuration file named secure-forward.conf and specify parameters similar to the following within the <store> stanza:

    <store>
      @type forward
      <security>
        self_hostname ${hostname}
        shared_key <key> (1)
      </security>
      transport tls (2)
      tls_verify_hostname <value> (3)
      tls_cert_path <path_to_file> (4)
      <buffer> (5)
        @type file
        path '/var/lib/fluentd/secureforwardlegacy'
        queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }"
        chunk_limit_size "#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }"
        flush_interval "#{ENV['FORWARD_FLUSH_INTERVAL'] || '5s'}"
        flush_at_shutdown "#{ENV['FLUSH_AT_SHUTDOWN'] || 'false'}"
        flush_thread_count "#{ENV['FLUSH_THREAD_COUNT'] || 2}"
        retry_max_interval "#{ENV['FORWARD_RETRY_WAIT'] || '300'}"
        retry_forever true
      </buffer>
      <server>
        name (6)
        host (7)
        hostlabel (8)
        port (9)
      </server>
      <server> (10)
        name
        host
      </server>
    </store>
    (1) Enter the shared key between nodes.
    (2) Specify tls to enable TLS validation.
    (3) Set to true to verify the server certificate hostname, or false to ignore the server certificate hostname.
    (4) Specify the path to the private CA certificate file as /etc/ocp-forward/ca_cert.pem.
    (5) Specify the Fluentd buffer parameters as needed.
    (6) Optional: Enter a name for this server.
    (7) Specify the hostname or IP of the server.
    (8) Specify the host label of the server.
    (9) Specify the port of the server.
    (10) Optional: Add additional servers. If you specify two or more servers, forward uses these server nodes in a round-robin order.

    To use Mutual TLS (mTLS) authentication, see the Fluentd documentation for information about client certificate, key parameters, and other settings.

  2. Create a config map named secure-forward in the openshift-logging project from the configuration file:

    $ oc create configmap secure-forward --from-file=secure-forward.conf -n openshift-logging
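
    To confirm that the config map contains the expected configuration, you can inspect it:

      $ oc get configmap secure-forward -n openshift-logging -o yaml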

The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.

  $ oc delete pod --selector logging-infra=fluentd

Forwarding logs using the legacy syslog method

You can use the syslog RFC3164 protocol to send logs to destinations outside of your OKD cluster by creating a configuration file and config map. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OKD.

This method for forwarding logs is deprecated in OKD and will be removed in a future release.

There are two versions of the syslog protocol:

  • out_syslog: The non-buffered implementation, which communicates through UDP, does not buffer data and writes out results immediately.

  • out_syslog_buffered: The buffered implementation, which communicates through TCP and buffers data into chunks.

To send logs using the syslog protocol, create a configuration file called syslog.conf, with the information needed to forward the logs. Then, use that file to create a config map called syslog in the openshift-logging project, which OKD uses when forwarding the logs.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Sample syslog configuration file

  <store>
    @type syslog_buffered
    remote_syslog rsyslogserver.example.com
    port 514
    hostname ${hostname}
    remove_tag_prefix tag
    facility local0
    severity info
    use_record true
    payload_key message
    rfc 3164
  </store>

You can configure the following syslog parameters. For more information, see the syslog RFC3164.

  • facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:

    • 0 or kern for kernel messages

    • 1 or user for user-level messages, the default.

    • 2 or mail for the mail system

    • 3 or daemon for the system daemons

    • 4 or auth for the security/authentication messages

    • 5 or syslog for messages generated internally by syslogd

    • 6 or lpr for the line printer subsystem

    • 7 or news for the network news subsystem

    • 8 or uucp for the UUCP subsystem

    • 9 or cron for the clock daemon

    • 10 or authpriv for security authentication messages

    • 11 or ftp for the FTP daemon

    • 12 or ntp for the NTP subsystem

    • 13 or security for the syslog audit logs

    • 14 or console for the syslog alert logs

    • 15 or solaris-cron for the scheduling daemon

    • 16-23 or local0 through local7 for locally used facilities

  • payloadKey: The record field to use as payload for the syslog message.

  • rfc: The RFC to be used for sending log using syslog.

  • severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:

    • 0 or Emergency for messages indicating the system is unusable

    • 1 or Alert for messages indicating action must be taken immediately

    • 2 or Critical for messages indicating critical conditions

    • 3 or Error for messages indicating error conditions

    • 4 or Warning for messages indicating warning conditions

    • 5 or Notice for messages indicating normal but significant conditions

    • 6 or Informational for messages indicating informational messages

    • 7 or Debug for messages indicating debug-level messages, the default

  • tag: The record field to use as tag on the syslog message.

  • trimPrefix: The prefix to remove from the tag.

Procedure

To configure OKD to forward logs using the legacy configuration methods:

  1. Create a configuration file named syslog.conf and specify parameters similar to the following within the <store> stanza:

    <store>
      @type <type> (1)
      remote_syslog <syslog-server> (2)
      port 514 (3)
      hostname ${hostname}
      remove_tag_prefix <prefix> (4)
      facility <value>
      severity <value>
      use_record <value>
      payload_key message
      rfc 3164 (5)
    </store>
    (1) Specify the protocol to use: syslog or syslog_buffered.
    (2) Specify the FQDN or IP address of the syslog server.
    (3) Specify the port of the syslog server.
    (4) Optional: Specify the appropriate syslog parameters, for example:
      • Parameter to remove the specified tag field from the syslog prefix.
      • Parameter to set the specified field as the syslog key.
      • Parameter to specify the syslog log facility or source.
      • Parameter to specify the syslog log severity.
      • Parameter to use the severity and facility from the record if available. If true, the container_name, namespace_name, and pod_name are included in the output content.
      • Parameter to specify the key to set the payload of the syslog message. Defaults to message.
    (5) With the legacy syslog method, you must specify 3164 for the rfc value.
  2. Create a config map named syslog in the openshift-logging project from the configuration file:

    $ oc create configmap syslog --from-file=syslog.conf -n openshift-logging

The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.

  $ oc delete pod --selector logging-infra=fluentd