Updating cluster logging

After updating the OKD cluster from 4.4 to 4.5, you can then update the OpenShift Elasticsearch Operator and Cluster Logging Operator from 4.4 to 4.5.

Cluster logging 4.5 introduces a new Elasticsearch version, Elasticsearch 6.8.1, and an enhanced security plug-in, Open Distro for Elasticsearch. The new Elasticsearch version introduces a new Elasticsearch data model, where the Elasticsearch data is indexed only by type: infrastructure, application, and audit. Previously, data was indexed by type (infrastructure and application) and project.

Because of the new data model, the update does not migrate existing custom Kibana index patterns and visualizations into the new version. You must re-create your Kibana index patterns and visualizations to match the new indices after updating.

Due to the nature of these changes, you are not required to update your cluster logging to 4.5. However, when you update to OKD 4.6, you must update cluster logging to 4.6 at that time.

Updating cluster logging

After updating the OKD cluster, you can update cluster logging from 4.5 to 4.6 by changing the subscription for the OpenShift Elasticsearch Operator and the Cluster Logging Operator.

When you update:

  • You must update the OpenShift Elasticsearch Operator before updating the Cluster Logging Operator.

  • You must update both the OpenShift Elasticsearch Operator and the Cluster Logging Operator.

    Kibana is unusable when the OpenShift Elasticsearch Operator has been updated but the Cluster Logging Operator has not been updated.

    If you update the Cluster Logging Operator before the OpenShift Elasticsearch Operator, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod. When the Cluster Logging Operator pod redeploys, the Kibana CR is created.
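    A minimal sketch of this workaround, assuming the Cluster Logging Operator pod carries the name=cluster-logging-operator label that its deployment applies by default (verify with oc get pods -n openshift-logging --show-labels):

    $ oc delete pod -n openshift-logging -l name=cluster-logging-operator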

If your cluster logging version is prior to 4.5, you must upgrade cluster logging to 4.5 before updating to 4.6.

Prerequisites

  • Update the OKD cluster from 4.5 to 4.6.

  • Make sure the cluster logging status is healthy:

    • All pods are ready.

    • The Elasticsearch cluster is healthy.

  • Back up your Elasticsearch and Kibana data.
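  As a quick pre-update check, you can confirm both health conditions from the CLI. This sketch reuses the health commands from the verification steps later in this procedure; the pod name placeholder is an example, not a literal value:

  $ oc get pods -n openshift-logging

  $ oc exec -n openshift-logging -c elasticsearch <any_es_pod_in_the_cluster> -- es_cluster_health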

Procedure

  1. Update the OpenShift Elasticsearch Operator:

    1. From the web console, click Operators → Installed Operators.

    2. Select the openshift-operators-redhat project.

    3. Click the OpenShift Elasticsearch Operator.

    4. Click Subscription → Channel.

    5. In the Change Subscription Update Channel window, select 4.6 and click Save.

    6. Wait for a few seconds, then click Operators → Installed Operators.

      The OpenShift Elasticsearch Operator is shown as 4.6. For example:

      OpenShift Elasticsearch Operator
      4.6.0-202007012112.p0 provided by Red Hat, Inc

      Wait for the Status field to report Succeeded.

  2. Update the Cluster Logging Operator:

    1. From the web console, click Operators → Installed Operators.

    2. Select the openshift-logging project.

    3. Click the Cluster Logging Operator.

    4. Click Subscription → Channel.

    5. In the Change Subscription Update Channel window, select 4.6 and click Save.

    6. Wait for a few seconds, then click Operators → Installed Operators.

      The Cluster Logging Operator is shown as 4.6. For example:

      Cluster Logging
      4.6.0-202007012112.p0 provided by Red Hat, Inc

      Wait for the Status field to report Succeeded.
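    If you prefer the CLI to the web console, the channel change in steps 1 and 2 is a patch to each operator's Subscription object. The following is a sketch only; the subscription names elasticsearch-operator and cluster-logging are typical defaults, so verify yours first with oc get subscriptions -n <namespace>:

    $ oc patch subscription elasticsearch-operator -n openshift-operators-redhat --type merge -p '{"spec":{"channel":"4.6"}}'

    $ oc patch subscription cluster-logging -n openshift-logging --type merge -p '{"spec":{"channel":"4.6"}}'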

  3. Check the logging components:

    1. Ensure that all Elasticsearch pods are in the Ready status:

      $ oc get pod -n openshift-logging --selector component=elasticsearch

      Example output

      NAME                                            READY   STATUS    RESTARTS   AGE
      elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
      elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
      elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
    2. Ensure that the Elasticsearch cluster is healthy:

      $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_cluster_health

      Example output

      {
        "cluster_name" : "elasticsearch",
        "status" : "green",
        ...
      }
    3. Ensure that the Elasticsearch cron jobs are created:

      $ oc project openshift-logging

      $ oc get cronjob

      Example output

      NAME                     SCHEDULE            SUSPEND   ACTIVE   LAST SCHEDULE   AGE
      curator                  30 3,9,15,21 * * *  False     0        <none>          20s
      elasticsearch-im-app     */15 * * * *        False     0        <none>          56s
      elasticsearch-im-audit   */15 * * * *        False     0        <none>          56s
      elasticsearch-im-infra   */15 * * * *        False     0        <none>          56s
    4. Verify that the log store is updated to 4.6 and the indices are green:

      $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices

      Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices.

      Sample output with indices in a green status

      Tue Jun 30 14:30:54 UTC 2020
      health status index                         uuid                     pri  rep  docs.count  docs.deleted  store.size  pri.store.size
      green  open   infra-000008                  bnBvUFEXTWi92z3zWAzieQ   3    1    222195      0             289         144
      green  open   infra-000004                  rtDSzoqsSl6saisSK7Au1Q   3    1    226717      0             297         148
      green  open   infra-000012                  RSf_kUwDSR2xEuKRZMPqZQ   3    1    227623      0             295         147
      green  open   .kibana_7                     1SJdCqlZTPWlIAaOUd78yg   1    1    4           0             0           0
      green  open   infra-000010                  iXwL3bnqTuGEABbUDa6OVw   3    1    248368      0             317         158
      green  open   infra-000009                  YN9EsULWSNaxWeeNvOs0RA   3    1    258799      0             337         168
      green  open   infra-000014                  YP0U6R7FQ_GVQVQZ6Yh9Ig   3    1    223788      0             292         146
      green  open   infra-000015                  JRBbAbEmSMqK5X40df9HbQ   3    1    224371      0             291         145
      green  open   .orphaned.2020.06.30          n_xQC2dWQzConkvQqei3YA   3    1    9           0             0           0
      green  open   infra-000007                  llkkAVSzSOmosWTSAJM_hg   3    1    228584      0             296         148
      green  open   infra-000005                  d9BoGQdiQASsS3BBFm2iRA   3    1    227987      0             297         148
      green  open   infra-000003                  1-goREK1QUKlQPAIVkWVaQ   3    1    226719      0             295         147
      green  open   .security                     zeT65uOuRTKZMjg_bbUc1g   1    1    5           0             0           0
      green  open   .kibana-377444158_kubeadmin   wvMhDwJkR-mRZQO84K0gUQ   3    1    1           0             0           0
      green  open   infra-000006                  5H-KBSXGQKiO7hdapDE23g   3    1    226676      0             295         147
      green  open   infra-000001                  eH53BQ-bSxSWR5xYZB6lVg   3    1    341800      0             443         220
      green  open   .kibana-6                     RVp7TemSSemGJcsSUmuf3A   1    1    4           0             0           0
      green  open   infra-000011                  J7XWBauWSTe0jnzX02fU6A   3    1    226100      0             293         146
      green  open   app-000001                    axSAFfONQDmKwatkjPXdtw   3    1    103186      0             126         57
      green  open   infra-000016                  m9c1iRLtStWSF1GopaRyCg   3    1    13685       0             19          9
      green  open   infra-000002                  Hz6WvINtTvKcQzw-ewmbYg   3    1    228994      0             296         148
      green  open   infra-000013                  KR9mMFUpQl-jraYtanyIGw   3    1    228166      0             298         148
      green  open   audit-000001                  eERqLdLmQOiQDFES1LBATQ   3    1    0           0             0           0
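      To list only indices that are not green, you can query the _cat API directly. This sketch assumes the es_util utility is available in the Elasticsearch container, as in current cluster logging images:

      $ oc exec -n openshift-logging -c elasticsearch <any_es_pod_in_the_cluster> -- es_util --query="_cat/indices?health=yellow&v"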
    5. Verify that the log collector is updated to 4.6:

      $ oc get ds fluentd -o json | grep fluentd-init

      Verify that the output includes a fluentd-init container:

      1. "containerName": "fluentd-init"
    6. Verify that the log visualizer is updated to 4.6 by using the Kibana custom resource (CR):

      $ oc get kibana kibana -o json

      Verify that the output includes a Kibana pod with the ready status:

      Sample output with a ready Kibana pod

      [
        {
          "clusterCondition": {
            "kibana-5fdd766ffd-nb2jj": [
              {
                "lastTransitionTime": "2020-06-30T14:11:07Z",
                "reason": "ContainerCreating",
                "status": "True",
                "type": ""
              },
              {
                "lastTransitionTime": "2020-06-30T14:11:07Z",
                "reason": "ContainerCreating",
                "status": "True",
                "type": ""
              }
            ]
          },
          "deployment": "kibana",
          "pods": {
            "failed": [],
            "notReady": [],
            "ready": []
          },
          "replicaSets": [
            "kibana-5fdd766ffd"
          ],
          "replicas": 1
        }
      ]
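      As a cross-check, you can list the Kibana pods directly. This sketch assumes the pods carry the component=kibana label:

      $ oc get pods -n openshift-logging -l component=kibana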
    7. Verify that Curator is updated to 4.6:

      $ oc get cronjob -o name

      Example output

      cronjob.batch/curator
      cronjob.batch/elasticsearch-im-app
      cronjob.batch/elasticsearch-im-audit
      cronjob.batch/elasticsearch-im-infra

      Verify that the output includes the elasticsearch-im-* cron jobs.

Post-update tasks

If you use the Log Forwarding API to forward logs, after the OpenShift Elasticsearch Operator and Cluster Logging Operator are fully updated to 4.6, you must replace your LogForwarding custom resource (CR) with a ClusterLogForwarder CR.

Updating log forwarding custom resources

The OKD Log Forwarding API has been promoted from Technology Preview to Generally Available in OKD 4.6. The GA release contains some improvements and enhancements that require you to change your ClusterLogging custom resource (CR) and to replace your LogForwarding custom resource (CR) with a ClusterLogForwarder CR.

Sample ClusterLogForwarder instance in OKD 4.6

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: instance
    namespace: openshift-logging
  ...
  spec:
    outputs:
    - url: http://remote.elasticsearch.com:9200
      name: elasticsearch
      type: elasticsearch
    - url: tls://fluentdserver.example.com:24224
      name: fluentd
      type: fluentdForward
      secret:
        name: fluentdserver
    pipelines:
    - inputRefs:
      - infrastructure
      - application
      name: mylogs
      outputRefs:
      - elasticsearch
    - inputRefs:
      - audit
      name: auditlogs
      outputRefs:
      - fluentd
      - default
  ...

Sample LogForwarding CR in OKD 4.5

  apiVersion: logging.openshift.io/v1alpha1
  kind: LogForwarding
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    disableDefaultForwarding: true
    outputs:
    - name: elasticsearch
      type: elasticsearch
      endpoint: remote.elasticsearch.com:9200
    - name: fluentd
      type: forward
      endpoint: fluentdserver.example.com:24224
      secret:
        name: fluentdserver
    pipelines:
    - inputSource: logs.infra
      name: infra-logs
      outputRefs:
      - elasticsearch
    - inputSource: logs.app
      name: app-logs
      outputRefs:
      - elasticsearch
    - inputSource: logs.audit
      name: audit-logs
      outputRefs:
      - fluentd

The following procedure shows each parameter you must change.

Procedure

To update the LogForwarding CR in 4.5 to the ClusterLogForwarder CR for 4.6, make the following modifications:

  1. Edit the ClusterLogging custom resource (CR) to remove the logforwardingtechpreview annotation:

    Sample ClusterLogging CR

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      annotations:
        clusterlogging.openshift.io/logforwardingtechpreview: enabled (1)
      name: "instance"
      namespace: "openshift-logging"
    ...

    (1) Remove the logforwardingtechpreview annotation.
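    Instead of editing the CR by hand, you can remove the annotation with oc annotate, which deletes a key when the key name is suffixed with a dash. A minimal sketch, assuming the ClusterLogging CR is named instance as in the sample above:

    $ oc annotate clusterlogging instance -n openshift-logging clusterlogging.openshift.io/logforwardingtechpreview-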
  2. Export the LogForwarding CR to create a YAML file for the ClusterLogForwarder instance:

    $ oc get LogForwarding instance -n openshift-logging -o yaml | tee ClusterLogForwarder.yaml
  3. Edit the YAML file to make the following modifications:

    Sample ClusterLogForwarder instance in OKD 4.6

    apiVersion: logging.openshift.io/v1 (1)
    kind: ClusterLogForwarder (2)
    metadata:
      name: instance
      namespace: openshift-logging
    ...
    spec: (3)
      outputs:
      - url: http://remote.elasticsearch.com:9200 (4)
        name: elasticsearch
        type: elasticsearch
      - url: tls://fluentdserver.example.com:24224
        name: fluentd
        type: fluentdForward (5)
        secret:
          name: fluentdserver
      pipelines:
      - inputRefs: (6)
        - infrastructure
        - application
        name: mylogs
        outputRefs:
        - elasticsearch
      - inputRefs:
        - audit
        name: auditlogs
        outputRefs:
        - fluentd
        - default (7)
    ...
    (1) Change the apiVersion from "logging.openshift.io/v1alpha1" to "logging.openshift.io/v1".
    (2) Change the object kind from kind: LogForwarding to kind: ClusterLogForwarder.
    (3) Remove the disableDefaultForwarding: true parameter.
    (4) Change the output parameter from spec.outputs.endpoint to spec.outputs.url. Add a prefix, such as https:// or tcp://, if the URL does not already have one.
    (5) For Fluentd outputs, change the type from forward to fluentdForward.
    (6) Change the pipelines:
    • Change spec.pipelines.inputSource to spec.pipelines.inputRefs

    • Change logs.infra to infrastructure

    • Change logs.app to application

    • Change logs.audit to audit

    (7) Optional: Add a default pipeline to send logs to the internal Elasticsearch instance. You are not required to configure a default output.

    If you want to forward logs to only the internal OKD Elasticsearch instance, do not configure the Log Forwarding API.

  4. Create the CR object:

    $ oc create -f ClusterLogForwarder.yaml
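    After the object is created, you can confirm that the new CR was accepted; a minimal check, assuming the instance name from the exported file:

    $ oc get clusterlogforwarder instance -n openshift-logging -o yaml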

For information on the new capabilities of the Log Forwarding API, see Forwarding logs to third party systems.