Updating Logging

There are two types of logging subsystem updates: minor release updates (5.y.z) and major release updates (5.y).

Minor release updates

If you installed the logging subsystem Operators using the Automatic update approval option, your Operators receive minor version updates automatically. You do not need to complete any manual update steps.

If you installed the logging subsystem Operators using the Manual update approval option, you must manually approve minor version updates. For more information, see Manually approving a pending Operator update.
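
If you prefer to approve a pending update from the CLI instead of the web console, you can patch the corresponding InstallPlan resource. The following is a minimal sketch, assuming the Operator is installed in the openshift-logging namespace; substitute the InstallPlan name reported by the first command:

     $ oc -n openshift-logging get installplan

     $ oc -n openshift-logging patch installplan <install_plan_name> --type merge --patch '{"spec":{"approved":true}}'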

Major release updates

For major release updates, you must complete some manual steps.

For major release version compatibility and support information, see OpenShift Operator Life Cycles.

Updating the Red Hat OpenShift Logging Operator to watch all namespaces

In logging 5.7 and older versions, the Red Hat OpenShift Logging Operator only watches the openshift-logging namespace. If you want the Red Hat OpenShift Logging Operator to watch all namespaces on your cluster, you must redeploy the Operator. You can complete the following procedure to redeploy the Operator without deleting your logging components.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You have administrator permissions.

Procedure

  1. Delete the subscription by running the following command. If you do not know the resource names used in this procedure, see the lookup sketch that follows these steps.

     $ oc -n openshift-logging delete subscription <subscription>
  2. Delete the Operator group by running the following command:

     $ oc -n openshift-logging delete operatorgroup <operator_group_name>
  3. Delete the cluster service version (CSV) by running the following command:

     $ oc delete clusterserviceversion cluster-logging.<version>
  4. Redeploy the Red Hat OpenShift Logging Operator by following the “Installing Logging” documentation.
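
If you do not know the names of the Subscription, OperatorGroup, or ClusterServiceVersion resources used in the preceding steps, you can list them first. A minimal lookup sketch, assuming the default openshift-logging namespace:

     $ oc -n openshift-logging get subscription

     $ oc -n openshift-logging get operatorgroup

     $ oc -n openshift-logging get clusterserviceversion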

Verification

  • Check that the targetNamespaces field in the OperatorGroup resource is not present or is set to an empty string.

    To do this, run the following command and inspect the output:

     $ oc get operatorgroup <operator_group_name> -o yaml

    Example output

     apiVersion: operators.coreos.com/v1
     kind: OperatorGroup
     metadata:
       name: openshift-logging-f52cn
       namespace: openshift-logging
     spec:
       upgradeStrategy: Default
     status:
       namespaces:
       - ""
     # ...
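
Alternatively, you can query just the field in question with a JSONPath expression; a minimal sketch, assuming the Operator group is in the openshift-logging namespace. Empty output indicates that spec.targetNamespaces is not set:

     $ oc -n openshift-logging get operatorgroup <operator_group_name> -o jsonpath='{.spec.targetNamespaces}'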

Updating the Red Hat OpenShift Logging Operator

To update the Red Hat OpenShift Logging Operator to a new major release version, you must modify the update channel for the Operator subscription.

Prerequisites

  • You have installed the Red Hat OpenShift Logging Operator.

  • You have administrator permissions.

  • You have access to the OKD web console and are viewing the Administrator perspective.

Procedure

  1. Navigate to Operators → Installed Operators.

  2. Select the openshift-logging project.

  3. Click the Red Hat OpenShift Logging Operator.

  4. Click Subscription. In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y, depending on your current update channel.

  5. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.y, and click Save. Note the cluster-logging.v5.y.z version.

Verification

  1. Wait for a few seconds, then click Operators → Installed Operators. Verify that the Red Hat OpenShift Logging Operator version matches the latest cluster-logging.v5.y.z version.

  2. On the Operators → Installed Operators page, wait for the Status field to report Succeeded.
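
If you prefer the CLI, the same channel change can be made by patching the Subscription resource directly. The following is a minimal sketch, assuming the default openshift-logging namespace; substitute your subscription name and target channel:

     $ oc -n openshift-logging patch subscription <subscription_name> --type merge --patch '{"spec":{"channel":"stable-5.y"}}'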

Updating the Loki Operator

To update the Loki Operator to a new major release version, you must modify the update channel for the Operator subscription.

Prerequisites

  • You have installed the Loki Operator.

  • You have administrator permissions.

  • You have access to the OKD web console and are viewing the Administrator perspective.

Procedure

  1. Navigate to Operators → Installed Operators.

  2. Select the openshift-operators-redhat project.

  3. Click the Loki Operator.

  4. Click Subscription. In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y, depending on your current update channel.

  5. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.y, and click Save. Note the loki-operator.v5.y.z version.

Verification

  1. Wait for a few seconds, then click Operators → Installed Operators. Verify that the Loki Operator version matches the latest loki-operator.v5.y.z version.

  2. On the Operators → Installed Operators page, wait for the Status field to report Succeeded.
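
You can also confirm the installed Operator version and status from the CLI by listing the cluster service versions in the Operator namespace; a minimal sketch:

     $ oc -n openshift-operators-redhat get clusterserviceversion

Check that the loki-operator.v5.y.z entry reports the expected version and a PHASE of Succeeded.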

Updating the OpenShift Elasticsearch Operator

To update the OpenShift Elasticsearch Operator to the current version, you must modify the subscription.

The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.

Prerequisites

  • If you are using Elasticsearch as the default log store, and Kibana as the UI, update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator.

    If you update the Operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To fix this issue, delete the Red Hat OpenShift Logging Operator pod (see the sketch after this list). When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again.

  • The Logging status is healthy:

    • All pods have a ready status.

    • The Elasticsearch cluster is healthy.

  • Your Elasticsearch and Kibana data is backed up.

  • You have administrator permissions.

  • You have installed the OpenShift CLI (oc) for the verification steps.
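
If you need to delete the Red Hat OpenShift Logging Operator pod to recover from an out-of-order update, the following is a minimal sketch. It assumes the default openshift-logging namespace and the default name=cluster-logging-operator pod label; the Operator Deployment recreates the pod automatically:

     $ oc -n openshift-logging delete pod -l name=cluster-logging-operator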

Procedure

  1. In the OKD web console, click Operators → Installed Operators.

  2. Select the openshift-operators-redhat project.

  3. Click OpenShift Elasticsearch Operator.

  4. Click Subscription → Channel.

  5. In the Change Subscription Update Channel window, select stable-5.y and click Save. Note the elasticsearch-operator.v5.y.z version.

  6. Wait for a few seconds, then click Operators → Installed Operators. Verify that the OpenShift Elasticsearch Operator version matches the latest elasticsearch-operator.v5.y.z version.

  7. On the Operators → Installed Operators page, wait for the Status field to report Succeeded.
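
To confirm the channel change from the CLI, you can read it back from the Subscription resource; a sketch, assuming the subscription is named elasticsearch-operator:

     $ oc -n openshift-operators-redhat get subscription elasticsearch-operator -o jsonpath='{.spec.channel}'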

Verification

  1. Verify that all Elasticsearch pods have a Ready status by entering the following command and observing the output (a combined sketch of these health checks follows at the end of this section):

     $ oc get pod -n openshift-logging --selector component=elasticsearch

    Example output

     NAME                                            READY   STATUS    RESTARTS   AGE
     elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
     elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
     elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
  2. Verify that the Elasticsearch cluster status is green by entering the following command and observing the output:

     $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health

    Example output

     {
       "cluster_name" : "elasticsearch",
       "status" : "green",
     }
  3. Verify that the Elasticsearch cron jobs are created by entering the following commands and observing the output:

     $ oc project openshift-logging

     $ oc get cronjob

    Example output

     NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
     elasticsearch-im-app     */15 * * * *   False     0        <none>          56s
     elasticsearch-im-audit   */15 * * * *   False     0        <none>          56s
     elasticsearch-im-infra   */15 * * * *   False     0        <none>          56s
  4. Verify that the log store is updated to the correct version and the indices are green by entering the following command and observing the output:

     $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices

    Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices:

    Sample output with indices in a green status

     Tue Jun 30 14:30:54 UTC 2020
     health status index                        uuid                   pri rep docs.count docs.deleted store.size pri.store.size
     green  open   infra-000008                 bnBvUFEXTWi92z3zWAzieQ 3   1   222195     0            289        144
     green  open   infra-000004                 rtDSzoqsSl6saisSK7Au1Q 3   1   226717     0            297        148
     green  open   infra-000012                 RSf_kUwDSR2xEuKRZMPqZQ 3   1   227623     0            295        147
     green  open   .kibana_7                    1SJdCqlZTPWlIAaOUd78yg 1   1   4          0            0          0
     green  open   infra-000010                 iXwL3bnqTuGEABbUDa6OVw 3   1   248368     0            317        158
     green  open   infra-000009                 YN9EsULWSNaxWeeNvOs0RA 3   1   258799     0            337        168
     green  open   infra-000014                 YP0U6R7FQ_GVQVQZ6Yh9Ig 3   1   223788     0            292        146
     green  open   infra-000015                 JRBbAbEmSMqK5X40df9HbQ 3   1   224371     0            291        145
     green  open   .orphaned.2020.06.30         n_xQC2dWQzConkvQqei3YA 3   1   9          0            0          0
     green  open   infra-000007                 llkkAVSzSOmosWTSAJM_hg 3   1   228584     0            296        148
     green  open   infra-000005                 d9BoGQdiQASsS3BBFm2iRA 3   1   227987     0            297        148
     green  open   infra-000003                 1-goREK1QUKlQPAIVkWVaQ 3   1   226719     0            295        147
     green  open   .security                    zeT65uOuRTKZMjg_bbUc1g 1   1   5          0            0          0
     green  open   .kibana-377444158_kubeadmin  wvMhDwJkR-mRZQO84K0gUQ 3   1   1          0            0          0
     green  open   infra-000006                 5H-KBSXGQKiO7hdapDE23g 3   1   226676     0            295        147
     green  open   infra-000001                 eH53BQ-bSxSWR5xYZB6lVg 3   1   341800     0            443        220
     green  open   .kibana-6                    RVp7TemSSemGJcsSUmuf3A 1   1   4          0            0          0
     green  open   infra-000011                 J7XWBauWSTe0jnzX02fU6A 3   1   226100     0            293        146
     green  open   app-000001                   axSAFfONQDmKwatkjPXdtw 3   1   103186     0            126        57
     green  open   infra-000016                 m9c1iRLtStWSF1GopaRyCg 3   1   13685      0            19         9
     green  open   infra-000002                 Hz6WvINtTvKcQzw-ewmbYg 3   1   228994     0            296        148
     green  open   infra-000013                 KR9mMFUpQl-jraYtanyIGw 3   1   228166     0            298        148
     green  open   audit-000001                 eERqLdLmQOiQDFES1LBATQ 3   1   0          0            0          0
  5. Verify that the log visualizer is updated to the correct version by entering the following command and observing the output:

     $ oc get kibana kibana -o json

    Verify that the output includes a Kibana pod with the ready status:

    Sample output with a ready Kibana pod

     [
       {
         "clusterCondition": {
           "kibana-5fdd766ffd-nb2jj": [
             {
               "lastTransitionTime": "2020-06-30T14:11:07Z",
               "reason": "ContainerCreating",
               "status": "True",
               "type": ""
             },
             {
               "lastTransitionTime": "2020-06-30T14:11:07Z",
               "reason": "ContainerCreating",
               "status": "True",
               "type": ""
             }
           ]
         },
         "deployment": "kibana",
         "pods": {
           "failed": [],
           "notReady": [],
           "ready": []
         },
         "replicaSets": [
           "kibana-5fdd766ffd"
         ],
         "replicas": 1
       }
     ]
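
The first two verification steps can also be combined into a small shell script. This is a convenience sketch, not part of the documented procedure; it assumes the default openshift-logging namespace and the component=elasticsearch pod label used earlier:

     #!/bin/bash
     # Hypothetical helper: check Elasticsearch pod readiness and cluster health.
     NS=openshift-logging

     # Wait for every Elasticsearch pod to report Ready (up to 5 minutes).
     oc -n "$NS" wait --for=condition=Ready pod --selector component=elasticsearch --timeout=300s

     # Run the health script inside the first Elasticsearch pod and check for "green" in the output.
     POD=$(oc -n "$NS" get pod --selector component=elasticsearch -o jsonpath='{.items[0].metadata.name}')
     oc -n "$NS" exec -c elasticsearch "$POD" -- health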