Viewing Logging status

You can view the status of the Red Hat OpenShift Logging Operator and other logging subsystem components.

Viewing the status of the Red Hat OpenShift Logging Operator

You can view the status of the Red Hat OpenShift Logging Operator.

Prerequisites

  • The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed.

Procedure

  1. Change to the openshift-logging project by running the following command:

    $ oc project openshift-logging
  2. Get the ClusterLogging instance status by running the following command:

    $ oc get clusterlogging instance -o yaml

    Example output

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    # ...
    status: (1)
      collection:
        logs:
          fluentdStatus:
            daemonSet: fluentd (2)
            nodes:
              collector-2rhqp: ip-10-0-169-13.ec2.internal
              collector-6fgjh: ip-10-0-165-244.ec2.internal
              collector-6l2ff: ip-10-0-128-218.ec2.internal
              collector-54nx5: ip-10-0-139-30.ec2.internal
              collector-flpnn: ip-10-0-147-228.ec2.internal
              collector-n2frh: ip-10-0-157-45.ec2.internal
            pods:
              failed: []
              notReady: []
              ready:
              - collector-2rhqp
              - collector-54nx5
              - collector-6fgjh
              - collector-6l2ff
              - collector-flpnn
              - collector-n2frh
      logstore: (3)
        elasticsearchStatus:
        - ShardAllocationEnabled: all
          cluster:
            activePrimaryShards: 5
            activeShards: 5
            initializingShards: 0
            numDataNodes: 1
            numNodes: 1
            pendingTasks: 0
            relocatingShards: 0
            status: green
            unassignedShards: 0
          clusterName: elasticsearch
          nodeConditions:
            elasticsearch-cdm-mkkdys93-1:
          nodeCount: 1
          pods:
            client:
              failed:
              notReady:
              ready:
              - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
            data:
              failed:
              notReady:
              ready:
              - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
            master:
              failed:
              notReady:
              ready:
              - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
      visualization: (4)
        kibanaStatus:
        - deployment: kibana
          pods:
            failed: []
            notReady: []
            ready:
            - kibana-7fb4fd4cc9-f2nls
          replicaSets:
          - kibana-7fb4fd4cc9
          replicas: 1
    (1) In the output, the cluster status fields appear in the status stanza.
    (2) Information on the Fluentd pods.
    (3) Information on the Elasticsearch pods, including Elasticsearch cluster health: green, yellow, or red.
    (4) Information on the Kibana pods.
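
If you only need a specific field rather than the full YAML, you can extract it with a JSONPath query. The field paths in the following sketch are taken from the example output above and might differ between logging versions, so adjust them to match your ClusterLogging status:

  # Print only the Elasticsearch cluster health (green, yellow, or red).
  $ oc get clusterlogging instance -n openshift-logging \
      -o jsonpath='{.status.logstore.elasticsearchStatus[0].cluster.status}{"\n"}'

  # Print the list of collector pods that are ready.
  $ oc get clusterlogging instance -n openshift-logging \
      -o jsonpath='{.status.collection.logs.fluentdStatus.pods.ready}{"\n"}'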

Example condition messages

The following are examples of some condition messages from the Status.Nodes section of the ClusterLogging instance.

A status message similar to the following indicates a node has exceeded the configured low watermark and no shard will be allocated to this node:

Example output

  nodes:
  - conditions:
    - lastTransitionTime: 2019-03-15T15:57:22Z
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not
        be allocated on this node.
      reason: Disk Watermark Low
      status: "True"
      type: NodeStorage
    deploymentName: example-elasticsearch-clientdatamaster-0-1
    upgradeStatus: {}
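
To confirm how much disk space the node is actually consuming, you can check the data mount from inside the Elasticsearch container. This is a sketch: the component=elasticsearch label and the /elasticsearch/persistent mount path are assumptions based on a typical OpenShift Elasticsearch deployment and might differ in your environment.

  # List the Elasticsearch pods, then check disk usage on the data mount of one of them.
  $ oc get pods -n openshift-logging -l component=elasticsearch -o name

  $ oc exec -n openshift-logging -c elasticsearch <pod_name> -- df -h /elasticsearch/persistent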

A status message similar to the following indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes:

Example output

  nodes:
  - conditions:
    - lastTransitionTime: 2019-03-15T16:04:45Z
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated
        from this node.
      reason: Disk Watermark High
      status: "True"
      type: NodeStorage
    deploymentName: cluster-logging-operator
    upgradeStatus: {}
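
The remaining examples in this section use describe-style output with capitalized field names. You can produce a similar view of the same status information with oc describe, for example:

  $ oc describe clusterlogging instance -n openshift-logging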

A status message similar to the following indicates the Elasticsearch node selector in the CR does not match any nodes in the cluster:

Example output

  Elasticsearch Status:
    Shard Allocation Enabled:  shard allocation unknown
    Cluster:
      Active Primary Shards:  0
      Active Shards:          0
      Initializing Shards:    0
      Num Data Nodes:         0
      Num Nodes:              0
      Pending Tasks:          0
      Relocating Shards:      0
      Status:                 cluster health unknown
      Unassigned Shards:      0
    Cluster Name:             elasticsearch
    Node Conditions:
      elasticsearch-cdm-mkkdys93-1:
        Last Transition Time:  2019-06-26T03:37:32Z
        Message:               0/5 nodes are available: 5 node(s) didn't match node selector.
        Reason:                Unschedulable
        Status:                True
        Type:                  Unschedulable
      elasticsearch-cdm-mkkdys93-2:
    Node Count:  2
    Pods:
      Client:
        Failed:
        Not Ready:
          elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
          elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
        Ready:
      Data:
        Failed:
        Not Ready:
          elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
          elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
        Ready:
      Master:
        Failed:
        Not Ready:
          elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
          elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
        Ready:
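
To investigate a node selector mismatch, compare the node selector that is set in the CR with the labels that actually exist on your nodes. The .spec.logStore.elasticsearch.nodeSelector path in this sketch is an assumption based on the usual ClusterLogging spec layout; adjust it to match your CR:

  # Show the node selector configured for the Elasticsearch log store.
  $ oc get clusterlogging instance -n openshift-logging \
      -o jsonpath='{.spec.logStore.elasticsearch.nodeSelector}{"\n"}'

  # Show the labels on each node so you can spot the mismatch.
  $ oc get nodes --show-labels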

A status message similar to the following indicates that the requested PVC could not bind to a persistent volume (PV):

Example output

  Node Conditions:
    elasticsearch-cdm-mkkdys93-1:
      Last Transition Time:  2019-06-26T03:37:32Z
      Message:               pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
      Reason:                Unschedulable
      Status:                True
      Type:                  Unschedulable
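
To see why a claim stays unbound, list the persistent volume claims in the project and inspect the events on the claim that is reported as Pending:

  $ oc get pvc -n openshift-logging

  $ oc describe pvc <pvc_name> -n openshift-logging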

A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes:

Example output

  Status:
    Collection:
      Logs:
        Fluentd Status:
          Daemon Set:  fluentd
          Nodes:
          Pods:
            Failed:
            Not Ready:
            Ready:
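
When the Fluentd status reports no nodes and no pods, inspecting the collector daemon set usually shows the scheduling problem directly: the desired pod count drops to 0 when the node selector matches no nodes, and the related events explain why. The daemon set name fluentd is taken from the example output above; adjust it if your deployment uses a different collector name.

  $ oc get daemonset fluentd -n openshift-logging

  $ oc describe daemonset fluentd -n openshift-logging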

Viewing the status of logging subsystem components

You can view the status for a number of logging subsystem components.

Prerequisites

  • The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed.

Procedure

  1. Change to the openshift-logging project.

    $ oc project openshift-logging
  2. View the status of the logging subsystem for Red Hat OpenShift by describing the cluster-logging-operator deployment:

    $ oc describe deployment cluster-logging-operator

    Example output

    Name:                   cluster-logging-operator
    ....
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    NewReplicaSetAvailable
    ....
    Events:
      Type    Reason             Age   From                   Message
      ----    ------             ----  ----                   -------
      Normal  ScalingReplicaSet  62m   deployment-controller  Scaled up replica set cluster-logging-operator-574b8987df to 1
  3. View the status of the logging subsystem replica set:

    1. Get the name of a replica set:

      $ oc get replicaset

      Example output

      NAME                                      DESIRED   CURRENT   READY   AGE
      cluster-logging-operator-574b8987df       1         1         1       159m
      elasticsearch-cdm-uhr537yu-1-6869694fb    1         1         1       157m
      elasticsearch-cdm-uhr537yu-2-857b6d676f   1         1         1       156m
      elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd   1         1         1       155m
      kibana-5bd5544f87                         1         1         1       157m
    2. Get the status of the replica set:

      $ oc describe replicaset cluster-logging-operator-574b8987df

      Example output

      Name:           cluster-logging-operator-574b8987df
      ....
      Replicas:       1 current / 1 desired
      Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
      ....
      Events:
        Type    Reason            Age   From                   Message
        ----    ------            ----  ----                   -------
        Normal  SuccessfulCreate  66m   replicaset-controller  Created pod: cluster-logging-operator-574b8987df-qjhqv
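
As a quick complement to the deployment and replica set views above, you can list every pod in the openshift-logging project and review recent events in one pass; any component that is failing to start shows up immediately:

  $ oc get pods -n openshift-logging

  $ oc get events -n openshift-logging --sort-by='.lastTimestamp'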