Log visualization with Kibana

If you are using the Elasticsearch log store, you can use the Kibana console to visualize collected log data.

Using Kibana, you can do the following with your data:

  • Search and browse the data using the Discover tab.

  • Chart and map the data using the Visualize tab.

  • Create and view custom dashboards using the Dashboard tab.

Use and configuration of the Kibana interface is beyond the scope of this documentation. For more information about using the interface, see the Kibana documentation.

The audit logs are not stored in the internal OKD Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs.
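
For example, a minimal ClusterLogForwarder CR that forwards the audit logs to the default log store might look like the following sketch (the pipeline name is illustrative):

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      pipelines:
        - name: enable-audit-default   # illustrative pipeline name
          inputRefs:
            - audit
          outputRefs:
            - default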

Defining Kibana index patterns

An index pattern defines the Elasticsearch indices that you want to visualize. To explore and visualize data in Kibana, you must create an index pattern.

Prerequisites

  • A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices.

    If you can view the pods and logs in the default, kube- and openshift- projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions:

    $ oc auth can-i get pods --subresource log -n <project>

    Example output

    yes

    The audit logs are not stored in the internal OKD Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs.

  • Elasticsearch documents must be indexed before you can create index patterns. This is done automatically, but it might take a few minutes in a new or updated cluster.
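
    To confirm that log documents have been indexed, you can list the indices in the Elasticsearch cluster, as shown in the following sketch. This assumes the indices utility script is available in the Elasticsearch container; substitute the name of one of your Elasticsearch pods:

    $ oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices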

Procedure

To define index patterns and create visualizations in Kibana:

  1. In the OKD console, click the Application Launcher and select Logging.

  2. Create your Kibana index patterns by clicking Management → Index Patterns → Create index pattern:

    • Each user must manually create index patterns when logging in to Kibana for the first time in order to see the logs for their projects. Users must create an index pattern named app and use the @timestamp time field to view their container logs.

    • Each admin user must create index patterns for the app, infra, and audit indices, using the @timestamp time field, when logging in to Kibana for the first time.

  3. Create Kibana Visualizations from the new index patterns.

Viewing cluster logs in Kibana

You can view cluster logs in the Kibana web console. The methods for viewing and visualizing your data in Kibana are beyond the scope of this documentation. For more information, refer to the Kibana documentation.

Prerequisites

  • The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

  • Kibana index patterns must exist.

  • A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices.

    If you can view the pods and logs in the default, kube- and openshift- projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions:

    $ oc auth can-i get pods --subresource log -n <project>

    Example output

    yes

    The audit logs are not stored in the internal OKD Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs.

Procedure

To view logs in Kibana:

  1. In the OKD console, click the Application Launcher and select Logging.

  2. Log in using the same credentials you use to log in to the OKD console.

    The Kibana interface launches.

  3. In Kibana, click Discover.

  4. Select the index pattern you created from the drop-down menu in the top-left corner: app, audit, or infra.

    The log data displays as time-stamped documents.

  5. Expand one of the time-stamped documents.

  6. Click the JSON tab to display the log entry for that document.

    Sample infrastructure log entry in Kibana

    {
      "_index": "infra-000001",
      "_type": "_doc",
      "_id": "YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3",
      "_version": 1,
      "_score": null,
      "_source": {
        "docker": {
          "container_id": "f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1"
        },
        "kubernetes": {
          "container_name": "registry-server",
          "namespace_name": "openshift-marketplace",
          "pod_name": "redhat-marketplace-n64gc",
          "container_image": "registry.redhat.io/redhat/redhat-marketplace-index:v4.7",
          "container_image_id": "registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f",
          "pod_id": "8f594ea2-c866-4b5c-a1c8-a50756704b2a",
          "host": "ip-10-0-182-28.us-east-2.compute.internal",
          "master_url": "https://kubernetes.default.svc",
          "namespace_id": "3abab127-7669-4eb3-b9ef-44c04ad68d38",
          "namespace_labels": {
            "openshift_io/cluster-monitoring": "true"
          },
          "flat_labels": [
            "catalogsource_operators_coreos_com/update=redhat-marketplace"
          ]
        },
        "message": "time=\"2020-09-23T20:47:03Z\" level=info msg=\"serving registry\" database=/database/index.db port=50051",
        "level": "unknown",
        "hostname": "ip-10-0-182-28.internal",
        "pipeline_metadata": {
          "collector": {
            "ipaddr4": "10.0.182.28",
            "inputname": "fluent-plugin-systemd",
            "name": "fluentd",
            "received_at": "2020-09-23T20:47:15.007583+00:00",
            "version": "1.7.4 1.6.0"
          }
        },
        "@timestamp": "2020-09-23T20:47:03.422465+00:00",
        "viaq_msg_id": "YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3",
        "openshift": {
          "labels": {
            "logging": "infra"
          }
        }
      },
      "fields": {
        "@timestamp": [
          "2020-09-23T20:47:03.422Z"
        ],
        "pipeline_metadata.collector.received_at": [
          "2020-09-23T20:47:15.007Z"
        ]
      },
      "sort": [
        1600894023422
      ]
    }
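
    To narrow the results in the Discover tab, you can enter a query in the Kibana search bar. For example, using field names from the sample document above, the following query restricts the results to logs from a single namespace:

    kubernetes.namespace_name:"openshift-marketplace"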

Configuring Kibana

You can configure the Kibana console by modifying the ClusterLogging custom resource (CR).

Configuring CPU and memory limits

You can adjust both the CPU and memory limits for each of the logging components.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc -n openshift-logging edit ClusterLogging instance

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: openshift-logging
    ...
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeCount: 3
          resources: (1)
            limits:
              memory: 16Gi
            requests:
              cpu: 200m
              memory: 16Gi
          storage:
            storageClassName: "gp2"
            size: "200G"
          redundancyPolicy: "SingleRedundancy"
      visualization:
        type: "kibana"
        kibana:
          resources: (2)
            limits:
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          proxy:
            resources: (2)
              limits:
                memory: 100Mi
              requests:
                cpu: 100m
                memory: 100Mi
          replicas: 2
      collection:
        logs:
          type: "fluentd"
          fluentd:
            resources: (3)
              limits:
                memory: 736Mi
              requests:
                cpu: 200m
                memory: 736Mi

    (1) Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value.
    (2) Specify the CPU and memory limits and requests for the log visualizer as needed.
    (3) Specify the CPU and memory limits and requests for the log collector as needed.
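
    After you save the CR, the Operator redeploys the affected pods with the new settings. As a quick check, you can inspect the resource settings on a running pod. The following sketch assumes you substitute the name of one of your Kibana pods:

    $ oc -n openshift-logging get pod <kibana_pod_name> -o jsonpath='{.spec.containers[*].resources}'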

Scaling redundancy for the log visualizer nodes

You can scale the pod that hosts the log visualizer for redundancy.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    ...
    spec:
      visualization:
        type: "kibana"
        kibana:
          replicas: 1 (1)

    (1) Specify the number of Kibana nodes.
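
    After the change is applied, you can verify the number of running Kibana pods. The following sketch assumes that the Kibana pods carry the component=kibana label:

    $ oc -n openshift-logging get pods -l component=kibana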