Configuring the log visualizer

OKD uses Kibana to display the log data collected by cluster logging.

You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.

Configuring CPU and memory limits

You can adjust both the CPU and memory limits, as well as the requests, for each of the cluster logging components.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance -n openshift-logging

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    ....
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeCount: 2
          resources: (1)
            limits:
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 2Gi
          storage:
            storageClassName: "gp2"
            size: "200G"
          redundancyPolicy: "SingleRedundancy"
      visualization:
        type: "kibana"
        kibana:
          resources: (2)
            limits:
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          proxy:
            resources: (2)
              limits:
                memory: 100Mi
              requests:
                cpu: 100m
                memory: 100Mi
          replicas: 2
      curation:
        type: "curator"
        curator:
          resources: (3)
            limits:
              memory: 200Mi
            requests:
              cpu: 200m
              memory: 200Mi
          schedule: "*/10 * * * *"
      collection:
        logs:
          type: "fluentd"
          fluentd:
            resources: (4)
              limits:
                memory: 736Mi
              requests:
                cpu: 200m
                memory: 736Mi
    (1) Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value.
    (2) Specify the CPU and memory limits and requests for the log visualizer as needed.
    (3) Specify the CPU and memory limits and requests for the log curator as needed.
    (4) Specify the CPU and memory limits and requests for the log collector as needed.
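If you prefer to script this change rather than edit the CR interactively, you can apply it as a merge patch. The following is a minimal sketch, assuming the default ClusterLogging CR named instance in the openshift-logging project; the file name kibana-resources-patch.yaml is hypothetical:

    # kibana-resources-patch.yaml (hypothetical file name)
    # Merge patch that sets only the Kibana resource limits and requests;
    # all other fields of the ClusterLogging CR are left unchanged.
    spec:
      visualization:
        type: "kibana"
        kibana:
          resources:
            limits:
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi

Assuming your oc client supports the --patch-file option, you could apply it with: $ oc patch ClusterLogging instance -n openshift-logging --type=merge --patch-file kibana-resources-patch.yaml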

Scaling redundancy for the log visualizer nodes

You can scale the pod that hosts the log visualizer for redundancy.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance -n openshift-logging

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    ....
    spec:
      visualization:
        type: "kibana"
        kibana:
          replicas: 1 (1)
    (1) Specify the number of Kibana nodes.
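The replica count can likewise be changed without an interactive editor. As a sketch, again assuming the default instance CR in the openshift-logging project, a merge patch such as the following (the file name kibana-replicas-patch.yaml is hypothetical) scales Kibana to two pods:

    # kibana-replicas-patch.yaml (hypothetical file name)
    # Merge patch that changes only the Kibana replica count.
    spec:
      visualization:
        type: "kibana"
        kibana:
          replicas: 2

Assuming your oc client supports the --patch-file option, apply it with: $ oc patch ClusterLogging instance -n openshift-logging --type=merge --patch-file kibana-replicas-patch.yaml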