Moving logging subsystem resources with node selectors

You can use node selectors to deploy the Elasticsearch and Kibana pods to different nodes.

Moving OpenShift Logging resources

You can configure the Cluster Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.

For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
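
For a node selector to match, the target node must carry the corresponding label. As a minimal sketch, assuming you want to designate an existing worker node as an infrastructure node (<node_name> is a placeholder):

  # Sketch: add the infra role label so that node-role.kubernetes.io/infra: '' node selectors match this node.
  $ oc label node <node_name> node-role.kubernetes.io/infra=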

Prerequisites

  • The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. These features are not installed by default.
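
    One way to confirm that both Operators are installed is to list their ClusterServiceVersions. This is a sketch that assumes the usual installation namespaces, openshift-logging for the Red Hat OpenShift Logging Operator and openshift-operators-redhat for the Elasticsearch Operator; adjust the namespaces if you installed the Operators elsewhere:

      # Sketch: list installed Operator versions in the expected namespaces.
      $ oc get csv -n openshift-logging
      $ oc get csv -n openshift-operators-redhat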

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

     $ oc edit ClusterLogging instance

     apiVersion: logging.openshift.io/v1
     kind: ClusterLogging
     ...
     spec:
       collection:
         logs:
           fluentd:
             resources: null
           type: fluentd
       logStore:
         elasticsearch:
           nodeCount: 3
           nodeSelector: (1)
             node-role.kubernetes.io/infra: ''
           tolerations:
           - effect: NoSchedule
             key: node-role.kubernetes.io/infra
             value: reserved
           - effect: NoExecute
             key: node-role.kubernetes.io/infra
             value: reserved
           redundancyPolicy: SingleRedundancy
           resources:
             limits:
               cpu: 500m
               memory: 16Gi
             requests:
               cpu: 500m
               memory: 16Gi
           storage: {}
         type: elasticsearch
       managementState: Managed
       visualization:
         kibana:
           nodeSelector: (1)
             node-role.kubernetes.io/infra: ''
           tolerations:
           - effect: NoSchedule
             key: node-role.kubernetes.io/infra
             value: reserved
           - effect: NoExecute
             key: node-role.kubernetes.io/infra
             value: reserved
           proxy:
             resources: null
           replicas: 1
           resources: null
         type: kibana
     ...
     (1) Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
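
     The tolerations in this example pair with taints on the infrastructure node. As a minimal sketch, assuming the node carries the node-role.kubernetes.io/infra=reserved taints that these tolerations match (<node_name> is a placeholder), the taints could be applied with:

     # Sketch: taint the infrastructure node so that only pods with a matching toleration are scheduled on it or remain on it.
     $ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoSchedule
     $ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoExecute

     If the node is not tainted, the tolerations are unnecessary but harmless.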

Verification

To verify that a component has moved, you can use the oc get pod -o wide command.

For example:

  • You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

     $ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

    Example output

     NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
     kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
  • You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

     $ oc get nodes

    Example output

     NAME                                         STATUS   ROLES    AGE   VERSION
     ip-10-0-133-216.us-east-2.compute.internal   Ready    master   60m   v1.27.3
     ip-10-0-139-146.us-east-2.compute.internal   Ready    master   60m   v1.27.3
     ip-10-0-139-192.us-east-2.compute.internal   Ready    worker   51m   v1.27.3
     ip-10-0-139-241.us-east-2.compute.internal   Ready    worker   51m   v1.27.3
     ip-10-0-147-79.us-east-2.compute.internal    Ready    worker   51m   v1.27.3
     ip-10-0-152-241.us-east-2.compute.internal   Ready    master   60m   v1.27.3
     ip-10-0-139-48.us-east-2.compute.internal    Ready    infra    51m   v1.27.3

    Note that the node has a node-role.kubernetes.io/infra: '' label:

     $ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

    Example output

     kind: Node
     apiVersion: v1
     metadata:
       name: ip-10-0-139-48.us-east-2.compute.internal
       selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
       uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
       resourceVersion: '39083'
       creationTimestamp: '2020-04-13T19:07:55Z'
       labels:
         node-role.kubernetes.io/infra: ''
     ...
  • To move the Kibana pod, edit the ClusterLogging CR to add a node selector (a non-interactive oc patch sketch follows after this list):

     apiVersion: logging.openshift.io/v1
     kind: ClusterLogging
     ...
     spec:
     ...
       visualization:
         kibana:
           nodeSelector: (1)
             node-role.kubernetes.io/infra: ''
           proxy:
             resources: null
           replicas: 1
           resources: null
         type: kibana
     (1) Add a node selector to match the label in the node specification.
  • After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

     $ oc get pods

    Example output

     NAME                                            READY   STATUS        RESTARTS   AGE
     cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
     elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
     elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
     elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
     fluentd-42dzz                                   1/1     Running       0          28m
     fluentd-d74rq                                   1/1     Running       0          28m
     fluentd-m5vr9                                   1/1     Running       0          28m
     fluentd-nkxl7                                   1/1     Running       0          28m
     fluentd-pdvqb                                   1/1     Running       0          28m
     fluentd-tflh6                                   1/1     Running       0          28m
     kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
     kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s
  • The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

     $ oc get pod kibana-7d85dcffc8-bfpfp -o wide

    Example output

     NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
     kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
  • After a few moments, the original Kibana pod is removed.

     $ oc get pods

    Example output

     NAME                                            READY   STATUS    RESTARTS   AGE
     cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
     elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
     elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
     elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
     fluentd-42dzz                                   1/1     Running   0          29m
     fluentd-d74rq                                   1/1     Running   0          29m
     fluentd-m5vr9                                   1/1     Running   0          29m
     fluentd-nkxl7                                   1/1     Running   0          29m
     fluentd-pdvqb                                   1/1     Running   0          29m
     fluentd-tflh6                                   1/1     Running   0          29m
     kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s
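
As a non-interactive alternative to the oc edit step above, the same Kibana node selector could be applied with a merge patch. This is a sketch only; it assumes the ClusterLogging CR is named instance in the openshift-logging project, as in the examples in this section:

     # Sketch: patch only the Kibana node selector; the rest of the CR is left unchanged.
     $ oc -n openshift-logging patch clusterlogging instance --type merge \
         -p '{"spec":{"visualization":{"kibana":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}}}}'

After the patch is applied, the rollout behaves the same as with oc edit: the running Kibana pod terminates and a replacement is scheduled onto a node that matches the selector.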