Moving logging subsystem resources with node selectors

You can use node selectors to deploy the Elasticsearch and Kibana pods to different nodes.

Moving logging subsystem resources

You can configure the Red Hat OpenShift Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Red Hat OpenShift Logging Operator pod from its installed location.

For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
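
If your cluster does not already have a dedicated infrastructure node, one common preparation step is to label a node with the infra role and, optionally, taint it so that only workloads with a matching toleration are scheduled there. The following commands are a minimal sketch only; the node name is a placeholder and the taints are optional, but they correspond to the tolerations shown in the example ClusterLogging CR later in this procedure:

    $ oc label node <node-name> node-role.kubernetes.io/infra=""
    $ oc adm taint nodes <node-name> node-role.kubernetes.io/infra=reserved:NoSchedule
    $ oc adm taint nodes <node-name> node-role.kubernetes.io/infra=reserved:NoExecute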

Prerequisites

  • You have installed the Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance

    Example ClusterLogging CR

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    # ...
    spec:
      logStore:
        elasticsearch:
          nodeCount: 3
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/infra
            value: reserved
          - effect: NoExecute
            key: node-role.kubernetes.io/infra
            value: reserved
          redundancyPolicy: SingleRedundancy
          resources:
            limits:
              cpu: 500m
              memory: 16Gi
            requests:
              cpu: 500m
              memory: 16Gi
          storage: {}
        type: elasticsearch
      managementState: Managed
      visualization:
        kibana:
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/infra
            value: reserved
          - effect: NoExecute
            key: node-role.kubernetes.io/infra
            value: reserved
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    # ...
    (1) Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
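
If you prefer not to open an interactive editor, you can apply the same node selector non-interactively with oc patch. The following is a minimal sketch for the Kibana component only; the merge-patch payload is an illustration, and you would extend it with the tolerations and Elasticsearch settings from the example CR above as needed:

    $ oc patch ClusterLogging instance -n openshift-logging --type merge \
        -p '{"spec":{"visualization":{"kibana":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}}}}'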

Verification

To verify that a component has moved, you can use the oc get pod -o wide command.

For example:

  • You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

    $ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

    Example output

    NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
  • You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

    $ oc get nodes

    Example output

    NAME                                         STATUS   ROLES    AGE   VERSION
    ip-10-0-133-216.us-east-2.compute.internal   Ready    master   60m   v1.28.5
    ip-10-0-139-146.us-east-2.compute.internal   Ready    master   60m   v1.28.5
    ip-10-0-139-192.us-east-2.compute.internal   Ready    worker   51m   v1.28.5
    ip-10-0-139-241.us-east-2.compute.internal   Ready    worker   51m   v1.28.5
    ip-10-0-147-79.us-east-2.compute.internal    Ready    worker   51m   v1.28.5
    ip-10-0-152-241.us-east-2.compute.internal   Ready    master   60m   v1.28.5
    ip-10-0-139-48.us-east-2.compute.internal    Ready    infra    51m   v1.28.5

    Note that the node has a node-role.kubernetes.io/infra: '' label:

    $ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

    Example output

    kind: Node
    apiVersion: v1
    metadata:
      name: ip-10-0-139-48.us-east-2.compute.internal
      selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
      uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
      resourceVersion: '39083'
      creationTimestamp: '2020-04-13T19:07:55Z'
      labels:
        node-role.kubernetes.io/infra: ''
    ...
  • To move the Kibana pod, edit the ClusterLogging CR to add a node selector:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    # ...
    spec:
    # ...
      visualization:
        kibana:
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    (1) Add a node selector to match the label in the node specification.
  • After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

    $ oc get pods

    Example output

    NAME                                            READY   STATUS        RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
    collector-42dzz                                 1/1     Running       0          28m
    collector-d74rq                                 1/1     Running       0          28m
    collector-m5vr9                                 1/1     Running       0          28m
    collector-nkxl7                                 1/1     Running       0          28m
    collector-pdvqb                                 1/1     Running       0          28m
    collector-tflh6                                 1/1     Running       0          28m
    kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
    kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s
  • The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

    $ oc get pod kibana-7d85dcffc8-bfpfp -o wide

    Example output

    NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
  • After a few moments, the original Kibana pod is removed:

    $ oc get pods

    Example output

    NAME                                            READY   STATUS    RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
    collector-42dzz                                 1/1     Running   0          29m
    collector-d74rq                                 1/1     Running   0          29m
    collector-m5vr9                                 1/1     Running   0          29m
    collector-nkxl7                                 1/1     Running   0          29m
    collector-pdvqb                                 1/1     Running   0          29m
    collector-tflh6                                 1/1     Running   0          29m
    kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s
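
As a further check, you can list only the pods that are scheduled on the infrastructure node. This is a minimal sketch that reuses the infra node name from the example above; substitute your own node name:

    $ oc get pods -n openshift-logging -o wide --field-selector spec.nodeName=ip-10-0-139-48.us-east-2.compute.internal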