Moving OpenShift Logging resources with node selectors

You can use node selectors to deploy the Elasticsearch and Kibana pods to different nodes.

Moving OpenShift Logging resources

You can configure the Cluster Logging Operator to deploy the pods for OpenShift Logging components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.

For example, you can move the Elasticsearch pods to a separate node because of their high CPU, memory, and disk requirements.
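A nodeSelector only schedules pods onto nodes that already carry the matching label. As a minimal sketch, assuming you are labeling an existing node by hand rather than through a machine set, and with <node-name> as a placeholder, you can apply the infra label with the standard oc label command:

    $ oc label node <node-name> node-role.kubernetes.io/infra=""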

Prerequisites

  • OpenShift Logging and Elasticsearch must be installed. These features are not installed by default.
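    If you are not sure whether these operators are present, one quick check, sketched here on the assumption that they were installed through Operator Lifecycle Manager, is to list the ClusterServiceVersions in the openshift-logging namespace:

      $ oc get csv -n openshift-logging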

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    ...
    spec:
      collection:
        logs:
          fluentd:
            resources: null
          type: fluentd
      logStore:
        elasticsearch:
          nodeCount: 3
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          redundancyPolicy: SingleRedundancy
          resources:
            limits:
              cpu: 500m
              memory: 16Gi
            requests:
              cpu: 500m
              memory: 16Gi
          storage: {}
        type: elasticsearch
      managementState: Managed
      visualization:
        kibana:
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    ...
(1) Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the labels that are applied to the node.
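For illustration only, if your target nodes carried a custom label such as logging: "true" (a hypothetical label, not one used elsewhere in this procedure), the equivalent <key>: <value> form of the selector would be:

      nodeSelector:
        logging: "true"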

Verification

To verify that a component has moved, you can use the oc get pod -o wide command.

For example:

  • You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

    $ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

    Example output

    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>
  • You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

    $ oc get nodes

    Example output

    NAME STATUS ROLES AGE VERSION
    ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.21.0
    ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.21.0
    ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.21.0
    ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.21.0
    ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.21.0
    ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.21.0
    ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.21.0

    Note that the node has a node-role.kubernetes.io/infra: '' label:

    $ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

    Example output

    kind: Node
    apiVersion: v1
    metadata:
      name: ip-10-0-139-48.us-east-2.compute.internal
      selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
      uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
      resourceVersion: '39083'
      creationTimestamp: '2020-04-13T19:07:55Z'
      labels:
        node-role.kubernetes.io/infra: ''
    ...
  • To move the Kibana pod, edit the ClusterLogging CR to add a node selector (an oc patch alternative is sketched after this walkthrough):

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    ...
    spec:
    ...
      visualization:
        kibana:
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    (1) Add a node selector to match the label in the node specification.
  • After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

    $ oc get pods

    Example output

    NAME READY STATUS RESTARTS AGE
    cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m
    fluentd-42dzz 1/1 Running 0 28m
    fluentd-d74rq 1/1 Running 0 28m
    fluentd-m5vr9 1/1 Running 0 28m
    fluentd-nkxl7 1/1 Running 0 28m
    fluentd-pdvqb 1/1 Running 0 28m
    fluentd-tflh6 1/1 Running 0 28m
    kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s
    kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s
  • The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

    $ oc get pod kibana-7d85dcffc8-bfpfp -o wide

    Example output

    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>
  • After a few moments, the original Kibana pod is removed.

    $ oc get pods

    Example output

    NAME READY STATUS RESTARTS AGE
    cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m
    fluentd-42dzz 1/1 Running 0 29m
    fluentd-d74rq 1/1 Running 0 29m
    fluentd-m5vr9 1/1 Running 0 29m
    fluentd-nkxl7 1/1 Running 0 29m
    fluentd-pdvqb 1/1 Running 0 29m
    fluentd-tflh6 1/1 Running 0 29m
    kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s
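
As an alternative to running oc edit in the step above, you can apply the same node selector non-interactively with a merge patch. This is a sketch rather than part of the recorded procedure; it assumes the default CR name instance in the openshift-logging project:

    $ oc patch clusterlogging instance -n openshift-logging --type merge \
        -p '{"spec":{"visualization":{"kibana":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}}}}'

After the patch is applied, the Kibana pod is rescheduled in the same way as shown above.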