Common configuration options

Metering is a deprecated feature. Deprecated functionality is still included in OKD and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within OKD, refer to the Deprecated and removed features section of the OKD release notes.

Resource requests and limits

You can adjust the CPU, memory, and storage resource requests and limits for pods and volumes. The default-resource-limits.yaml file below provides an example of setting resource requests and limits for each component.

  apiVersion: metering.openshift.io/v1
  kind: MeteringConfig
  metadata:
    name: "operator-metering"
  spec:
    reporting-operator:
      spec:
        resources:
          limits:
            cpu: 1
            memory: 500Mi
          requests:
            cpu: 500m
            memory: 100Mi
    presto:
      spec:
        coordinator:
          resources:
            limits:
              cpu: 4
              memory: 4Gi
            requests:
              cpu: 2
              memory: 2Gi
        worker:
          replicas: 0
          resources:
            limits:
              cpu: 8
              memory: 8Gi
            requests:
              cpu: 4
              memory: 2Gi
    hive:
      spec:
        metastore:
          resources:
            limits:
              cpu: 4
              memory: 2Gi
            requests:
              cpu: 500m
              memory: 650Mi
          storage:
            class: null
            create: true
            size: 5Gi
        server:
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 500Mi
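
To apply these settings, you can save the example as default-resource-limits.yaml and update the MeteringConfig custom resource with oc apply. This is a minimal sketch that assumes metering is installed in the openshift-metering namespace:

  $ oc apply -f default-resource-limits.yaml --namespace openshift-metering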

Node selectors

You can run the metering components on specific sets of nodes. Set the nodeSelector on a metering component to control where the component is scheduled. The node-selectors.yaml file below provides an example of setting node selectors for each component.

Add the openshift.io/node-selector: "" namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. Specify "" as the annotation value.

  apiVersion: metering.openshift.io/v1
  kind: MeteringConfig
  metadata:
    name: "operator-metering"
  spec:
    reporting-operator:
      spec:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" (1)
    presto:
      spec:
        coordinator:
          nodeSelector:
            "node-role.kubernetes.io/infra": "" (1)
        worker:
          nodeSelector:
            "node-role.kubernetes.io/infra": "" (1)
    hive:
      spec:
        metastore:
          nodeSelector:
            "node-role.kubernetes.io/infra": "" (1)
        server:
          nodeSelector:
            "node-role.kubernetes.io/infra": "" (1)
(1) Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use key-value pairs, based on the value specified for the node.
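
For example, instead of the node-role.kubernetes.io/infra label shown above, you can schedule a component onto nodes that carry a custom label. This is a hypothetical sketch; the metering-node label and the <node-name> placeholder are assumptions used only for illustration:

  $ oc label node <node-name> metering-node="true"

Then reference the same key-value pair in the component's nodeSelector:

  reporting-operator:
    spec:
      nodeSelector:
        metering-node: "true"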

Add the openshift.io/node-selector: "" namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. When the openshift.io/node-selector annotation is set on the project, the value is used in preference to the value of the spec.defaultNodeSelector field in the cluster-wide Scheduler object.
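
For reference, the following is a minimal sketch of a namespace object that carries this annotation. It assumes metering is installed in a namespace named openshift-metering; adjust the name to match your installation:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: openshift-metering
    annotations:
      openshift.io/node-selector: ""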

Verification

You can verify the metering node selectors by performing any of the following checks:

  • Verify that all pods for metering are correctly scheduled on the nodes that are configured in the MeteringConfig custom resource:

    1. Check all pods in the openshift-metering namespace:

      $ oc --namespace openshift-metering get pods -o wide

      The output shows the NODE and corresponding IP for each pod running in the openshift-metering namespace.

      Example output

      NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE                                         NOMINATED NODE   READINESS GATES
      hive-metastore-0                      1/2     Running   0          4m33s   10.129.2.26   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
      hive-server-0                         2/3     Running   0          4m21s   10.128.2.26   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
      metering-operator-964b4fb55-4p699     2/2     Running   0          7h30m   10.131.0.33   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
      nfs-server                            1/1     Running   0          7h30m   10.129.2.24   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
      presto-coordinator-0                  2/2     Running   0          4m8s    10.131.0.35   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
      reporting-operator-869b854c78-8g2x5   1/2     Running   0          7h27m   10.128.2.25   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
    2. Compare the NODE values from the previous output to the node NAME values in your cluster:

      $ oc get nodes

      Example output

      NAME                                         STATUS   ROLES    AGE   VERSION
      ip-10-0-147-106.us-east-2.compute.internal   Ready    master   14h   v1.19.0+6025c28
      ip-10-0-150-175.us-east-2.compute.internal   Ready    worker   14h   v1.19.0+6025c28
      ip-10-0-175-23.us-east-2.compute.internal    Ready    master   14h   v1.19.0+6025c28
      ip-10-0-189-6.us-east-2.compute.internal     Ready    worker   14h   v1.19.0+6025c28
      ip-10-0-205-158.us-east-2.compute.internal   Ready    master   14h   v1.19.0+6025c28
      ip-10-0-210-167.us-east-2.compute.internal   Ready    worker   14h   v1.19.0+6025c28
  • Verify that the node selector configuration in the MeteringConfig custom resource does not conflict with the cluster-wide default node selector configuration in a way that prevents any metering operand pods from being scheduled.

    • Check the cluster-wide Scheduler object for the spec.defaultNodeSelector field, which shows where pods are scheduled by default:

      $ oc get schedulers.config.openshift.io cluster -o yaml
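
      To print only the default node selector value, you can also query the field directly. This is a sketch using the standard jsonpath output option; empty output means no cluster-wide default node selector is set:

      $ oc get schedulers.config.openshift.io cluster -o jsonpath='{.spec.defaultNodeSelector}'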