Configuring the monitoring stack

The OKD 4 installation program provides only a small number of configuration options before installation. Configuring most OKD framework components, including the cluster monitoring stack, happens after installation.

This section explains what configuration is supported, shows how to configure the monitoring stack, and demonstrates several common configuration scenarios.

Prerequisites

  • The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources.

Maintenance and support for monitoring

The supported way of configuring OKD monitoring is by using the options described in this document. Do not use other configurations, because they are unsupported. Configuration paradigms might change across Prometheus releases, and such cases can be handled gracefully only if all configuration possibilities are controlled. If you use configurations other than those described in this section, your changes will disappear, because the cluster-monitoring-operator reconciles any differences: the Operator resets everything to its defined state by default and by design.

Support considerations for monitoring

The following modifications are explicitly not supported:

  • Creating additional ServiceMonitor, PodMonitor, and PrometheusRule objects in the openshift-* and kube-* projects.

  • Modifying any resources or objects deployed in the openshift-monitoring or openshift-user-workload-monitoring projects. The resources created by the OKD monitoring stack are not meant to be used by any other resources, as there are no guarantees about their backward compatibility.

    The Alertmanager configuration is deployed as a secret resource in the openshift-monitoring namespace. If you have enabled a separate Alertmanager instance for user-defined alert routing, an Alertmanager configuration is also deployed as a secret resource in the openshift-user-workload-monitoring namespace. To configure additional routes for any instance of Alertmanager, you need to decode, modify, and then encode that secret. This procedure is a supported exception to the preceding statement; a minimal sketch of the workflow follows this list.

  • Modifying resources of the stack. The OKD monitoring stack ensures its resources are always in the state it expects them to be. If they are modified, the stack will reset them.

  • Deploying user-defined workloads to openshift-* and kube-* projects. These projects are reserved for Red Hat provided components and they should not be used for user-defined workloads.

  • Enabling symptom-based monitoring by using the Probe custom resource definition (CRD) in Prometheus Operator.

Backward compatibility for metrics, recording rules, or alerting rules is not guaranteed.

  • Installing custom Prometheus instances on OKD. A custom instance is a Prometheus custom resource (CR) managed by the Prometheus Operator.
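For the Alertmanager exception noted above, the decode, modify, and encode workflow can look like the following sketch. It assumes the default alertmanager-main secret and alertmanager.yaml key in the openshift-monitoring namespace; adapt the names if your routing configuration lives elsewhere.

  $ oc -n openshift-monitoring get secret alertmanager-main \
      --template='{{ index .data "alertmanager.yaml" }}' | base64 -d > alertmanager.yaml
  # Edit alertmanager.yaml to add your routes, then replace the secret with the new contents.
  $ oc -n openshift-monitoring create secret generic alertmanager-main \
      --from-file=alertmanager.yaml --dry-run=client -o=yaml | \
      oc -n openshift-monitoring replace -f -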

Support policy for monitoring Operators

Monitoring Operators ensure that OKD monitoring resources function as designed and tested. If Cluster Version Operator (CVO) control of an Operator is overridden, the Operator does not respond to configuration changes, reconcile the intended state of cluster objects, or receive updates.

While overriding CVO control for an Operator can be helpful during debugging, this is unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.

Overriding the Cluster Version Operator

You can add the spec.overrides parameter to the CVO configuration to provide a list of overrides to the behavior of the CVO for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set:

    Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.

Setting a CVO override puts the entire cluster in an unsupported state and prevents the monitoring stack from being reconciled to its intended state. This impacts the reliability features built into Operators and prevents updates from being received. Reported issues must be reproduced after removing any overrides for support to proceed.
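For illustration only, a spec.overrides entry that removes the Cluster Monitoring Operator Deployment from CVO management could look like the following sketch. The resource coordinates are assumptions based on a default installation, and applying such an override is unsupported and blocks upgrades.

  apiVersion: config.openshift.io/v1
  kind: ClusterVersion
  metadata:
    name: version
  spec:
    overrides:
    - kind: Deployment
      group: apps
      name: cluster-monitoring-operator
      namespace: openshift-monitoring
      unmanaged: true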

Preparing to configure the monitoring stack

You can configure the monitoring stack by creating and updating monitoring config maps.

Creating a cluster monitoring config map

To configure core OKD monitoring components, you must create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.

When you save your changes to the cluster-monitoring-config ConfigMap object, some or all of the pods in the openshift-monitoring project might be redeployed. It can sometimes take a while for these components to redeploy.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Check whether the cluster-monitoring-config ConfigMap object exists:

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config
  2. If the ConfigMap object does not exist:

    1. Create the following YAML manifest. In this example the file is called cluster-monitoring-config.yaml:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
    2. Apply the configuration to create the ConfigMap object:

      $ oc apply -f cluster-monitoring-config.yaml

Creating a user-defined workload monitoring config map

To configure the components that monitor user-defined projects, you must create the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project.

When you save your changes to the user-workload-monitoring-config ConfigMap object, some or all of the pods in the openshift-user-workload-monitoring project might be redeployed. It can sometimes take a while for these components to redeploy. You can create and configure the config map before you first enable monitoring for user-defined projects, to prevent having to redeploy the pods often.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Check whether the user-workload-monitoring-config ConfigMap object exists:

    $ oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config
  2. If the user-workload-monitoring-config ConfigMap object does not exist:

    1. Create the following YAML manifest. In this example the file is called user-workload-monitoring-config.yaml:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
    2. Apply the configuration to create the ConfigMap object:

      $ oc apply -f user-workload-monitoring-config.yaml

      Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.


Configuring the monitoring stack

In OKD 4.14, you can configure the monitoring stack using the cluster-monitoring-config or user-workload-monitoring-config ConfigMap objects. Config maps configure the Cluster Monitoring Operator (CMO), which in turn configures the components of the stack.

Prerequisites

  • If you are configuring core OKD monitoring components:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the ConfigMap object.

    • To configure core OKD monitoring components:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add your configuration under data/config.yaml as a key-value pair <component_name>: <component_configuration>:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            <component>:
              <configuration_for_the_component>

        Substitute <component> and <configuration_for_the_component> accordingly.

        The following example ConfigMap object configures a persistent volume claim (PVC) for Prometheus. This relates to the Prometheus instance that monitors core OKD components only:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s: (1)
              volumeClaimTemplate:
                spec:
                  storageClassName: fast
                  volumeMode: Filesystem
                  resources:
                    requests:
                      storage: 40Gi
        (1) Defines the Prometheus component, and the subsequent lines define its configuration.
    • To configure components that monitor user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Add your configuration under data/config.yaml as a key-value pair <component_name>: <component_configuration>:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>:
              <configuration_for_the_component>

        Substitute <component> and <configuration_for_the_component> accordingly.

        The following example ConfigMap object configures a data retention period and minimum container resource requests for Prometheus. This relates to the Prometheus instance that monitors user-defined projects only:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus: (1)
              retention: 24h (2)
              resources:
                requests:
                  cpu: 200m (3)
                  memory: 2Gi (4)
        (1) Defines the Prometheus component, and the subsequent lines define its configuration.
        (2) Configures a twenty-four hour data retention period for the Prometheus instance that monitors user-defined projects.
        (3) Defines a minimum resource request of 200 millicores for the Prometheus container.
        (4) Defines a minimum memory resource request of 2 GiB for the Prometheus container.

        The Prometheus config map component is called prometheusK8s in the cluster-monitoring-config ConfigMap object and prometheus in the user-workload-monitoring-config ConfigMap object.

  2. Save the file to apply the changes to the ConfigMap object. The pods affected by the new configuration are restarted automatically.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

    When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
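    As an optional check after saving, you can watch the rollout in the affected project. These commands are generic and do not assume any particular component configuration:

      $ oc -n openshift-monitoring get pods --watch
      $ oc -n openshift-user-workload-monitoring get pods --watch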


Configurable monitoring components

This table shows the monitoring components you can configure and the keys used to specify the components in the cluster-monitoring-config and user-workload-monitoring-config ConfigMap objects.

Table 1. Configurable monitoring components
Component               | cluster-monitoring-config config map key | user-workload-monitoring-config config map key
Prometheus Operator     | prometheusOperator                       | prometheusOperator
Prometheus              | prometheusK8s                            | prometheus
Alertmanager            | alertmanagerMain                         | alertmanager
kube-state-metrics      | kubeStateMetrics                         |
openshift-state-metrics | openshiftStateMetrics                    |
Telemeter Client        | telemeterClient                          |
Prometheus Adapter      | k8sPrometheusAdapter                     |
Thanos Querier          | thanosQuerier                            |
Thanos Ruler            |                                          | thanosRuler

The Prometheus key is called prometheusK8s in the cluster-monitoring-config ConfigMap object and prometheus in the user-workload-monitoring-config ConfigMap object.

Using node selectors to move monitoring components

By using the nodeSelector constraint with labeled nodes, you can move any of the monitoring stack components to specific nodes. By doing so, you can control the placement and distribution of the monitoring components across a cluster.

By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and segregate workloads based on specific requirements or policies.

How node selectors work with other constraints

If you move monitoring components by using node selector constraints, be aware that other constraints to control pod scheduling might exist for a cluster:

  • Topology spread constraints might be in place to control pod placement.

  • Hard anti-affinity rules are in place for Prometheus, Thanos Querier, Alertmanager, and other monitoring components to ensure that multiple pods for these components are always spread across different nodes and are therefore always highly available.

When scheduling pods onto nodes, the pod scheduler tries to satisfy all existing constraints when determining pod placement. That is, all constraints compound when the pod scheduler determines which pods will be placed on which nodes.

Therefore, if you configure a node selector constraint but existing constraints cannot all be satisfied, the pod scheduler cannot match all constraints and will not schedule a pod for placement onto a node.

To maintain resilience and high availability for monitoring components, ensure that enough nodes are available and match all constraints when you configure a node selector constraint to move a component.
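As an illustration of these constraints, the following sketch pins the platform Prometheus pods to nodes that carry a hypothetical monitoring: "true" label. The label name is an assumption, and enough labeled nodes must be available to satisfy the hard anti-affinity rules:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      prometheusK8s:
        nodeSelector:
          monitoring: "true"

The procedure that follows describes the same configuration with generic placeholders.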


Moving monitoring components to different nodes

To specify the nodes in your cluster on which monitoring stack components will run, configure the nodeSelector constraint in the component’s ConfigMap object to match labels assigned to the nodes.

You cannot add a node selector constraint directly to an existing scheduled pod.

Prerequisites

  • If you are configuring core OKD monitoring components:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components:

    $ oc label nodes <node-name> <node-label>
  2. Edit the ConfigMap object:

    • To move a component that monitors core OKD projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Specify the node labels for the nodeSelector constraint for the component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            <component>: (1)
              nodeSelector:
                <node-label-1> (2)
                <node-label-2> (3)
                <...>
        (1) Substitute <component> with the appropriate monitoring stack component name.
        (2) Substitute <node-label-1> with the label you added to the node.
        (3) Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels.

        If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations.

    • To move a component that monitors user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Specify the node labels for the nodeSelector constraint for the component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>: (1)
              nodeSelector:
                <node-label-1> (2)
                <node-label-2> (3)
                <...>
        (1) Substitute <component> with the appropriate monitoring stack component name.
        (2) Substitute <node-label-1> with the label you added to the node.
        (3) Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels.

        If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations.

  3. Save the file to apply the changes. The components specified in the new configuration are moved to the new nodes automatically.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

    When you save changes to a monitoring config map, the pods and other resources in the project might be redeployed. The running monitoring processes in that project might also restart.
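    As an optional verification, and assuming you moved components in the openshift-monitoring project, you can confirm which nodes the pods were scheduled on:

      $ oc -n openshift-monitoring get pods -o wide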


Assigning tolerations to monitoring components

You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes.

Prerequisites

  • If you are configuring core OKD monitoring components:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the ConfigMap object:

    • To assign tolerations to a component that monitors core OKD projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Specify tolerations for the component:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            <component>:
              tolerations:
                <toleration_specification>

        Substitute <component> and <toleration_specification> accordingly.

        For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1. This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the alertmanagerMain component to tolerate the example taint:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            alertmanagerMain:
              tolerations:
              - key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoSchedule"
    • To assign tolerations to a component that monitors user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Specify tolerations for the component:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>:
              tolerations:
                <toleration_specification>

        Substitute <component> and <toleration_specification> accordingly.

        For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1. This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the thanosRuler component to tolerate the example taint:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            thanosRuler:
              tolerations:
              - key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoSchedule"
  2. Save the file to apply the changes. The new component placement configuration is applied automatically.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

    When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
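    As a variant, standard Kubernetes toleration semantics also allow a component to tolerate every taint with a given key, regardless of its value. The following sketch is illustrative only and reuses the hypothetical key1 taint from the examples above:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          alertmanagerMain:
            tolerations:
            - key: "key1"
              operator: "Exists"
              effect: "NoSchedule"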


Setting the body size limit for metrics scraping

By default, no limit exists for the uncompressed body size for data returned from scraped metrics targets. You can set a body size limit to help avoid situations in which Prometheus consumes excessive amounts of memory when scraped targets return a response that contains a large amount of data. In addition, by setting a body size limit, you can reduce the impact that a malicious target might have on Prometheus and on the cluster as a whole.

After you set a value for enforcedBodySizeLimit, the alert PrometheusScrapeBodySizeLimitHit fires when at least one Prometheus scrape target replies with a response body larger than the configured value.

If metrics data scraped from a target has an uncompressed body size exceeding the configured size limit, the scrape fails. Prometheus then considers this target to be down and sets its up metric value to 0, which can trigger the TargetDown alert.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add a value for enforcedBodySizeLimit to data/config.yaml/prometheusK8s to limit the body size that can be accepted per target scrape:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |-
        prometheusK8s:
          enforcedBodySizeLimit: 40MB (1)
    (1) Specify the maximum body size for scraped metrics targets. This enforcedBodySizeLimit example limits the uncompressed size per target scrape to 40 megabytes. Valid numeric values use the Prometheus data size format: B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The default value is 0, which specifies no limit. You can also set the value to automatic to calculate the limit automatically, based on cluster capacity.
  3. Save the file to apply the changes automatically.

    When you save changes to a cluster-monitoring-config config map, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also restart.


Configuring a dedicated service monitor

You can configure OKD core platform monitoring to use dedicated service monitors to collect metrics for the resource metrics pipeline.

When enabled, a dedicated service monitor exposes two additional metrics from the kubelet endpoint and sets the value of the honorTimestamps field to true.

By enabling a dedicated service monitor, you can improve the consistency of Prometheus Adapter-based CPU usage measurements used by, for example, the oc adm top pod command or the Horizontal Pod Autoscaler.

Enabling a dedicated service monitor

You can configure core platform monitoring to use a dedicated service monitor by configuring the dedicatedServiceMonitors key in the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have created the cluster-monitoring-config ConfigMap object.

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add an enabled: true key-value pair as shown in the following sample:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        k8sPrometheusAdapter:
          dedicatedServiceMonitors:
            enabled: true (1)
    (1) Set the value of the enabled field to true to deploy a dedicated service monitor that exposes the kubelet /metrics/resource endpoint.
  3. Save the file to apply the changes automatically.

    When you save changes to a cluster-monitoring-config config map, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also restart.
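    As an optional check after the change rolls out, you can list the service monitors in the openshift-monitoring namespace and look for the dedicated kubelet resource service monitor; its exact name is not guaranteed here, so the filter is only a heuristic:

      $ oc -n openshift-monitoring get servicemonitors | grep kubelet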

Configuring persistent storage

Running cluster monitoring with persistent storage means that your metrics are stored in a persistent volume (PV) and can survive a pod being restarted or recreated. This is ideal if you require your metrics or alerting data to be guarded against data loss. For production environments, configuring persistent storage is highly recommended. Because of the high I/O demands, it is advantageous to use local storage.

Persistent storage prerequisites

  • Dedicate sufficient local persistent storage to ensure that the disk does not become full. How much storage you need depends on the number of pods.

  • Verify that you have a persistent volume (PV) ready to be claimed by the persistent volume claim (PVC), one PV for each replica. Because Prometheus and Alertmanager both have two replicas, you need four PVs to support the entire monitoring stack. The PVs are available from the Local Storage Operator, but not if you have enabled dynamically provisioned storage.

  • Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume.

    If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: Block in the LocalVolume object. Prometheus cannot use raw block volumes.

    Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant.
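If you use the Local Storage Operator, a LocalVolume object similar to the following sketch can provide Filesystem-mode PVs for the monitoring stack. The node name, device path, file system type, and storage class name are assumptions for illustration:

  apiVersion: local.storage.openshift.io/v1
  kind: LocalVolume
  metadata:
    name: monitoring-local-disks
    namespace: openshift-local-storage
  spec:
    nodeSelector:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node_name>
    storageClassDevices:
    - storageClassName: local-storage
      volumeMode: Filesystem
      fsType: xfs
      devicePaths:
      - /dev/vdb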

Configuring a local persistent volume claim

For monitoring components to use a persistent volume (PV), you must configure a persistent volume claim (PVC).

Prerequisites

  • If you are configuring core OKD monitoring components:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the ConfigMap object:

    • To configure a PVC for a component that monitors core OKD projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add your PVC configuration for the component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            <component>:
              volumeClaimTemplate:
                spec:
                  storageClassName: <storage_class>
                  resources:
                    requests:
                      storage: <amount_of_storage>

        See the Kubernetes documentation on PersistentVolumeClaims for information on how to specify volumeClaimTemplate.

        The following example configures a PVC that claims local persistent storage for the Prometheus instance that monitors core OKD components:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              volumeClaimTemplate:
                spec:
                  storageClassName: local-storage
                  resources:
                    requests:
                      storage: 40Gi

        In the above example, the storage class created by the Local Storage Operator is called local-storage.

        The following example configures a PVC that claims local persistent storage for Alertmanager:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            alertmanagerMain:
              volumeClaimTemplate:
                spec:
                  storageClassName: local-storage
                  resources:
                    requests:
                      storage: 10Gi
    • To configure a PVC for a component that monitors user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Add your PVC configuration for the component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>:
              volumeClaimTemplate:
                spec:
                  storageClassName: <storage_class>
                  resources:
                    requests:
                      storage: <amount_of_storage>

        See the Kubernetes documentation on PersistentVolumeClaims for information on how to specify volumeClaimTemplate.

        The following example configures a PVC that claims local persistent storage for the Prometheus instance that monitors user-defined projects:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              volumeClaimTemplate:
                spec:
                  storageClassName: local-storage
                  resources:
                    requests:
                      storage: 40Gi

        In the above example, the storage class created by the Local Storage Operator is called local-storage.

        The following example configures a PVC that claims local persistent storage for Thanos Ruler:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            thanosRuler:
              volumeClaimTemplate:
                spec:
                  storageClassName: local-storage
                  resources:
                    requests:
                      storage: 10Gi

        Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates.

  2. Save the file to apply the changes. The pods affected by the new configuration are restarted automatically and the new storage configuration is applied.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.
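    As an optional check, verify that the new PVCs were created and are bound:

      $ oc -n openshift-monitoring get pvc
      $ oc -n openshift-user-workload-monitoring get pvc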

Resizing a persistent storage volume

OKD does not support resizing an existing persistent storage volume used by StatefulSet resources, even if the underlying StorageClass resource used supports persistent volume sizing. Therefore, even if you update the storage field for an existing persistent volume claim (PVC) with a larger size, this setting will not be propagated to the associated persistent volume (PV).

However, resizing a PV is still possible by using a manual process. If you want to resize a PV for a monitoring component such as Prometheus, Thanos Ruler, or Alertmanager, you can update the appropriate config map in which the component is configured. Then, patch the PVC, and delete the StatefulSet while orphaning its pods. The StatefulSet resource is recreated immediately and automatically updates the size of the volumes mounted in the pods with the new PVC settings. No service disruption occurs during this process.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • If you are configuring core OKD monitoring components:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

    • You have configured at least one PVC for core OKD monitoring components.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

    • You have configured at least one PVC for components that monitor user-defined projects.

Procedure

  1. Edit the ConfigMap object:

    • To resize a PVC for a component that monitors core OKD projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add a new storage size for the PVC configuration for the component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            <component>: (1)
              volumeClaimTemplate:
                spec:
                  storageClassName: <storage_class> (2)
                  resources:
                    requests:
                      storage: <amount_of_storage> (3)
        (1) Specify the core monitoring component.
        (2) Specify the storage class.
        (3) Specify the new size for the storage volume.

        The following example configures a PVC that sets the local persistent storage to 100 gigabytes for the Prometheus instance that monitors core OKD components:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              volumeClaimTemplate:
                spec:
                  storageClassName: local-storage
                  resources:
                    requests:
                      storage: 100Gi

        The following example configures a PVC that sets the local persistent storage for Alertmanager to 40 gigabytes:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            alertmanagerMain:
              volumeClaimTemplate:
                spec:
                  storageClassName: local-storage
                  resources:
                    requests:
                      storage: 40Gi
    • To resize a PVC for a component that monitors user-defined projects:

      You can resize the volumes for the Thanos Ruler and Prometheus instances that monitor user-defined projects.

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Update the PVC configuration for the monitoring component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>: (1)
              volumeClaimTemplate:
                spec:
                  storageClassName: <storage_class> (2)
                  resources:
                    requests:
                      storage: <amount_of_storage> (3)
        (1) Specify the monitoring component for user-defined projects.
        (2) Specify the storage class.
        (3) Specify the new size for the storage volume.

        The following example configures the PVC size to 100 gigabytes for the Prometheus instance that monitors user-defined projects:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              volumeClaimTemplate:
                spec:
                  storageClassName: local-storage
                  resources:
                    requests:
                      storage: 100Gi

        The following example sets the PVC size to 20 gigabytes for Thanos Ruler:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            thanosRuler:
              volumeClaimTemplate:
                spec:
                  storageClassName: local-storage
                  resources:
                    requests:
                      storage: 20Gi

        Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates.

  2. Save the file to apply the changes. The pods affected by the new configuration restart automatically.

    When you save changes to a monitoring config map, the pods and other resources in the related project might be redeployed. The monitoring processes running in that project might also be restarted.

  3. Manually patch every PVC with the updated storage request. The following example resizes the storage request to 100Gi for the PVCs used by the Prometheus component in the openshift-monitoring namespace:

    $ for p in $(oc -n openshift-monitoring get pvc -l app.kubernetes.io/name=prometheus -o jsonpath='{range .items[*]}{.metadata.name} {end}'); do \
        oc -n openshift-monitoring patch pvc/${p} --patch '{"spec": {"resources": {"requests": {"storage":"100Gi"}}}}'; \
      done
  4. Delete the underlying StatefulSet with the --cascade=orphan parameter:

    $ oc delete statefulset -l app.kubernetes.io/name=prometheus --cascade=orphan

Modifying the retention time and size for Prometheus metrics data

By default, Prometheus automatically retains metrics data for 15 days. You can modify the retention time for Prometheus to change how soon the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. If the data reaches this size limit, Prometheus deletes the oldest data first until the disk space used is again below the limit.

Note the following behaviors of these data retention settings:

  • The size-based retention policy applies to all data block directories in the /prometheus directory, including persistent blocks, write-ahead log (WAL) data, and memory-mapped chunks.

  • Data in the /wal and /head_chunks directories counts toward the retention size limit, but Prometheus never purges data from these directories based on size- or time-based retention policies. Thus, if you set a retention size limit lower than the maximum size set for the /wal and /head_chunks directories, you have configured the system not to retain any data blocks in the /prometheus data directories.

  • The size-based retention policy is applied only when Prometheus cuts a new data block, which occurs every two hours after the WAL contains at least three hours of data.

  • If you do not explicitly define values for either retention or retentionSize, retention time defaults to 15 days, and retention size is not set.

  • If you define values for both retention and retentionSize, both values apply. If any data blocks exceed the defined retention time or the defined size limit, Prometheus purges these data blocks.

  • If you define a value for retentionSize and do not define retention, only the retentionSize value applies.

  • If you do not define a value for retentionSize and only define a value for retention, only the retention value applies.

Prerequisites

  • If you are configuring core OKD monitoring components:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • A cluster administrator has enabled monitoring for user-defined projects.

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the ConfigMap object:

    • To modify the retention time and size for the Prometheus instance that monitors core OKD projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add the retention time and size configuration under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              retention: <time_specification> (1)
              retentionSize: <size_specification> (2)
        (1) The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s.
        (2) The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), or EB (exabytes).

        The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance that monitors core OKD components:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              retention: 24h
              retentionSize: 10GB
    • To modify the retention time and size for the Prometheus instance that monitors user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Add the retention time and size configuration under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              retention: <time_specification> (1)
              retentionSize: <size_specification> (2)
        (1) The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s.
        (2) The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), or EB (exabytes).

        The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance that monitors user-defined projects:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              retention: 24h
              retentionSize: 10GB
  2. Save the file to apply the changes. The pods affected by the new configuration restart automatically.

    When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
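    As an optional check, you can confirm that the Cluster Monitoring Operator propagated the settings to the Prometheus custom resource. The resource name k8s is the default for platform monitoring and is assumed here:

      $ oc -n openshift-monitoring get prometheus k8s -o jsonpath='{.spec.retention}{"\n"}{.spec.retentionSize}{"\n"}'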

Modifying the retention time for Thanos Ruler metrics data

By default, for user-defined projects, Thanos Ruler automatically retains metrics data for 24 hours. You can modify the retention time to change how long this data is retained by specifying a time value in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

  • A cluster administrator has enabled monitoring for user-defined projects.

  • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add the retention time configuration under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          retention: <time_specification> (1)
    (1) Specify the retention time in the following format: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s. The default is 24h.

    The following example sets the retention time to 10 days for Thanos Ruler data:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          retention: 10d
  3. Save the file to apply the changes. The pods affected by the new configuration automatically restart.

    Saving changes to a monitoring config map might redeploy the pods and other resources in the related project and restart the running monitoring processes in that project.


Configuring remote write storage

You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics.

Prerequisites

  • If you are configuring core OKD monitoring components:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

  • You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature.

  • You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the same namespace as the Prometheus object for which you configure remote write: the openshift-monitoring namespace for default platform monitoring or the openshift-user-workload-monitoring namespace for user workload monitoring.

    To reduce security risks, use HTTPS and authentication to send metrics to an endpoint.
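    For example, to satisfy the credentials prerequisite for Basic authentication with default platform monitoring, you could create a secret like the following; the secret name and keys are placeholders that your remoteWrite configuration must reference:

      $ oc -n openshift-monitoring create secret generic remote-write-basic-auth \
          --from-literal=username=<username> --from-literal=password=<password>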

Procedure

  1. Edit the ConfigMap object:

    • To configure remote write for the Prometheus instance that monitors core OKD projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add a remoteWrite: section under data/config.yaml/prometheusK8s.

      3. Add an endpoint URL and authentication credentials in this section:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              remoteWrite:
              - url: "https://remote-write-endpoint.example.com" (1)
                <endpoint_authentication_credentials> (2)
        (1) The URL of the remote write endpoint.
        (2) The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods.
      4. Add write relabel configuration values after the authentication credentials:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              remoteWrite:
              - url: "https://remote-write-endpoint.example.com"
                <endpoint_authentication_credentials>
                <write_relabel_configs> (1)
        (1) The write relabel configuration settings.

        For <write_relabel_configs> substitute a list of write relabel configurations for metrics that you want to send to the remote endpoint.

        The following sample shows how to forward a single metric called my_metric:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              remoteWrite:
              - url: "https://remote-write-endpoint.example.com"
                writeRelabelConfigs:
                - sourceLabels: [__name__]
                  regex: 'my_metric'
                  action: keep

        See the Prometheus relabel_config documentation for information about write relabel configuration options.

    • To configure remote write for the Prometheus instance that monitors user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Add a remoteWrite: section under data/config.yaml/prometheus.

      3. Add an endpoint URL and authentication credentials in this section:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              remoteWrite:
              - url: "https://remote-write-endpoint.example.com" (1)
                <endpoint_authentication_credentials> (2)
        (1) The URL of the remote write endpoint.
        (2) The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings below for sample configurations of supported authentication methods.
      4. Add write relabel configuration values after the authentication credentials:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              remoteWrite:
              - url: "https://remote-write-endpoint.example.com"
                <endpoint_authentication_credentials>
                <write_relabel_configs> (1)
        (1) The write relabel configuration settings.

        For <write_relabel_configs> substitute a list of write relabel configurations for metrics that you want to send to the remote endpoint.

        The following sample shows how to forward a single metric called my_metric:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              remoteWrite:
              - url: "https://remote-write-endpoint.example.com"
                writeRelabelConfigs:
                - sourceLabels: [__name__]
                  regex: 'my_metric'
                  action: keep

        See the Prometheus relabel_config documentation for information about write relabel configuration options.

  2. Save the file to apply the changes. The pods affected by the new configuration restart automatically.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

    Saving changes to a monitoring ConfigMap object might redeploy the pods and other resources in the related project. Saving changes might also restart the running monitoring processes in that project.
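
In addition to keeping specific metrics, a write relabel configuration can drop metrics that you do not want to forward. The following sketch is an illustration only: it assumes you want to exclude all metrics whose names begin with go_ from the data that default platform monitoring sends to the remote endpoint, and it uses the Prometheus drop action in place of keep. Adapt the regular expression to your own metric names.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      prometheusK8s:
        remoteWrite:
        - url: "https://remote-write-endpoint.example.com"
          writeRelabelConfigs:
          # Drop every metric whose name matches the regular expression.
          # All other metrics are still forwarded to the remote endpoint.
          - sourceLabels: [__name__]
            regex: 'go_.*'
            action: drop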

Supported remote write authentication settings

You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write.

Authentication method: AWS Signature Version 4
Config map field: sigv4
Description: This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication.

Authentication method: Basic authentication
Config map field: basicAuth
Description: Basic authentication sets the Authorization header on every remote write request with the configured username and password.

Authentication method: Authorization
Config map field: authorization
Description: Authorization sets the Authorization header on every remote write request using the configured token.

Authentication method: OAuth 2.0
Config map field: oauth2
Description: An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint. You cannot use this method simultaneously with authorization, AWS Signature Version 4, or Basic authentication.

Authentication method: TLS client
Config map field: tlsConfig
Description: A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file.

Example remote write authentication settings

The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with default platform monitoring in the openshift-monitoring namespace.

Example 1. Sample YAML for AWS Signature Version 4 authentication

The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-monitoring namespace.

  apiVersion: v1
  kind: Secret
  metadata:
    name: sigv4-credentials
    namespace: openshift-monitoring
  stringData:
    accessKey: <AWS_access_key> (1)
    secretKey: <AWS_secret_key> (2)
  type: Opaque
1The AWS API access key.
2The AWS API secret key.

The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-monitoring namespace:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      prometheusK8s:
        remoteWrite:
        - url: "https://authorization.example.com/api/write"
          sigv4:
            region: <AWS_region> (1)
            accessKey:
              name: sigv4-credentials (2)
              key: accessKey (3)
            secretKey:
              name: sigv4-credentials (2)
              key: secretKey (4)
            profile: <AWS_profile_name> (5)
            roleArn: <AWS_role_arn> (6)
1The AWS region.
2The name of the Secret object containing the AWS API access credentials.
3The key that contains the AWS API access key in the specified Secret object.
4The key that contains the AWS API secret key in the specified Secret object.
5The name of the AWS profile that is being used to authenticate.
6The unique identifier for the Amazon Resource Name (ARN) assigned to your role.

Example 2. Sample YAML for basic authentication

The following shows sample basic authentication settings for a Secret object named rw-basic-auth in the openshift-monitoring namespace:

  apiVersion: v1
  kind: Secret
  metadata:
    name: rw-basic-auth
    namespace: openshift-monitoring
  stringData:
    user: <basic_username> (1)
    password: <basic_password> (2)
  type: Opaque
1The username.
2The password.

The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-monitoring namespace. It assumes that you have already set up authentication credentials for the endpoint.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      prometheusK8s:
        remoteWrite:
        - url: "https://basicauth.example.com/api/write"
          basicAuth:
            username:
              name: rw-basic-auth (1)
              key: user (2)
            password:
              name: rw-basic-auth (1)
              key: password (3)
1The name of the Secret object that contains the authentication credentials.
2The key that contains the username in the specified Secret object.
3The key that contains the password in the specified Secret object.

Example 3. Sample YAML for authentication with a bearer token using a Secret Object

The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-monitoring namespace:

  apiVersion: v1
  kind: Secret
  metadata:
    name: rw-bearer-auth
    namespace: openshift-monitoring
  stringData:
    token: <authentication_token> (1)
  type: Opaque
1The authentication token.

The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-monitoring namespace:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      enableUserWorkload: true
      prometheusK8s:
        remoteWrite:
        - url: "https://authorization.example.com/api/write"
          authorization:
            type: Bearer (1)
            credentials:
              name: rw-bearer-auth (2)
              key: token (3)
1The authentication type of the request. The default value is Bearer.
2The name of the Secret object that contains the authentication credentials.
3The key that contains the authentication token in the specified Secret object.

Example 4. Sample YAML for OAuth 2.0 authentication

The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-monitoring namespace:

  apiVersion: v1
  kind: Secret
  metadata:
    name: oauth2-credentials
    namespace: openshift-monitoring
  stringData:
    id: <oauth2_id> (1)
    secret: <oauth2_secret> (2)
    token: <oauth2_authentication_token> (3)
  type: Opaque
1The OAuth 2.0 ID.
2The OAuth 2.0 secret.
3The OAuth 2.0 token.

The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-monitoring namespace:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      prometheusK8s:
        remoteWrite:
        - url: "https://test.example.com/api/write"
          oauth2:
            clientId:
              secret:
                name: oauth2-credentials (1)
                key: id (2)
            clientSecret:
              name: oauth2-credentials (1)
              key: secret (2)
            tokenUrl: https://example.com/oauth2/token (3)
            scopes: (4)
            - <scope_1>
            - <scope_2>
            endpointParams: (5)
              param1: <parameter_1>
              param2: <parameter_2>
1The name of the corresponding Secret object. Note that clientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object.
2The key that contains the OAuth 2.0 credentials in the specified Secret object.
3The URL used to fetch a token with the specified clientId and clientSecret.
4The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access.
5The OAuth 2.0 authorization request parameters required for the authorization server.

Example 5. Sample YAML for TLS client authentication

The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-monitoring namespace.

  apiVersion: v1
  kind: Secret
  metadata:
    name: mtls-bundle
    namespace: openshift-monitoring
  data:
    ca.crt: <ca_cert> (1)
    client.crt: <client_cert> (2)
    client.key: <client_key> (3)
  type: tls
1The CA certificate in the Prometheus container with which to validate the server certificate.
2The client certificate for authentication with the server.
3The client key.

The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      prometheusK8s:
        remoteWrite:
        - url: "https://remote-write-endpoint.example.com"
          tlsConfig:
            ca:
              secret:
                name: mtls-bundle (1)
                key: ca.crt (2)
            cert:
              secret:
                name: mtls-bundle (1)
                key: client.crt (3)
            keySecret:
              name: mtls-bundle (1)
              key: client.key (4)
1The name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object.
2The key in the specified Secret object that contains the CA certificate for the endpoint.
3The key in the specified Secret object that contains the client certificate for the endpoint.
4The key in the specified Secret object that contains the client key secret.

Additional resources

Adding cluster ID labels to metrics

If you manage multiple OKD clusters and use the remote write feature to send metrics data from these clusters to an external storage location, you can add cluster ID labels to identify the metrics data coming from different clusters. You can then query these labels to identify the source cluster for a metric and distinguish that data from similar metrics data sent by other clusters.

This way, if you manage many clusters for multiple customers and send metrics data to a single centralized storage system, you can use cluster ID labels to query metrics for a particular cluster or customer.

Creating and using cluster ID labels involves three general steps:

  • Configuring the write relabel settings for remote write storage.

  • Adding cluster ID labels to the metrics.

  • Querying these labels to identify the source cluster or customer for a metric.
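
For example, if you add a cluster ID label named cluster_id as shown in the following procedure, a query against the central storage system such as up{cluster_id="production-cluster-1"} returns only the series forwarded by the cluster that carries that label value. The label name and value here are illustrations; use the names that you configure in your own relabel settings.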

Creating cluster ID labels for metrics

You can create cluster ID labels for metrics for default platform monitoring and for user workload monitoring.

For default platform monitoring, you add cluster ID labels for metrics in the write_relabel settings for remote write storage in the cluster-monitoring-config config map in the openshift-monitoring namespace.

For user workload monitoring, you edit the settings in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace.

Prerequisites

  • If you are configuring default platform monitoring components:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

  • You have configured remote write storage.

Procedure

  1. Edit the ConfigMap object:

    • To create cluster ID labels for core OKD metrics:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. In the writeRelabelConfigs: section under data/config.yaml/prometheusK8s/remoteWrite, add cluster ID relabel configuration values:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              remoteWrite:
              - url: "https://remote-write-endpoint.example.com"
                <endpoint_authentication_credentials>
                writeRelabelConfigs: (1)
                - <relabel_config> (2)
        1Add a list of write relabel configurations for metrics that you want to send to the remote endpoint.
        2Substitute the label configuration for the metrics sent to the remote write endpoint.

        The following sample shows how to forward a metric with the cluster ID label cluster_id in default platform monitoring:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              remoteWrite:
              - url: "https://remote-write-endpoint.example.com"
                writeRelabelConfigs:
                - sourceLabels:
                  - __tmp_openshift_cluster_id__ (1)
                  targetLabel: cluster_id (2)
                  action: replace (3)
        1The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__. This temporary label gets replaced by the cluster ID label name that you specify.
        2Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__. The final relabeling step removes labels that use this name.
        3The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified.
    • To create cluster ID labels for user-defined project metrics:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. In the writeRelabelConfigs: section under data/config.yaml/prometheus/remoteWrite, add cluster ID relabel configuration values:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              remoteWrite:
              - url: "https://remote-write-endpoint.example.com"
                <endpoint_authentication_credentials>
                writeRelabelConfigs: (1)
                - <relabel_config> (2)
        1Add a list of write relabel configurations for metrics that you want to send to the remote endpoint.
        2Substitute the label configuration for the metrics sent to the remote write endpoint.

        The following sample shows how to forward a metric with the cluster ID label cluster_id in user-workload monitoring:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              remoteWrite:
              - url: "https://remote-write-endpoint.example.com"
                writeRelabelConfigs:
                - sourceLabels:
                  - __tmp_openshift_cluster_id__ (1)
                  targetLabel: cluster_id (2)
                  action: replace (3)
        1The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__. This temporary label gets replaced by the cluster ID label name that you specify.
        2Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__. The final relabeling step removes labels that use this name.
        3The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified.
  1. Save the file to apply the changes to the ConfigMap object. The pods affected by the updated configuration automatically restart.

    Saving changes to a monitoring ConfigMap object might redeploy the pods and other resources in the related project. Saving changes might also restart the running monitoring processes in that project.

Additional resources

Configuring metrics collection profiles

Using a metrics collection profile is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.

By default, Prometheus collects metrics exposed by all default metrics targets in OKD components. However, you might want Prometheus to collect fewer metrics from a cluster in certain scenarios:

  • If cluster administrators require only alert, telemetry, and console metrics and do not require other metrics to be available.

  • If a cluster increases in size, and the increased size of the default metrics data collected now requires a significant increase in CPU and memory resources.

You can use a metrics collection profile to collect either the default amount of metrics data or a minimal amount of metrics data. When you collect minimal metrics data, basic monitoring features such as alerting continue to work. At the same time, the CPU and memory resources required by Prometheus decrease.

About metrics collection profiles

You can enable one of two metrics collection profiles:

  • full: Prometheus collects metrics data exposed by all platform components. This setting is the default.

  • minimal: Prometheus collects only the metrics data required for platform alerts, recording rules, telemetry, and console dashboards.

Choosing a metrics collection profile

To choose a metrics collection profile for core OKD monitoring components, edit the cluster-monitoring-config ConfigMap object.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You have enabled Technology Preview features by using the FeatureGate custom resource (CR).

  • You have created the cluster-monitoring-config ConfigMap object.

  • You have access to the cluster as a user with the cluster-admin cluster role.

Saving changes to a monitoring config map might restart monitoring processes and redeploy the pods and other resources in the related project. The running monitoring processes in that project might also restart.

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add the metrics collection profile setting under data/config.yaml/prometheusK8s:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          collectionProfile: <metrics_collection_profile_name> (1)
    1The name of the metrics collection profile. The available values are full or minimal. If you do not specify a value or if the collectionProfile key name does not exist in the config map, the default setting of full is used.

    The following example sets the metrics collection profile to minimal for the core platform instance of Prometheus:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          collectionProfile: minimal
  3. Save the file to apply the changes. The pods affected by the new configuration restart automatically.

Additional resources

Controlling the impact of unbound metrics attributes in user-defined projects

Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values.

Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space.
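
As a rough illustration, a hypothetical metric that combines a customer_id label with 10,000 possible values and an endpoint label with 50 possible values can produce up to 10,000 × 50 = 500,000 distinct time series, each of which Prometheus must index and store separately.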

Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects:

  • Limit the number of samples that can be accepted per target scrape in user-defined projects

  • Limit the number of scraped labels, the length of label names, and the length of label values

  • Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped

Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.

Setting scrape sample and label limits for user-defined projects

You can limit the number of samples that can be accepted per target scrape in user-defined projects. You can also limit the number of scraped labels, the length of label names, and the length of label values.

If you set sample or label limits, no further sample data is ingested for that target scrape after the limit is reached.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

  • You have enabled monitoring for user-defined projects.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add the enforcedSampleLimit configuration to data/config.yaml to limit the number of samples that can be accepted per target scrape in user-defined projects:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          enforcedSampleLimit: 50000 (1)
    1A value is required if this parameter is specified. This enforcedSampleLimit example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000.
  3. Add the enforcedLabelLimit, enforcedLabelNameLengthLimit, and enforcedLabelValueLengthLimit configurations to data/config.yaml to limit the number of scraped labels, the length of label names, and the length of label values in user-defined projects:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          enforcedLabelLimit: 500 (1)
          enforcedLabelNameLengthLimit: 50 (2)
          enforcedLabelValueLengthLimit: 600 (3)
    1Specifies the maximum number of labels per scrape. The default value is 0, which specifies no limit.
    2Specifies the maximum length in characters of a label name. The default value is 0, which specifies no limit.
    3Specifies the maximum length in characters of a label value. The default value is 0, which specifies no limit.
  4. Save the file to apply the changes. The limits are applied automatically.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

    When changes are saved to the user-workload-monitoring-config ConfigMap object, the pods and other resources in the openshift-user-workload-monitoring project might be redeployed. The running monitoring processes in that project might also be restarted.

Creating scrape sample alerts

You can create alerts that notify you when:

  • The target cannot be scraped or is not available for the duration specified by the for parameter

  • A scrape sample threshold is reached or exceeded for the duration specified by the for parameter

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

  • You have enabled monitoring for user-defined projects.

  • You have created the user-workload-monitoring-config ConfigMap object.

  • You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml:

    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      labels:
        prometheus: k8s
        role: alert-rules
      name: monitoring-stack-alerts (1)
      namespace: ns1 (2)
    spec:
      groups:
      - name: general.rules
        rules:
        - alert: TargetDown (3)
          annotations:
            message: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service
              }} targets in {{ $labels.namespace }} namespace are down.' (4)
          expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job,
            namespace, service)) > 10
          for: 10m (5)
          labels:
            severity: warning (6)
        - alert: ApproachingEnforcedSamplesLimit (7)
          annotations:
            message: '{{ $labels.container }} container of the {{ $labels.pod }} pod in the {{ $labels.namespace }} namespace consumes {{ $value | humanizePercentage }} of the samples limit budget.' (8)
          expr: scrape_samples_scraped/50000 > 0.8 (9)
          for: 10m (10)
          labels:
            severity: warning (11)
    1Defines the name of the alerting rule.
    2Specifies the user-defined project where the alerting rule will be deployed.
    3The TargetDown alert will fire if the target cannot be scraped or is not available for the for duration.
    4The message that will be output when the TargetDown alert fires.
    5The conditions for the TargetDown alert must be true for this duration before the alert is fired.
    6Defines the severity for the TargetDown alert.
    7The ApproachingEnforcedSamplesLimit alert will fire when the defined scrape sample threshold is reached or exceeded for the specified for duration.
    8The message that will be output when the ApproachingEnforcedSamplesLimit alert fires.
    9The threshold for the ApproachingEnforcedSamplesLimit alert. In this example the alert will fire when the number of samples per target scrape has exceeded 80% of the enforced sample limit of 50000. The for duration must also have passed before the alert will fire. The <number> in the expression scrape_samples_scraped/<number> > <threshold> must match the enforcedSampleLimit value defined in the user-workload-monitoring-config ConfigMap object.
    10The conditions for the ApproachingEnforcedSamplesLimit alert must be true for this duration before the alert is fired.
    11Defines the severity for the ApproachingEnforcedSamplesLimit alert.
  2. Apply the configuration to the user-defined project:

    $ oc apply -f monitoring-stack-alerts.yaml

Additional resources

Configuring external Alertmanager instances

The OKD monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances to route alerts for core OKD projects or user-defined projects.

If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.
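
For example, if you plan to rely entirely on an external instance for platform alerts, you can disable the local Alertmanager in the cluster-monitoring-config config map. The following is a minimal sketch, assuming that your version of the monitoring stack supports the enabled setting for alertmanagerMain; combine it with the additionalAlertmanagerConfigs settings shown in the procedure that follows:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      alertmanagerMain:
        # Disable the local Alertmanager instance so that alerts are routed
        # only to the external Alertmanager instances that you configure.
        enabled: false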

Prerequisites

  • If you are configuring core OKD monitoring components in the openshift-monitoring project:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config config map.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config config map.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the ConfigMap object.

    • To configure additional Alertmanagers for routing alerts from core OKD projects:

      1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add an additionalAlertmanagerConfigs: section under data/config.yaml/prometheusK8s.

      3. Add the configuration details for additional Alertmanagers in this section:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              additionalAlertmanagerConfigs:
              - <alertmanager_specification>

        For <alertmanager_specification>, substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig). The following sample config map configures an additional Alertmanager using a bearer token with client TLS authentication:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              additionalAlertmanagerConfigs:
              - scheme: https
                pathPrefix: /
                timeout: "30s"
                apiVersion: v1
                bearerToken:
                  name: alertmanager-bearer-token
                  key: token
                tlsConfig:
                  key:
                    name: alertmanager-tls
                    key: tls.key
                  cert:
                    name: alertmanager-tls
                    key: tls.crt
                  ca:
                    name: alertmanager-tls
                    key: tls.ca
                staticConfigs:
                - external-alertmanager1-remote.com
                - external-alertmanager1-remote2.com
    • To configure additional Alertmanager instances for routing alerts from user-defined projects:

      1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Add a <component>/additionalAlertmanagerConfigs: section under data/config.yaml/.

      3. Add the configuration details for additional Alertmanagers in this section:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>:
              additionalAlertmanagerConfigs:
              - <alertmanager_specification>

        For <component>, substitute one of two supported external Alertmanager components: prometheus or thanosRuler.

        For <alertmanager_specification>, substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig). The following sample config map configures an additional Alertmanager using Thanos Ruler with a bearer token and client TLS authentication:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            thanosRuler:
              additionalAlertmanagerConfigs:
              - scheme: https
                pathPrefix: /
                timeout: "30s"
                apiVersion: v1
                bearerToken:
                  name: alertmanager-bearer-token
                  key: token
                tlsConfig:
                  key:
                    name: alertmanager-tls
                    key: tls.key
                  cert:
                    name: alertmanager-tls
                    key: tls.crt
                  ca:
                    name: alertmanager-tls
                    key: tls.ca
                staticConfigs:
                - external-alertmanager1-remote.com
                - external-alertmanager1-remote2.com
  1. Save the file to apply the changes to the ConfigMap object. The new configuration is applied automatically.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.
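
The samples in the preceding procedure reference Secret objects named alertmanager-bearer-token and alertmanager-tls but do not show them. The following is a minimal sketch of what such secrets might look like; it assumes that they are created in the same project as the Prometheus or Thanos Ruler instance that uses them (openshift-monitoring for default platform monitoring, openshift-user-workload-monitoring for user-defined projects) and that the key names match the key values referenced in the config map:

  apiVersion: v1
  kind: Secret
  metadata:
    name: alertmanager-bearer-token
    namespace: openshift-monitoring
  stringData:
    # The bearer token used to authenticate with the external Alertmanager.
    token: <bearer_token>
  type: Opaque

  apiVersion: v1
  kind: Secret
  metadata:
    name: alertmanager-tls
    namespace: openshift-monitoring
  data:
    # Base64-encoded client key, client certificate, and CA certificate.
    tls.key: <base64_encoded_client_key>
    tls.crt: <base64_encoded_client_certificate>
    tls.ca: <base64_encoded_ca_certificate>
  type: Opaque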

Configuring secrets for Alertmanager

The OKD monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver.

For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object.

Adding a secret to the Alertmanager configuration

You can add secrets to the Alertmanager configuration for core platform monitoring components by editing the cluster-monitoring-config config map in the openshift-monitoring project.

After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods.
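
As an illustration, a hypothetical Secret named test-secret-basic-auth that holds a password for a receiver requiring Basic HTTP authentication might look like the following sketch. After you add the secret name to the config map as shown in the procedure below, Alertmanager can read the password from /etc/alertmanager/secrets/test-secret-basic-auth/password:

  apiVersion: v1
  kind: Secret
  metadata:
    name: test-secret-basic-auth
    namespace: openshift-monitoring
  stringData:
    # The key name becomes the file name under the mounted secrets directory.
    password: <receiver_password>
  type: Opaque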

Prerequisites

  • If you are configuring core OKD monitoring components in the openshift-monitoring project:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config config map.

    • You have created the secret to be configured in Alertmanager in the openshift-monitoring project.

  • If you are configuring components that monitor user-defined projects:

    • A cluster administrator has enabled monitoring for user-defined projects.

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the secret to be configured in Alertmanager in the openshift-user-workload-monitoring project.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the ConfigMap object.

    • To add a secret configuration to Alertmanager for core platform monitoring:

      1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add a secrets: section under data/config.yaml/alertmanagerMain with the following configuration:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            alertmanagerMain:
              secrets: (1)
              - <secret_name_1> (2)
              - <secret_name_2>
        1This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object.
        2The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line.

        The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            alertmanagerMain:
              secrets:
              - test-secret-basic-auth
              - test-secret-api-token
    • To add a secret configuration to Alertmanager for user-defined project monitoring:

      1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Add a secrets: section under data/config.yaml/alertmanager with the following configuration:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            alertmanager:
              secrets: (1)
              - <secret_name_1> (2)
              - <secret_name_2>
        1This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object.
        2The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line.

        The following sample config map settings configure Alertmanager to use two Secret objects named test-secret and test-api-receiver-token:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            alertmanager:
              enabled: true
              secrets:
              - test-secret
              - test-api-receiver-token

        Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

  1. Save the file to apply the changes to the ConfigMap object. The new configuration is applied automatically.

Attaching additional labels to your time series and alerts

Using the external labels feature of Prometheus, you can attach custom labels to all time series and alerts leaving Prometheus.

Prerequisites

  • If you are configuring core OKD monitoring components:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the ConfigMap object:

    • To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors core OKD projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Define a map of labels you want to add for every metric under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              externalLabels:
                <key>: <value> (1)
        1Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value.

        Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.

        For example, to add metadata about the region and environment to all time series and alerts, use:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              externalLabels:
                region: eu
                environment: prod
    • To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Define a map of labels you want to add for every metric under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              externalLabels:
                <key>: <value> (1)
        1Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value.

        Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.

        In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules.

        For example, to add metadata about the region and environment to all time series and alerts related to user-defined projects, use:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              externalLabels:
                region: eu
                environment: prod
  1. Save the file to apply the changes. The new configuration is applied automatically.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

    When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.

Additional resources

Configuring pod topology spread constraints for monitoring

You can use pod topology spread constraints to control how Prometheus, Thanos Ruler, and Alertmanager pods are spread across a network topology when OKD pods are deployed in multiple availability zones.

Pod topology spread constraints are suitable for controlling pod scheduling within hierarchical topologies in which nodes are spread across different infrastructure levels, such as regions and zones within those regions. Additionally, by being able to schedule pods in different zones, you can improve network latency in certain scenarios.

Additional resources

Setting up pod topology spread constraints for Prometheus

For core OKD platform monitoring, you can set up pod topology spread constraints for Prometheus to fine tune how pod replicas are scheduled to nodes across zones. Doing so helps ensure that Prometheus pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You configure pod topology spread constraints for Prometheus in the cluster-monitoring-config config map.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have created the cluster-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add values for the following settings under data/config.yaml/prometheusK8s to configure pod topology spread constraints:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          topologySpreadConstraints:
          - maxSkew: 1 (1)
            topologyKey: monitoring (2)
            whenUnsatisfiable: DoNotSchedule (3)
            labelSelector:
              matchLabels: (4)
                app.kubernetes.io/name: prometheus
    1Specify a numeric value for maxSkew, which defines the degree to which pods are allowed to be unevenly distributed. This field is required, and the value must be greater than zero. The value specified has a different effect depending on what value you specify for whenUnsatisfiable.
    2Specify a key of node labels for topologyKey. This field is required. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler will try to put a balanced number of pods into each domain.
    3Specify a value for whenUnsatisfiable. This field is required. Available options are DoNotSchedule and ScheduleAnyway. Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
    4Specify a value for matchLabels. This value is used to identify the set of matching pods to which to apply the constraints.
  3. Save the file to apply the changes automatically.

    When you save changes to the cluster-monitoring-config config map, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also restart.

Setting up pod topology spread constraints for Alertmanager

For core OKD platform monitoring, you can set up pod topology spread constraints for Alertmanager to fine tune how pod replicas are scheduled to nodes across zones. Doing so helps ensure that Alertmanager pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You configure pod topology spread constraints for Alertmanager in the cluster-monitoring-config config map.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have created the cluster-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add values for the following settings under data/config.yaml/alertmanagerMain to configure pod topology spread constraints:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          topologySpreadConstraints:
          - maxSkew: 1 (1)
            topologyKey: monitoring (2)
            whenUnsatisfiable: DoNotSchedule (3)
            labelSelector:
              matchLabels: (4)
                app.kubernetes.io/name: alertmanager
    1Specify a numeric value for maxSkew, which defines the degree to which pods are allowed to be unevenly distributed. This field is required, and the value must be greater than zero. The value specified has a different effect depending on what value you specify for whenUnsatisfiable.
    2Specify a key of node labels for topologyKey. This field is required. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler will try to put a balanced number of pods into each domain.
    3Specify a value for whenUnsatisfiable. This field is required. Available options are DoNotSchedule and ScheduleAnyway. Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
    4Specify a value for matchLabels. This value is used to identify the set of matching pods to which to apply the constraints.
  3. Save the file to apply the changes automatically.

    When you save changes to the cluster-monitoring-config config map, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also restart.

Setting up pod topology spread constraints for Thanos Ruler

For user-defined monitoring, you can set up pod topology spread constraints for Thanos Ruler to fine tune how pod replicas are scheduled to nodes across zones. Doing so helps ensure that Thanos Ruler pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You configure pod topology spread constraints for Thanos Ruler in the user-workload-monitoring-config config map.

Prerequisites

  • A cluster administrator has enabled monitoring for user-defined projects.

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

  • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add values for the following settings under data/config.yaml/thanosRuler to configure pod topology spread constraints:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          topologySpreadConstraints:
          - maxSkew: 1 (1)
            topologyKey: monitoring (2)
            whenUnsatisfiable: ScheduleAnyway (3)
            labelSelector:
              matchLabels: (4)
                app.kubernetes.io/name: thanos-ruler
    1Specify a numeric value for maxSkew, which defines the degree to which pods are allowed to be unevenly distributed. This field is required, and the value must be greater than zero. The value specified has a different effect depending on what value you specify for whenUnsatisfiable.
    2Specify a key of node labels for topologyKey. This field is required. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler will try to put a balanced number of pods into each domain.
    3Specify a value for whenUnsatisfiable. This field is required. Available options are DoNotSchedule and ScheduleAnyway. Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
    4Specify a value for matchLabels. This value is used to identify the set of matching pods to which to apply the constraints.
  3. Save the file to apply the changes automatically.

    When you save changes to the user-workload-monitoring-config config map, the pods and other resources in the openshift-user-workload-monitoring project might be redeployed. The running monitoring processes in that project might also restart.

Setting log levels for monitoring components

You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, Thanos Querier, and Thanos Ruler.

The following log levels can be applied to the relevant component in the cluster-monitoring-config and user-workload-monitoring-config ConfigMap objects:

  • debug. Log debug, informational, warning, and error messages.

  • info. Log informational, warning, and error messages.

  • warn. Log warning and error messages only.

  • error. Log error messages only.

The default log level is info.

Prerequisites

  • If you are setting a log level for Alertmanager, Prometheus Operator, Prometheus, or Thanos Querier in the openshift-monitoring project:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are setting a log level for Prometheus Operator, Prometheus, or Thanos Ruler in the openshift-user-workload-monitoring project:

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the ConfigMap object:

    • To set a log level for a component in the openshift-monitoring project:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add logLevel: <log_level> for a component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            <component>: (1)
              logLevel: <log_level> (2)
        1The monitoring stack component for which you are setting a log level. For default platform monitoring, available component values are prometheusK8s, alertmanagerMain, prometheusOperator, and thanosQuerier.
        2The log level to set for the component. The available values are error, warn, info, and debug. The default value is info.
    • To set a log level for a component in the openshift-user-workload-monitoring project:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Add logLevel: <log_level> for a component under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>: (1)
              logLevel: <log_level> (2)
        (1) The monitoring stack component for which you are setting a log level. For user workload monitoring, available component values are alertmanager, prometheus, prometheusOperator, and thanosRuler.
        (2) The log level to apply to the component. The available values are error, warn, info, and debug. The default value is info.
  2. Save the file to apply the changes. The pods for the component restart automatically when you apply the log-level change.

    Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

    When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.

  3. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level in the prometheus-operator deployment in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level"

    Example output

    - --log-level=debug
  4. Check that the pods for the component are running. The following example lists the status of pods in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring get pods

    If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully.

Enabling the query log file for Prometheus

You can configure Prometheus to write all queries that have been run by the engine to a log file. You can do so for default platform monitoring and for user-defined workload monitoring.

Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature.
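As a sketch, the relevant config.yaml snippet for default platform monitoring could look like the following, where /tmp/promql-queries.log is a hypothetical path chosen only for illustration; the path must point to a location that the Prometheus container can write to.

    prometheusK8s:
      queryLogFile: /tmp/promql-queries.log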

Prerequisites

  • If you are enabling the query log file feature for Prometheus in the openshift-monitoring project:

    • You have access to the cluster as a user with the cluster-admin cluster role.

    • You have created the cluster-monitoring-config ConfigMap object.

  • If you are enabling the query log file feature for Prometheus in the openshift-user-workload-monitoring project:

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.

    • You have created the user-workload-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure

  • To set the query log file for Prometheus in the openshift-monitoring project:

    1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

      $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
    2. Add queryLogFile: <path> for prometheusK8s under data/config.yaml:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          prometheusK8s:
            queryLogFile: <path> (1)
      (1) The full path to the file in which queries will be logged.
    3. Save the file to apply the changes.

      When you save changes to a monitoring config map, pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.

    4. Verify that the pods for the component are running. The following sample command lists the status of pods in the openshift-monitoring project:

      $ oc -n openshift-monitoring get pods
    5. Read the query log:

      $ oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>

      Revert the setting in the config map after you have examined the logged query information.

  • To set the query log file for Prometheus in the openshift-user-workload-monitoring project:

    1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

      $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
    2. Add queryLogFile: <path> for prometheus under data/config.yaml:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          prometheus:
            queryLogFile: <path> (1)
      (1) The full path to the file in which queries will be logged.
    3. Save the file to apply the changes.

      Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

      When you save changes to a monitoring config map, pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.

    4. Verify that the pods for the component are running. The following example command lists the status of pods in the openshift-user-workload-monitoring project:

      $ oc -n openshift-user-workload-monitoring get pods
    5. Read the query log:

      $ oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>

      Revert the setting in the config map after you have examined the logged query information.

Additional resources

Enabling query logging for Thanos Querier

For default platform monitoring in the openshift-monitoring project, you can enable the Cluster Monitoring Operator to log all queries run by Thanos Querier.

Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature.
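As an illustrative sketch, the config.yaml snippet with request logging enabled and the log level set to debug could look like the following; the full procedure is described below.

    thanosQuerier:
      enableRequestLogging: true
      logLevel: debug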

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have created the cluster-monitoring-config ConfigMap object.

Procedure

You can enable query logging for Thanos Querier in the openshift-monitoring project:

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add a thanosQuerier section under data/config.yaml and add values as shown in the following example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        thanosQuerier:
          enableRequestLogging: <value> (1)
          logLevel: <value> (2)
    (1) Set the value to true to enable logging and false to disable logging. The default value is false.
    (2) Set the value to debug, info, warn, or error. If no value exists for logLevel, the log level defaults to error.
  3. Save the file to apply the changes.

    When you save changes to a monitoring config map, pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.

Verification

  1. Verify that the Thanos Querier pods are running. The following sample command lists the status of pods in the openshift-monitoring project:

    $ oc -n openshift-monitoring get pods
  2. Run a test query using the following sample commands as a model:

    $ token=`oc create token prometheus-k8s -n openshift-monitoring`
    $ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'
  3. Run the following command to read the query log:

    $ oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query

    Because the thanos-querier pods are highly available (HA) pods, you might be able to see logs in only one pod.

  4. After you examine the logged query information, disable query logging by changing the enableRequestLogging value to false in the config map.
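    For example, because false is the default value, the reverted section could simply look like the following sketch:

      thanosQuerier:
        enableRequestLogging: false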

Additional resources

Setting audit log levels for the Prometheus Adapter

In default platform monitoring, you can configure the audit log level for the Prometheus Adapter.
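As a sketch, the config.yaml snippet that applies the Request profile (one of the supported values described in the procedure below) could look like the following:

    k8sPrometheusAdapter:
      audit:
        profile: Request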

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have created the cluster-monitoring-config ConfigMap object.

Procedure

You can set an audit log level for the Prometheus Adapter in the default openshift-monitoring project:

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add profile: in the k8sPrometheusAdapter/audit section under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        k8sPrometheusAdapter:
          audit:
            profile: <audit_log_level> (1)
    (1) The audit log level to apply to the Prometheus Adapter.
  3. Set the audit log level by using one of the following values for the profile: parameter:

    • None: Do not log events.

    • Metadata: Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. Metadata is the default audit log level.

    • Request: Log only the metadata and the request text but not the response text. This option does not apply for non-resource requests.

    • RequestResponse: Log event metadata, request text, and response text. This option does not apply for non-resource requests.

  4. Save the file to apply the changes. The pods for the Prometheus Adapter restart automatically when you apply the change.

    When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.

Verification

  1. In the config map, under k8sPrometheusAdapter/audit/profile, set the log level to Request and save the file.

  2. Confirm that the pods for the Prometheus Adapter are running. The following example lists the status of pods in the openshift-monitoring project:

    $ oc -n openshift-monitoring get pods
  3. Confirm that the audit log level and audit log file path are correctly configured:

    $ oc -n openshift-monitoring get deploy prometheus-adapter -o yaml

    Example output

    ...
    - --audit-policy-file=/etc/audit/request-profile.yaml
    - --audit-log-path=/var/log/adapter/audit.log
  4. Confirm that the correct log level has been applied in the prometheus-adapter deployment in the openshift-monitoring project:

    $ oc -n openshift-monitoring exec deploy/prometheus-adapter -c prometheus-adapter -- cat /etc/audit/request-profile.yaml

    Example output

    1. "apiVersion": "audit.k8s.io/v1"
    2. "kind": "Policy"
    3. "metadata":
    4. "name": "Request"
    5. "omitStages":
    6. - "RequestReceived"
    7. "rules":
    8. - "level": "Request"

    If you enter an unrecognized profile value for the Prometheus Adapter in the ConfigMap object, no changes are made to the Prometheus Adapter, and an error is logged by the Cluster Monitoring Operator.

  5. Review the audit log for the Prometheus Adapter:

    $ oc -n openshift-monitoring exec <prometheus_adapter_pod_name> -c prometheus-adapter -- cat /var/log/adapter/audit.log

Additional resources

Disabling the local Alertmanager

A local Alertmanager that routes alerts from Prometheus instances is enabled by default in the openshift-monitoring project of the OKD monitoring stack.

If you do not need the local Alertmanager, you can disable it by configuring the cluster-monitoring-config config map in the openshift-monitoring project.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have created the cluster-monitoring-config config map.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add enabled: false for the alertmanagerMain component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          enabled: false
  3. Save the file to apply the changes. The Alertmanager instance is disabled automatically when you apply the change.
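    As an optional check that is not part of the documented procedure, you can confirm that no Alertmanager pods remain in the openshift-monitoring project; this assumes the default alertmanager-main pod naming:

      $ oc -n openshift-monitoring get pods | grep alertmanager-main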

Additional resources

Next steps