Monitoring the Network Observability Operator

You can use the web console to monitor alerts related to the health of the Network Observability Operator.

Viewing health information

You can access metrics about health and resource usage of the Network Observability Operator from the Dashboards page in the web console. A health alert banner that directs you to the dashboard can appear on the Network Traffic and Home pages if an alert is triggered. Alerts are generated in the following cases:

  • The NetObservLokiError alert occurs if the flowlogs-pipeline workload is dropping flows because of Loki errors, such as if the Loki ingestion rate limit has been reached. A CLI sketch for checking the flowlogs-pipeline logs for these errors follows this list.

  • The NetObservNoFlows alert occurs if no flows are ingested for a certain amount of time.
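
As a CLI alternative to the web console banner, you can check whether the flowlogs-pipeline workload is reporting Loki errors directly from its pod logs. The following is a minimal sketch; it assumes the default netobserv namespace and uses a placeholder pod name, so adjust both to match your deployment.

List the Network Observability pods:

$ oc get pods -n netobserv

Then inspect a flowlogs-pipeline pod's logs for Loki ingestion errors, such as HTTP 429 rate-limit responses, replacing the placeholder with a real pod name:

$ oc logs -n netobserv <flowlogs-pipeline-pod-name> | grep -i loki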

Prerequisites

  • You have the Network Observability Operator installed.

  • You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects.

Procedure

  1. From the Administrator perspective in the web console, navigate to Observe → Dashboards.

  2. From the Dashboards dropdown, select Netobserv/Health. Metrics about the health of the Operator are displayed on the page.

Disabling health alerts

You can opt out of health alerting by editing the FlowCollector resource:

  1. In the web console, navigate to Operators → Installed Operators.

  2. Under the Provided APIs heading for the NetObserv Operator, select Flow Collector.

  3. Select cluster, and then select the YAML tab.

  4. Add spec.processor.metrics.disableAlerts to disable health alerts, as in the following YAML sample:

apiVersion: flows.netobserv.io/v1alpha1
kind: FlowCollector
metadata:
  name: cluster
spec:
  processor:
    metrics:
      disableAlerts: [NetObservLokiError, NetObservNoFlows] (1)

(1) You can specify one alert, or a list with both types of alerts, to disable.
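
As an alternative to editing the YAML in the web console, you can apply the same change from the CLI by patching the FlowCollector resource. The following is a minimal sketch, assuming the resource is named cluster as in the procedure above:

$ oc patch flowcollector cluster --type=merge \
    -p '{"spec":{"processor":{"metrics":{"disableAlerts":["NetObservLokiError","NetObservNoFlows"]}}}}'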

Creating Loki rate limit alerts for the NetObserv dashboard

You can create custom rules for the Netobserv dashboard metrics to trigger alerts when Loki rate limits have been reached.

An example of an alerting rule configuration YAML file is as follows:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: loki-alerts
  namespace: openshift-operators-redhat
spec:
  groups:
  - name: LokiRateLimitAlerts
    rules:
    - alert: LokiTenantRateLimit
      annotations:
        message: |-
          {{ $labels.job }} {{ $labels.route }} is experiencing 429 errors.
        summary: "At least one request was responded to with the rate limit error code."
      expr: sum(irate(loki_request_duration_seconds_count{status_code="429"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0
      for: 10s
      labels:
        severity: warning
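
The expr in this example calculates the percentage of Loki requests that returned HTTP status code 429 over the last minute, and the alert fires whenever that percentage is greater than zero, that is, as soon as any request is rate limited. After saving the rule to a file, you can create it with the oc CLI. The file name in the following sketch is only an example:

$ oc apply -f loki-alerts.yaml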

Additional resources