The Alertmanager Config Secret contains the configuration of an Alertmanager instance that sends out notifications based on alerts it receives from Prometheus.

Overview

By default, Rancher Monitoring deploys a single Alertmanager onto a cluster that uses a default Alertmanager Config Secret. Through the chart's deployment options, you can increase the number of Alertmanager replicas deployed onto your cluster, all of which are managed using the same underlying Alertmanager Config Secret.

This Secret should be updated or modified any time you want to:

  • Add new notifiers or receivers
  • Change the alerts that should be sent to specific notifiers or receivers
  • Change the group of alerts that are sent out

When installing the chart, you can either supply an existing Alertmanager Config Secret (i.e. any Secret in the cattle-monitoring-system namespace) or allow Rancher Monitoring to deploy a default Alertmanager Config Secret onto your cluster. The Alertmanager Config Secret created by Rancher is never modified or deleted on an upgrade or uninstall of the rancher-monitoring chart, which prevents users from losing or overwriting their alerting configuration when performing operations on the chart.

For more information on what fields can be specified in this secret, please look at the Prometheus Alertmanager docs.

The full spec for the Alertmanager configuration file and the fields it accepts can be found here.

For more information, refer to the official Prometheus documentation about configuring routes.
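As a point of reference, the Secret holds the full Alertmanager configuration under the alertmanager.yaml key. A minimal sketch is shown below; the Secret name used here is the default naming convention, so yours may differ if you supplied your own Secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Default name used by Rancher Monitoring; yours may differ
  name: alertmanager-rancher-monitoring-alertmanager
  namespace: cattle-monitoring-system
stringData:
  # The entire Alertmanager configuration lives under this key
  alertmanager.yaml: |
    route:
      receiver: 'null'
    receivers:
    - name: 'null'
```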

Connecting Routes and PrometheusRules

When you define a Rule (which is declared within a RuleGroup in a PrometheusRule resource), the spec of the Rule itself contains labels that are used by Prometheus to figure out which Route should receive this Alert. For example, an Alert with the label team: front-end will be sent to all Routes that match on that label.
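As a sketch of that pairing (all names, label values, and the query below are illustrative, not part of the default setup):

```yaml
# A Rule inside a PrometheusRule resource that attaches the
# team: front-end label to any alerts it fires
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: front-end-rules            # illustrative name
  namespace: cattle-monitoring-system
spec:
  groups:
  - name: front-end.rules
    rules:
    - alert: FrontEndDown
      expr: up{job="front-end"} == 0   # illustrative query
      labels:
        team: front-end
```

The corresponding Route, defined in the Alertmanager Config Secret, would then match on that label:

```yaml
route:
  routes:
  - receiver: front-end-receiver   # illustrative receiver name
    match:
      team: front-end
```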

Creating Receivers in the Rancher UI

Available as of v2.5.4

Prerequisites:

  • The monitoring application needs to be installed.
  • If you configured monitoring with an existing Alertmanager Secret, it must be in a format supported by Rancher's UI. Otherwise, you will only be able to make changes by modifying the Alertmanager Secret directly. Note: We are continuing to expand the kinds of Alertmanager configurations that the Routes and Receivers UI supports, so please file an issue if you have a request for a feature enhancement.

To create notification receivers in the Rancher UI:

  1. Click Cluster Explorer > Monitoring and click Receiver.
  2. Enter a name for the receiver.
  3. Configure one or more providers for the receiver. For help filling out the forms, refer to the configuration options below.
  4. Click Create.

Result: Alerts can be configured to send notifications to the receiver(s).

Receiver Configuration

Notification integrations are configured with receivers, which are explained in the Prometheus documentation.

Rancher v2.5.4 introduced the capability to configure receivers by filling out forms in the Rancher UI.

The following types of receivers can be configured in the Rancher UI:


The custom receiver option can be used to configure any receiver in YAML that cannot be configured by filling out the other forms in the Rancher UI.

Slack

| Field | Type | Description |
|---|---|---|
| URL | String | Enter your Slack webhook URL. For instructions to create a Slack webhook, see the Slack documentation. |
| Default Channel | String | Enter the name of the channel that you want to send alert notifications to, in the following format: `#<channelname>`. |
| Proxy URL | String | Proxy for the webhook notifications. |
| Enable Send Resolved Alerts | Bool | Whether to send a follow-up notification if an alert has been resolved (e.g. `[Resolved] High CPU Usage`). |
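In the underlying Alertmanager Config Secret, these form fields map onto a slack_configs entry roughly as follows (the webhook URL, channel, and proxy shown are placeholders):

```yaml
receivers:
- name: slack-receiver
  slack_configs:
  - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ  # URL (placeholder)
    channel: '#alerts'          # Default Channel
    send_resolved: true         # Enable Send Resolved Alerts
    http_config:
      proxy_url: http://proxy.example.com:8080  # Proxy URL (optional)
```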

Email

| Field | Type | Description |
|---|---|---|
| Default Recipient Address | String | The email address that will receive notifications. |
| Enable Send Resolved Alerts | Bool | Whether to send a follow-up notification if an alert has been resolved (e.g. `[Resolved] High CPU Usage`). |

SMTP options:

| Field | Type | Description |
|---|---|---|
| Sender | String | Enter an email address available on your SMTP mail server that you want to send the notification from. |
| Host | String | Enter the IP address or hostname for your SMTP server. Example: `smtp.email.com`. |
| Use TLS | Bool | Use TLS for encryption. |
| Username | String | Enter a username to authenticate with the SMTP server. |
| Password | String | Enter a password to authenticate with the SMTP server. |
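Together, the recipient and SMTP fields map onto an email_configs entry roughly as follows (all addresses and hosts below are placeholders):

```yaml
receivers:
- name: email-receiver
  email_configs:
  - to: oncall@example.com           # Default Recipient Address (placeholder)
    send_resolved: true              # Enable Send Resolved Alerts
    from: alerts@example.com         # Sender (placeholder)
    smarthost: smtp.example.com:587  # Host, in host:port form
    require_tls: true                # Use TLS
    auth_username: alerts@example.com  # Username
    auth_password: <smtp-password>     # Password
```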

PagerDuty

| Field | Type | Description |
|---|---|---|
| Integration Type | String | Events API v2 or Prometheus. |
| Default Integration Key | String | For instructions to get an integration key, see the PagerDuty documentation. |
| Proxy URL | String | Proxy for the PagerDuty notifications. |
| Enable Send Resolved Alerts | Bool | Whether to send a follow-up notification if an alert has been resolved (e.g. `[Resolved] High CPU Usage`). |
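In the Alertmanager configuration, the integration type determines which key field is used in a pagerduty_configs entry. A sketch with placeholder values:

```yaml
receivers:
- name: pagerduty-receiver
  pagerduty_configs:
  - routing_key: <integration-key>    # for the Events API v2 integration type
    # service_key: <integration-key>  # use this field instead for the
    #                                 # Prometheus integration type
    send_resolved: true               # Enable Send Resolved Alerts
    http_config:
      proxy_url: http://proxy.example.com:8080  # Proxy URL (optional)
```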

Opsgenie

| Field | Description |
|---|---|
| API Key | For instructions to get an API key, refer to the Opsgenie documentation. |
| Proxy URL | Proxy for the Opsgenie notifications. |
| Enable Send Resolved Alerts | Whether to send a follow-up notification if an alert has been resolved (e.g. `[Resolved] High CPU Usage`). |

Opsgenie Responders:

| Field | Type | Description |
|---|---|---|
| Type | String | Schedule, Team, User, or Escalation. For more information on alert responders, refer to the Opsgenie documentation. |
| Send To | String | Id, Name, or Username of the Opsgenie recipient. |
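These fields map onto the responders list of an opsgenie_configs entry. A sketch with placeholder values:

```yaml
receivers:
- name: opsgenie-receiver
  opsgenie_configs:
  - api_key: <opsgenie-api-key>  # API Key (placeholder)
    send_resolved: true          # Enable Send Resolved Alerts
    responders:
    - type: team                 # Type: schedule, team, user, or escalation
      name: sre-team             # Send To: id, name, or username (placeholder)
```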

Webhook

| Field | Description |
|---|---|
| URL | Webhook URL for the app of your choice. |
| Proxy URL | Proxy for the webhook notification. |
| Enable Send Resolved Alerts | Whether to send a follow-up notification if an alert has been resolved (e.g. `[Resolved] High CPU Usage`). |
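These fields map onto a webhook_configs entry roughly as follows (the URL and proxy are placeholders):

```yaml
receivers:
- name: webhook-receiver
  webhook_configs:
  - url: https://example.com/alert-hook  # URL (placeholder)
    send_resolved: true                  # Enable Send Resolved Alerts
    http_config:
      proxy_url: http://proxy.example.com:8080  # Proxy URL (optional)
```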

Custom

The YAML provided here will be directly appended to your receiver within the Alertmanager Config Secret.


The Alertmanager must be configured in YAML, as shown in this example.

Route Configuration

Receiver

The route needs to refer to a receiver that has already been configured.

Grouping

| Field | Default | Description |
|---|---|---|
| Group By | N/A | The labels by which incoming alerts are grouped together, e.g. `group_by: [cluster, alertname]`. Multiple alerts coming in for labels such as `cluster=A` and `alertname=LatencyHigh` can be batched into a single group. To aggregate by all possible labels, use the special value `'...'` as the sole label name, for example: `group_by: ['...']`. Grouping by `'...'` effectively disables aggregation entirely, passing through all alerts as-is. This is unlikely to be what you want, unless you have a very low alert volume or your upstream notification system performs its own grouping. |
| Group Wait | 30s | How long to wait to buffer alerts of the same group before sending the initial notification. |
| Group Interval | 5m | How long to wait before sending an alert that has been added to a group of alerts for which an initial notification has already been sent. |
| Repeat Interval | 4h | How long to wait before re-sending a given alert that has already been sent. |
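Taken together, these grouping settings appear on a route like this (the receiver name and group-by labels are illustrative):

```yaml
route:
  receiver: slack-receiver        # illustrative receiver name
  group_by: ['cluster', 'alertname']
  group_wait: 30s                 # buffer before the initial notification
  group_interval: 5m              # wait before notifying about new alerts in the group
  repeat_interval: 4h             # wait before re-sending an already-sent alert
```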

Matching

The Match field refers to a set of equality matchers used to identify which alerts to send to a given Route based on labels defined on that alert. When you add key-value pairs to the Rancher UI, they correspond to the YAML in this format:

```yaml
match:
  [ <labelname>: <labelvalue>, ... ]
```

The Match Regex field refers to a set of regex matchers used to identify which alerts to send to a given Route based on labels defined on that alert. When you add key-value pairs in the Rancher UI, they correspond to the YAML in this format:

```yaml
match_re:
  [ <labelname>: <regex>, ... ]
```
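Combined, a child route using both kinds of matchers might look like this (the receiver name, labels, and regex are illustrative):

```yaml
routes:
- receiver: front-end-receiver    # illustrative receiver name
  match:
    team: front-end               # exact match on the team label
  match_re:
    severity: warning|critical    # regex match on the severity label
```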


Example Alertmanager Config

To set up notifications via Slack, the following Alertmanager Config YAML can be placed into the alertmanager.yaml key of the Alertmanager Config Secret, where the api_url should be updated to use your Webhook URL from Slack:

```yaml
route:
  group_by: ['job']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: 'slack-notifications'
receivers:
- name: 'slack-notifications'
  slack_configs:
  - send_resolved: true
    text: '{{ template "slack.rancher.text" . }}'
    api_url: <user-provided slack webhook url here>
templates:
- /etc/alertmanager/config/*.tmpl
```

Example Route Config for CIS Scan Alerts

While configuring the routes for rancher-cis-benchmark alerts, you can specify the matching using the key-value pair job: rancher-cis-scan.

For example, the following example route configuration could be used with a Slack receiver named test-cis:

```yaml
spec:
  receiver: test-cis
  group_by:
#    - string
  group_wait: 30s
  group_interval: 30s
  repeat_interval: 30s
  match:
    job: rancher-cis-scan
#    key: string
  match_re: {}
#    key: string
```

For more information on enabling alerting for rancher-cis-benchmark, see this section.