Get started with Grafana Alerting - Part 3
The Get started with Grafana Alerting tutorial Part 3 is a continuation of Get started with Grafana Alerting tutorial Part 2.
Alert grouping in Grafana Alerting reduces notification noise by combining related alerts into a single, concise notification. This is essential for on-call engineers, ensuring they focus on resolving incidents instead of sorting through a flood of notifications.
Grouping is configured by using labels in the notification policy that reference the labels that are generated by the alert instances. With notification policies, you can also configure how often notifications are sent for each group of alerts.
In this tutorial, you will:
- Learn how alert rule grouping works.
- Create a notification policy to handle grouping.
- Define an alert rule for a real-world scenario.
- Receive and review grouped alert notifications.
Before you begin
There are different ways you can follow along with this tutorial.
Grafana Cloud
- As a Grafana Cloud user, you don’t have to install anything. Create your free account.
Continue to How alert rule grouping works.
Interactive learning environment
- Alternatively, you can try out this example in our interactive learning environment: Get started with Grafana Alerting - Part 3. It’s a fully configured environment with all the dependencies already installed.
Grafana OSS
If you opt to run a Grafana stack locally, ensure you have the following applications installed:
Docker Compose (included in Docker Desktop for macOS and Windows)
Set up the Grafana stack (OSS users)
To demonstrate how to observe data using the Grafana stack, download and run the following files.
Clone the tutorial environment repository.
git clone https://github.com/grafana/tutorial-environment.git
Change to the directory where you cloned the repository:
cd tutorial-environment
Run the Grafana stack:
docker compose up -d
The first time you run `docker compose up -d`, Docker downloads all the necessary resources for the tutorial. This might take a few minutes, depending on your internet connection.
Note
If you already have Grafana, Loki, or Prometheus running on your system, you might see errors because the Docker image is trying to use ports that your local installations are already using. If this is the case, stop the services, then run the command again.
How alert rule grouping works
Alert notification grouping is configured with labels and timing options:
- Labels map the alert rule with the notification policy and define the grouping.
- Timing options control when and how often notifications are sent.
Types of labels
Reserved labels (default):
- Automatically generated by Grafana, e.g., `alertname`, `grafana_folder`.
- Example: `alertname="High CPU usage"`.
User-configured labels:
- Added manually to the alert rule.
- Example: `severity`, `priority`.
Query labels:
- Returned by the data source query.
- Example: `region`, `service`, `environment`.
Timing options
- Group wait: Time before sending the first notification.
- Group interval: Time between notifications for a group.
- Repeat interval: Time before resending notifications for an unchanged group.
Alerts sharing the same label values are grouped together, and timing options determine notification frequency.
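The grouping rule above can be sketched in a few lines of Python. This is a toy model, not Grafana's implementation, and the alert label sets below are made up for illustration:

```python
from collections import defaultdict

# Hypothetical alert instances, each represented by its label set.
alerts = [
    {"alertname": "High CPU usage", "region": "us-west", "instance": "server-02"},
    {"alertname": "High CPU usage", "region": "us-east", "instance": "server-03"},
    {"alertname": "High CPU usage", "region": "us-east", "instance": "server-08"},
]

def group_alerts(alerts, group_by):
    """Group alert instances that share the same values for the group_by labels."""
    groups = defaultdict(list)
    for alert in alerts:
        key = tuple(alert.get(label) for label in group_by)
        groups[key].append(alert)
    return groups

# Grouping by "region" yields one group per distinct region value,
# so each group becomes a single notification.
groups = group_alerts(alerts, group_by=["region"])
for key, members in groups.items():
    print(key, [a["instance"] for a in members])
```

Each resulting group maps to one notification, which is why choosing the right grouping labels directly controls how much noise on-call engineers see.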
A real-world example of alert grouping in action
Scenario: monitoring a distributed application
You’re monitoring metrics like CPU usage, memory utilization, and network latency across multiple regions. Alert rules include labels such as `region: us-west` and `region: us-east`. If multiple alerts trigger across these regions, they can result in notification floods.
How to manage grouping
To group alert rule notifications:
- Define labels: Use `region`, `metric`, or `instance` labels to categorize alerts.
- Configure notification policies: Group alerts by the `region` label. For example:
  - Alerts for `region: us-west` go to the West Coast team.
  - Alerts for `region: us-east` go to the East Coast team.
Setting up alert rule grouping
Notification Policy
Notification policies group alert instances and route notifications to specific contact points.
To follow the above example, we will create notification policies that route alert instances based on the `region` label to specific contact points. This setup ensures that alerts for a given region are consolidated into a single notification. Additionally, we will fine-tune the timing settings for each region by overriding the default parent policy, allowing more granular control over when notifications are sent.
Sign in to Grafana:
- Grafana Cloud users: Log in via Grafana Cloud.
- OSS users: Go to http://localhost:3000.
Navigate to Notification Policies:
- Go to Alerts & IRM > Alerting > Notification Policies.
Add a child policy:
In the Default policy, click + New child policy.
- Label: `region`
- Operator: `=`
- Value: `us-west`
This matcher matches alert instances where the `region` label is `us-west`.
Choose a Contact point:
- Select Webhook.
If you don’t have any contact points, add a Contact point.
Enable Continue matching:
- Turn on Continue matching subsequent sibling nodes so that evaluation continues even after one or more matchers (in this case, the `region` label) match.
Override grouping settings:
Toggle Override grouping.
Group by: `region`.
Group by consolidates alerts that share the same grouping label into a single notification. For example, all alerts with `region=us-west` are combined into one notification, making them easier to manage and reducing alert fatigue.
Set custom timing:
Toggle Override general timings.
Group interval: `2m`.
This ensures follow-up notifications for the same alert group are sent at intervals of 2 minutes. While the default is 5 minutes, we chose 2 minutes here to provide faster feedback for demonstration purposes. Timing options control how often notifications are sent and can help balance timely alerting with minimizing noise.
Save and repeat:
- Repeat the steps above for `region = us-east` with a different webhook or a different contact point.
These nested policies route alert instances where the `region` label is either `us-west` or `us-east`.
Note
In Grafana, each label matcher within a notification policy must have a unique key. If you attempt to add the same label key (e.g., `region`) with different values (`us-west` and `us-east`), only the last entry is saved and the previous one is discarded. This is because labels are stored as associative arrays (maps), where each key must be unique. To match several values for the same label key, use a regex matcher (e.g., `region=~"us-west|us-east"`).
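If you manage Grafana as code, the same policy tree can also be provisioned from a YAML file. The sketch below assumes contact points named `webhook-us-west` and `webhook-us-east` (names chosen for this tutorial) and follows Grafana's alerting file-provisioning format; check the provisioning documentation for your Grafana version before relying on it:

```yaml
# Sketch of Grafana Alerting file provisioning for this policy tree.
# Receiver names are assumptions; adjust them to your contact points.
apiVersion: 1
policies:
  - orgId: 1
    receiver: grafana-default-email # parent (default) policy contact point
    routes:
      - receiver: webhook-us-west
        object_matchers:
          - ['region', '=', 'us-west']
        group_by: ['region']
        group_interval: 2m
        continue: true # keep evaluating sibling routes after a match
      - receiver: webhook-us-east
        object_matchers:
          - ['region', '=', 'us-east']
        group_by: ['region']
        group_interval: 2m
```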
Create an alert rule
In this section we configure an alert rule based on our application monitoring example.
- Navigate to Alerting > Alert rules.
- Click New alert rule.
Enter an alert rule name.
Make it short and descriptive, as this appears in your alert notification. For instance: `High CPU usage - Multi-region`.
Define query and alert condition
In this section, we use the default options for Grafana-managed alert rule creation. The default options let us define the query, an expression (used to manipulate the data; the `WHEN` field in the UI), and the condition that must be met for the alert to trigger (in the default mode, a threshold).
Grafana includes a test data source that creates simulated time series data. This data source is included in the demo environment for this tutorial. If you’re working in Grafana Cloud or your own local Grafana instance, you can add the data source through the Connections menu.
From the drop-down menu, select TestData data source.
From Scenario select CSV Content.
Copy in the following CSV data:
region,cpu-usage,service,instance
us-west,35,web-server-1,server-01
us-west,81,web-server-1,server-02
us-east,79,web-server-2,server-03
us-east,52,web-server-2,server-04
us-west,45,db-server-1,server-05
us-east,77,db-server-2,server-06
us-west,82,db-server-1,server-07
us-east,93,db-server-2,server-08
The returned data simulates a data source returning multiple time series, each leading to the creation of an alert instance for that specific time series.
In the Alert condition section:
- Keep `Last` as the value for the reducer function (`WHEN`), and `75` as the threshold value. This is the value above which the alert rule should trigger.
Click Preview alert rule condition to run the queries.
It should return 5 series in Firing state: two firing instances from the us-west region and three from the us-east region.
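You can sanity-check that count outside Grafana by applying the same threshold to the CSV rows. This is a quick Python sketch; the `75` threshold mirrors the alert condition above:

```python
import csv
import io

# The CSV content used in the TestData scenario above.
CSV = """region,cpu-usage,service,instance
us-west,35,web-server-1,server-01
us-west,81,web-server-1,server-02
us-east,79,web-server-2,server-03
us-east,52,web-server-2,server-04
us-west,45,db-server-1,server-05
us-east,77,db-server-2,server-06
us-west,82,db-server-1,server-07
us-east,93,db-server-2,server-08
"""

THRESHOLD = 75  # alert condition: fire when cpu-usage is above 75

# Each row is one time series; keep the ones whose value exceeds the threshold.
firing = [row for row in csv.DictReader(io.StringIO(CSV))
          if float(row["cpu-usage"]) > THRESHOLD]

by_region = {}
for row in firing:
    by_region.setdefault(row["region"], []).append(row["instance"])

print(len(firing))        # prints 5
print(sorted(by_region))  # prints ['us-east', 'us-west']
```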
Set evaluation behavior
Every alert rule is assigned to an evaluation group. You can assign the alert rule to an existing evaluation group or create a new one.
In Folder, click + New folder and enter a name. For example: `Multi-region CPU alerts`. This folder contains our alert rules.
In Evaluation group, repeat the above step to create a new evaluation group. Name it `Multi-region CPU group`.
Choose an Evaluation interval (how often the alert rule is evaluated). Choose `1m`.
The evaluation interval of 1 minute allows Grafana to detect changes quickly, while the longer Group wait (inherited from the Default notification policy) and the Group interval (set in our notification policy) allow for efficient grouping of alerts and minimize unnecessary notifications.
Set the pending period to `0s` (zero seconds), so the alert rule fires the moment the condition is met (this minimizes the waiting time for the demonstration).
Configure labels and notifications
Choose the notification policy where you want to receive your alert notifications.
Select Use notification policy.
Click Preview routing to ensure correct matching.
The preview shows that the region label from our data source is successfully matching the notification policies that we created earlier thanks to the label matcher that we configured.
Click Save rule and exit.
Receiving grouped alert notifications
Now that the alert rule has been configured, you should receive alert notifications in the contact point whenever alerts trigger.
When the configured alert rule detects CPU usage higher than 75% across multiple regions, it evaluates the metric every minute. If the condition persists, notifications are grouped together, with a Group wait of 30 seconds before the first notification is sent. Follow-up notifications are sent every 2 minutes for quick updates in this demonstration; to reduce alert frequency, consider using the default or increasing the interval. If the condition continues for an extended period, a Repeat interval of 4 hours ensures that the alert is only resent if the issue persists.
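The timing described above can be made concrete with a small sketch. This models the approximate semantics only; actual dispatch depends on evaluation and delivery timing:

```python
from datetime import timedelta

# Timing options used in this tutorial.
group_wait = timedelta(seconds=30)     # wait before the first notification for a new group
group_interval = timedelta(minutes=2)  # wait between updates for a changed group
repeat_interval = timedelta(hours=4)   # wait before resending an unchanged group

# Relative to the moment the first alert in a group fires:
first_notification = group_wait                             # 0:00:30
update_if_changed = first_notification + group_interval     # 0:02:30
resend_if_unchanged = first_notification + repeat_interval  # 4:00:30

print(first_notification, update_if_changed, resend_if_unchanged)
```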
As a result, our notification policy routes two notifications: one grouping the three alert instances from the `us-east` region and another grouping the two alert instances from the `us-west` region.
Grouped notifications example:
Webhook - US East
{
"receiver": "webhook-us-east",
"status": "firing",
"alerts": [{ "instance": "server-03" }, { "instance": "server-06" }, { "instance": "server-08" }]
}
Webhook - US West
{
"receiver": "webhook-us-west",
"status": "firing",
"alerts": [{ "instance": "server-02" }, { "instance": "server-07" }]
}
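The real webhook payload carries more fields (labels, annotations, `startsAt`, and so on), but even the trimmed examples above parse cleanly. A minimal receiver-side check might look like this:

```python
import json

# Trimmed payload matching the "Webhook - US East" example above.
payload = json.loads("""
{
  "receiver": "webhook-us-east",
  "status": "firing",
  "alerts": [{"instance": "server-03"}, {"instance": "server-06"}, {"instance": "server-08"}]
}
""")

# One grouped notification arrives with all firing instances for the region.
instances = [alert["instance"] for alert in payload["alerts"]]
print(payload["receiver"], payload["status"], instances)
# prints: webhook-us-east firing ['server-03', 'server-06', 'server-08']
```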
Conclusion
Alert rule grouping simplifies incident management by consolidating related alerts. By configuring notification policies and using labels (such as region), you can group alerts based on specific criteria and route them to the appropriate teams. Fine-tuning timing options—including group wait, group interval, and repeat interval—further reduces noise and ensures notifications remain actionable without overwhelming on-call engineers.