Alert groups insights and metrics
Grafana IRM provides detailed metrics and logs to help you monitor your alert group handling performance and analyze trends. These insights enable you to identify bottlenecks, measure response effectiveness, and continuously improve your alerting processes.
About alert groups metrics
Alert groups metrics in Grafana IRM track key performance indicators related to alert group handling, including:
- Alert groups volume across integrations
- Response times for alert group acknowledgment
- Notification patterns
- Team and user metrics
These metrics are exposed in Prometheus format, making them easy to query and visualize in Grafana dashboards.
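For example, a dashboard panel could chart currently firing alert groups per integration. This is a sketch assuming the `grafanacloud_oncall_instance_alert_groups_total` metric and the `integration` and `state` labels shown later on this page:

```promql
# Currently firing alert groups, grouped by integration
sum by (integration) (
  grafanacloud_oncall_instance_alert_groups_total{state="firing"}
)
```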
Available metrics
Grafana IRM provides core metrics covering alert group volume, response times, resolution times, and user notifications.
Access metrics
For Grafana Cloud customers
Alert groups metrics are automatically collected in the preinstalled `grafanacloud-usage` data source and share the `grafanacloud_oncall_instance` prefix, for example:

```
grafanacloud_oncall_instance_alert_groups_total
grafanacloud_oncall_instance_alert_groups_response_time_seconds_bucket
grafanacloud_oncall_instance_alert_groups_resolution_time_seconds_bucket
grafanacloud_oncall_instance_user_was_notified_of_alert_groups_total
```
Metric details and examples
Alert groups total
This metric tracks the count of alert groups in different states, with labels such as `integration` and `state`.
Example query:
Get the number of alert groups in "firing" state for the "Grafana Alerting" integration:

```promql
grafanacloud_oncall_instance_alert_groups_total{integration="Grafana Alerting", state="firing"}
```

Alert groups response time
This histogram metric tracks response times, with labels such as `integration` and the histogram bucket label `le`.
Example query:
Get the number of alert groups with response time less than 10 minutes (600 seconds):
```promql
grafanacloud_oncall_instance_alert_groups_response_time_seconds_bucket{integration="Grafana Alerting", le="600"}
```

Alert groups resolution time
This histogram metric tracks resolution times, with labels such as `integration` and the histogram bucket label `le`.
Example query:
Get the number of alert groups with resolution time less than 10 minutes (600 seconds):
```promql
grafanacloud_oncall_instance_alert_groups_resolution_time_seconds_bucket{integration="Grafana Alerting", le="600"}
```

User notification metrics
This metric tracks how many alert groups each user was notified about, with a `username` label.
Example query:
Get the number of alert groups a specific user was notified of:
```promql
grafanacloud_oncall_instance_user_was_notified_of_alert_groups_total{username="alex"}
```

Alert groups metrics dashboard
A pre-built “Alert Groups Insights” dashboard is available to visualize key alert metrics. To access it:
- Navigate to your dashboards list in the General folder
- Find the dashboard with the tag `irm`
- Select your Prometheus data source (for Cloud customers, use `grafanacloud-usage`)
- Filter data by Grafana instances, teams, and integrations
To re-import the dashboard:
- Go to Administration > Plugins
- Find IRM in the plugins list
- Open the Dashboards tab
- Click “Re-import” next to “Alert Groups Insights”
Note
Re-importing or updating the plugin will reset any customizations. To preserve changes, save a copy of the dashboard using “Save As” in dashboard settings.
You can also view insights directly in Grafana IRM by clicking Insights in the navigation menu.
Alert groups insight logs
Alert groups insight logs provide an audit trail of configuration changes and system events in your IRM environment. These logs are automatically configured in Grafana Cloud with the Usage Insights Loki data source.
Access insight logs
To retrieve all logs related to your IRM instance:
```logql
{instance_type="oncall"} | logfmt | __error__=``
```

Types of insight logs
IRM captures three primary types of insight logs:
Resource logs
Track changes to resources (integrations, escalation chains, schedules, etc.):
```logql
{instance_type="oncall"} | logfmt | __error__=`` | action_type = `resource`
```

Resource logs include key fields such as `author`, `resource_type`, and `resource_id`.
Maintenance logs
Track when maintenance mode is started or finished:
```logql
{instance_type="oncall"} | logfmt | __error__=`` | action_type = `maintenance`
```

Maintenance logs include fields such as `resource_id`, identifying the integration under maintenance.
ChatOps logs
Track configuration changes to chat integrations:
```logql
{instance_type="oncall"} | logfmt | __error__=`` | action_type = `chat_ops`
```

ChatOps logs include fields such as `chat_ops_type`, identifying the chat integration (for example, `slack`).
Example log queries
Here are some practical log queries to analyze your alert handling configuration:
Actions by specific user:
```logql
{instance_type="oncall"} | logfmt | __error__=`` | action_type = `resource` and author="username"
```

Changes to schedules:
```logql
{instance_type="oncall"} | logfmt | __error__=`` | action_type = `resource` and (resource_type=`web_schedule` or resource_type=`calendar_schedule` or resource_type=`ical_schedule`)
```

Changes to escalation policies:
```logql
{instance_type="oncall"} | logfmt | __error__=`` | action_type = `resource` and resource_type=`escalation_policy` and escalation_chain_id=`CHAIN_ID`
```

Maintenance events for an integration:
```logql
{instance_type="oncall"} | logfmt | __error__=`` | action_type = `maintenance` and resource_id=`INTEGRATION_ID`
```

Slack ChatOps configuration changes:
```logql
{instance_type="oncall"} | logfmt | __error__=`` | action_type = `chat_ops` and chat_ops_type=`slack`
```
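LogQL's metric queries can also aggregate insight logs over time. As a sketch (assuming the same stream selector and labels used in the queries above), this counts resource changes per author over the past seven days:

```logql
sum by (author) (
  count_over_time(
    {instance_type="oncall"} | logfmt | __error__=`` | action_type = `resource` [7d]
  )
)
```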


