# Synthetic Monitoring alerting
Synthetic Monitoring integrates with Grafana Cloud alerting via Alertmanager to provide alerts. The synthetic monitoring plugin provides some default alerting rules. These rules evaluate metrics published by the probes into your cloud Prometheus instance. Firing alert rules can be routed to notification receivers configured in Grafana Cloud alerting.
The default alerting rules that we provide are:
- HighSensitivity: If 5% of probes fail for 5 minutes, fire an alert (via the routing that you have set up)
- MedSensitivity: If 10% of probes fail for 5 minutes, fire an alert (via the routing that you have set up)
- LowSensitivity: If 25% of probes fail for 5 minutes, fire an alert (via the routing that you have set up)
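Under the hood, each default rule is a Prometheus-style alert rule that evaluates probe success over a 5-minute window. A sketch of what the HighSensitivity rule might look like (the rule group layout, alert name, and recording-rule metric shown here are illustrative, not necessarily the exact rules the plugin generates):

```yaml
# Illustrative sketch only; the generated rules in your instance may differ.
groups:
  - name: default
    rules:
      - alert: SyntheticMonitoringCheckFailureAtHighSensitivity
        # Success percentage for checks labeled alert_sensitivity="high"
        # drops below 95% (i.e. more than 5% of probes failing).
        expr: instance_job_severity:probe_success:mean5m{alert_sensitivity="high"} < 95
        for: 5m
```

The MedSensitivity and LowSensitivity rules follow the same shape with thresholds of 90 and 75 respectively.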
## How to create an alert for a Synthetic Monitoring check
Alerts can be created as part of creating or editing a check. You must be logged in to a Grafana Cloud instance in order to create or edit alerts. Alerting in Synthetic Monitoring happens in two phases: configuring a check to publish an alert sensitivity metric label value, and configuring alert rules.
To configure a check to publish the alert sensitivity metric label value:

1. Navigate to Observability > Synthetics > Checks.
2. Click New Check to create a new check, or edit an existing check in the list.
3. Click the Alerting section to show the alerting fields.
4. Select a sensitivity level to associate with the check and click Save.
This sensitivity value is published to the `alert_sensitivity` label on the `sm_check_info` metric each time the check runs on a probe. The default alerts use that label value to scope which checks they fire for.
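As a quick sanity check, you can query your cloud Prometheus instance (for example, in Explore) to see which checks carry a given sensitivity label:

```promql
# Returns one sm_check_info series per check assigned the "high" sensitivity level
sm_check_info{alert_sensitivity="high"}
```

If a check you just configured does not appear, wait for the check to run on a probe at least once, since the label is published with each check execution.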
To configure alert rules:
- Navigate to Observability > Synthetics > Alerts.
- If you have no default rules set up already for Synthetic Monitoring, click the Populate default alerts button.
- Some default rules will be generated for you. These rules represent sensitivity “buckets” based on probe success percentage. Checks that have been marked with a sensitivity level and whose success percentage drops below the threshold will cause the rule to fire. Checks that have a sensitivity level of “none” will not cause any of the default rules to fire.
## How to edit an alert for a check
Alerts can be edited in Synthetic Monitoring on the alerts page, or in the Cloud Alerting UI.
Note: Substantially editing an alert rule in the Cloud Alerting UI can make it no longer editable in the Synthetic Monitoring UI. In that case, the alert rule will only be editable from Grafana Cloud alerting. For example, if you edit the value “0.9” to be “0.75”, the change propagates back to the Synthetic Monitoring alerts tab, and the alert fires according to your edit. However, if you edit the value “0.9” to be “steve”, the alert becomes invalid and is no longer editable from the Synthetic Monitoring alerts tab.
## How to set up routing for default alerts
Default alerts contain only the alert rules. Without routing, these alerts are not sent to any notification receiver, so they won't notify anyone when they fire. You must set up routing in Alertmanager within Grafana Cloud alerting.
Feel free to write your own routing configuration in the text box editor.
You can set up routing to destinations such as email addresses, Slack, PagerDuty, OpsGenie, and so on.
Step-by-step instructions can be found in this blog post.
To route the default synthetic monitoring alerts to a notification receiver, set up the conditions to match on the `namespace` and `alert_sensitivity` labels:
```yaml
route:
  receiver: <your notification receiver>
  match:
    namespace: synthetic_monitoring
    alert_sensitivity: high
```
If you do not already have an SMTP server available for sending email alerts, see Grafana Alerting for information about how to use one supplied by Grafana Labs.
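To show how the route above fits into a fuller configuration, here is a minimal sketch that pairs it with an email receiver. The receiver names and address are placeholders, and this uses the classic Alertmanager `match` syntax; adapt it to your own receivers:

```yaml
# Sketch only: receiver names and the email address are placeholders.
route:
  receiver: default
  routes:
    - receiver: email-oncall            # hypothetical receiver name
      match:
        namespace: synthetic_monitoring
        alert_sensitivity: high
receivers:
  - name: default
  - name: email-oncall
    email_configs:
      - to: oncall@example.com          # placeholder address
```

With this configuration, alerts from checks marked with high sensitivity are emailed to the `email-oncall` receiver, while everything else falls through to the `default` receiver.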
## Where to access Synthetic Monitoring alerts from Grafana Cloud alerting
Alert rules can be found in the `synthetic_monitoring` namespace of Grafana Cloud alerting. Default rules are created inside the `default` rule group.
## Recommendation to avoid alert-flapping
When enabling alerting for a check, we recommend running that check from multiple locations, preferably three or more. That way, if there’s a problem either with a single probe or with the network connectivity from that single location, you won’t be needlessly alerted, as the other locations running the same check will continue to report their results alongside the problematic location.
## Grafana Alerting
See Grafana Alerting docs for details.
## Next steps
Check out the Top 5 user-requested synthetic monitoring alerts in Grafana Cloud and Best practices for alerting on Synthetic Monitoring metrics in Grafana Cloud blog posts to learn more.