If alerts are not behaving as you expect, here are some steps you can take to troubleshoot and figure out what is going wrong.
The first level of troubleshooting is to click Test Rule. You get a result back that you can expand until you can see the raw data that was returned from your query.
Further troubleshooting can also be done by inspecting the grafana-server log. If there is no error, or if the log for some reason does not say anything, you can enable debug logging for the relevant components. This is done in Grafana's ini config file.
Example showing loggers that could be relevant when troubleshooting alerting.
[log]
filters = alerting.scheduler:debug \
          alerting.engine:debug \
          alerting.resultHandler:debug \
          alerting.evalHandler:debug \
          alerting.evalContext:debug \
          alerting.extractor:debug \
          alerting.notifier:debug \
          alerting.notifier.slack:debug \
          alerting.notifier.pagerduty:debug \
          alerting.notifier.email:debug \
          alerting.notifier.webhook:debug \
          tsdb.graphite:debug \
          tsdb.prometheus:debug \
          tsdb.opentsdb:debug \
          tsdb.influxdb:debug \
          tsdb.elasticsearch:debug \
          tsdb.elasticsearch.client:debug
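Once debug logging is enabled, you can filter the server log for the alerting loggers to confirm they are emitting output. The sketch below is illustrative: the sample log lines and temp file path are assumptions made so the example is self-contained, and the default log location of /var/log/grafana/grafana.log may differ on your install. Grafana's logfmt-style lines include a logger= field you can filter on.

```shell
# Write a few illustrative grafana-server log lines (logfmt style) to a temp
# file so the filtering step below is self-contained. On a real install,
# point grep at your actual log file instead.
cat > /tmp/grafana-sample.log <<'EOF'
t=2024-01-01T00:00:00+0000 lvl=dbug msg="Job Execution completed" logger=alerting.engine
t=2024-01-01T00:00:01+0000 lvl=info msg="Request Completed" logger=context
t=2024-01-01T00:00:02+0000 lvl=dbug msg="Sending notification" logger=alerting.notifier.slack
EOF

# Keep only the alerting debug lines
grep 'logger=alerting' /tmp/grafana-sample.log
```

On a live server, the same filter works as a stream, e.g. `tail -f /var/log/grafana/grafana.log | grep 'logger=alerting'`.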
If you want to log the raw query sent to your TSDB and the raw response, you also have to set a grafana.ini option.