<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Alerting on Grafana Labs</title><link>https://grafana.com/docs/grafana/v10.2/alerting/</link><description>Recent content in Alerting on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/grafana/v10.2/alerting/index.xml" rel="self" type="application/rss+xml"/><item><title>Introduction to Alerting</title><link>https://grafana.com/docs/grafana/v10.2/alerting/fundamentals/</link><pubDate>Sun, 15 Mar 2026 12:15:09 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/fundamentals/</guid><content><![CDATA[&lt;h1 id=&#34;introduction-to-alerting&#34;&gt;Introduction to Alerting&lt;/h1&gt;
&lt;p&gt;Whether you’re just starting out or you&amp;rsquo;re a more experienced user of Grafana Alerting, learn more about the fundamentals and available features that help you create, manage, and respond to alerts; and improve your team’s ability to resolve issues quickly.&lt;/p&gt;
&lt;h2 id=&#34;principles&#34;&gt;Principles&lt;/h2&gt;
&lt;p&gt;In Prometheus-based alerting systems, you have an alert generator that creates alerts and an alert receiver that receives alerts. For example, Prometheus is an alert generator and is responsible for evaluating alert rules, while Alertmanager is an alert receiver and is responsible for grouping, inhibiting, silencing, and sending notifications about firing and resolved alerts.&lt;/p&gt;
&lt;p&gt;Grafana Alerting is built on the Prometheus model of designing alerting systems. It has an internal alert generator responsible for scheduling and evaluating alert rules, as well as an internal alert receiver responsible for grouping, inhibiting, silencing, and sending notifications. Grafana doesn’t use Prometheus as its alert generator because Grafana Alerting needs to work with many other data sources in addition to Prometheus. However, it does use Alertmanager as its alert receiver.&lt;/p&gt;
&lt;p&gt;Alerts are sent to the alert receiver, where they are routed, grouped, inhibited, silenced, and sent as notifications. In Grafana Alerting, the default alert receiver is the Alertmanager embedded inside Grafana, referred to as the Grafana Alertmanager. However, you can use other Alertmanagers too, and these are referred to as &lt;a href=&#34;/docs/grafana/v10.2/alerting/set-up/configure-alertmanager/&#34;&gt;External Alertmanagers&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The following diagram gives you an overview of Grafana Alerting and introduces some of the fundamental features that underpin how it works.&lt;/p&gt;
&lt;figure class=&#34;figure-wrapper w-100p&#34; style=&#34;max-width: 750px;&#34;&gt;&lt;img src=&#34;/media/docs/alerting/how-alerting-works.png&#34; alt=&#34;How Alerting works&#34; width=&#34;1340&#34; height=&#34;506&#34; title=&#34;How Alerting works&#34;/&gt;&lt;figcaption class=&#34;w-100p caption text-gray-13&#34;&gt;How Alerting works&lt;/figcaption&gt;&lt;/figure&gt;
&lt;h2 id=&#34;fundamentals&#34;&gt;Fundamentals&lt;/h2&gt;
&lt;h3 id=&#34;alert-rules&#34;&gt;Alert rules&lt;/h3&gt;
&lt;p&gt;An alert rule is a set of criteria that determine when an alert should fire. It consists of one or more queries and expressions, a condition which needs to be met, an interval which determines how often the alert rule is evaluated, and a duration over which the condition must be met for an alert to fire.&lt;/p&gt;
&lt;p&gt;Alert rules are evaluated over their interval, and each alert rule can have zero, one, or any number of alerts firing at a time. The state of the alert rule is determined by its most &amp;ldquo;severe&amp;rdquo; alert, which can be one of Normal, Pending, or Firing. For example, if at least one of an alert rule&amp;rsquo;s alerts is firing, then the alert rule is also firing. The health of an alert rule is determined by the status of its most recent evaluation, which can be OK, Error, or NoData.&lt;/p&gt;
&lt;p&gt;Alert rules also support custom annotations and labels. Annotations let you add metadata to alerts, such as summaries and descriptions, while additional labels let you route alerts to specific notification policies.&lt;/p&gt;
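&lt;p&gt;As a sketch, a data source-managed alert rule (for example, in Mimir or Loki) combines an expression, a duration, labels, and annotations like this; the metric name, threshold, and label values here are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;groups:
  - name: example-rules
    rules:
      - alert: HighRequestLatency
        # The condition: fires when the expression returns results
        expr: job:request_latency_seconds:mean5m{job=&amp;quot;myjob&amp;quot;} &amp;gt; 0.5
        # The duration the condition must be met before the alert fires
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: High request latency on {{ $labels.instance }}&lt;/code&gt;&lt;/pre&gt;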
&lt;h3 id=&#34;alerts&#34;&gt;Alerts&lt;/h3&gt;
&lt;p&gt;Alerts are uniquely identified by sets of key/value pairs called labels. Each key is a label name and each value is a label value. For example, one alert might have the label &lt;code&gt;foo=bar&lt;/code&gt; and another alert might have the label &lt;code&gt;foo=baz&lt;/code&gt;. An alert can have many labels, such as &lt;code&gt;foo=bar,bar=baz&lt;/code&gt;, but it cannot have the same label name twice, such as &lt;code&gt;foo=bar,foo=baz&lt;/code&gt;. Two alerts cannot have identical label sets either; if two alerts both have the labels &lt;code&gt;foo=bar,bar=baz&lt;/code&gt;, one of the alerts is discarded. Alerts are resolved when the condition in the alert rule is no longer met, or when the alert rule is deleted.&lt;/p&gt;
&lt;p&gt;In Grafana-managed alert rules, alerts can be in Normal, Pending, Alerting, NoData, or Error states. In data source-managed alert rules, such as Mimir and Loki, alerts can be in Normal, Pending, and Alerting states, but not NoData or Error.&lt;/p&gt;
&lt;h3 id=&#34;contact-points&#34;&gt;Contact points&lt;/h3&gt;
&lt;p&gt;Contact points determine where notifications are sent. For example, you might have a contact point that sends notifications to an email address, to Slack, to an incident management system (IRM) such as Grafana OnCall or Pagerduty, or to a webhook.&lt;/p&gt;
&lt;p&gt;The notifications that are sent from contact points can be customized using notification templates. You can use notification templates to change the title, message, and structure of the notification. Notification templates are not specific to individual integrations or contact points.&lt;/p&gt;
&lt;h3 id=&#34;notification-policies&#34;&gt;Notification policies&lt;/h3&gt;
&lt;p&gt;Notification policies group alerts and then route them to contact points. They determine when notifications are sent, and how often notifications should be repeated.&lt;/p&gt;
&lt;p&gt;Alerts are matched to notification policies using label matchers. These are human-readable expressions that assert if the alert&amp;rsquo;s labels exactly match, do not exactly match, contain, or do not contain some expected text. For example, the matcher &lt;code&gt;foo=bar&lt;/code&gt; matches alerts with the label &lt;code&gt;foo=bar&lt;/code&gt; while the matcher &lt;code&gt;foo=~[a-zA-Z]&#43;&lt;/code&gt; matches alerts with any label called foo with a value that matches the regular expression &lt;code&gt;[a-zA-Z]&#43;&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;By default, an alert can only match one notification policy. However, with the &lt;code&gt;continue&lt;/code&gt; feature alerts can be made to match any number of notification policies at the same time. For more information on notification policies, see &lt;a href=&#34;/docs/grafana/v10.2/alerting/fundamentals/notification-policies/&#34;&gt;fundamentals of Notification Policies&lt;/a&gt;.&lt;/p&gt;
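&lt;p&gt;As a sketch, the equivalent routing in an Alertmanager configuration expresses label matchers on nested routes; the receiver names and matchers here are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;route:
  receiver: default-email
  routes:
    # Matches alerts with the exact label foo=bar
    - receiver: team-slack
      matchers:
        - foo = bar
      # With continue, matching carries on to later policies
      continue: true
    # Matches any foo label whose value consists of letters
    - receiver: team-oncall
      matchers:
        - foo =~ &amp;quot;[a-zA-Z]+&amp;quot;&lt;/code&gt;&lt;/pre&gt;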
&lt;h3 id=&#34;silences-and-mute-timings&#34;&gt;Silences and mute timings&lt;/h3&gt;
&lt;p&gt;Silences and mute timings allow you to pause notifications for specific alerts or even entire notification policies. Use a silence to pause notifications on an ad-hoc basis, such as during a maintenance window; and use mute timings to pause notifications at regular intervals, such as evenings and weekends.&lt;/p&gt;
&lt;h2 id=&#34;provisioning&#34;&gt;Provisioning&lt;/h2&gt;
&lt;p&gt;You can create your alerting resources (alert rules, notification policies, and so on) in the Grafana UI; in configmaps, files, and configuration management systems using file-based provisioning; and in Terraform using API-based provisioning.&lt;/p&gt;
]]></content><description>&lt;h1 id="introduction-to-alerting">Introduction to Alerting&lt;/h1>
&lt;p>Whether you’re just starting out or you&amp;rsquo;re a more experienced user of Grafana Alerting, learn more about the fundamentals and available features that help you create, manage, and respond to alerts; and improve your team’s ability to resolve issues quickly.&lt;/p></description></item><item><title>Set up Alerting</title><link>https://grafana.com/docs/grafana/v10.2/alerting/set-up/</link><pubDate>Sun, 15 Mar 2026 12:15:09 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/set-up/</guid><content><![CDATA[&lt;h1 id=&#34;set-up-alerting&#34;&gt;Set up Alerting&lt;/h1&gt;
&lt;p&gt;Set up or upgrade your implementation of Grafana Alerting.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;These are set-up instructions for Grafana Alerting Open Source.&lt;/p&gt;
&lt;h2 id=&#34;before-you-begin&#34;&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Configure your &lt;a href=&#34;/docs/grafana/v10.2/administration/data-source-management/&#34;&gt;data sources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Check which data sources are compatible with and supported by &lt;a href=&#34;/docs/grafana/v10.2/alerting/fundamentals/data-source-alerting/&#34;&gt;Grafana Alerting&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;set-up-alerting-1&#34;&gt;Set up Alerting&lt;/h2&gt;
&lt;p&gt;To set up Alerting, you need to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Configure alert rules&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create Grafana-managed or Mimir/Loki-managed alert rules and recording rules&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure contact points&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Check the default contact point and update the email address&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[Optional] Add new contact points and integrations&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure notification policies&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Check the default notification policy&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[Optional] Add additional nested policies&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[Optional] Add labels and label matchers to control alert routing&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[Optional] Integrate with &lt;a href=&#34;/docs/oncall/latest/integrations/grafana-alerting/&#34;&gt;Grafana OnCall&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;advanced-set-up-options&#34;&gt;Advanced set up options&lt;/h2&gt;
&lt;p&gt;Grafana Alerting supports many additional configuration options, from configuring external Alertmanagers, to routing Grafana-managed alerts outside of Grafana, to defining your alerting setup as code.&lt;/p&gt;
&lt;p&gt;The following topics provide you with advanced configuration options for Grafana Alerting.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/set-up/provision-alerting-resources/file-provisioning/&#34;&gt;Provision alert rules using file provisioning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/set-up/provision-alerting-resources/terraform-provisioning/&#34;&gt;Provision alert rules using Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/set-up/configure-alertmanager/&#34;&gt;Add an external Alertmanager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/set-up/configure-high-availability/&#34;&gt;Configure high availability&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="set-up-alerting">Set up Alerting&lt;/h1>
&lt;p>Set up or upgrade your implementation of Grafana Alerting.&lt;/p>
&lt;p>&lt;strong>Note:&lt;/strong>&lt;/p>
&lt;p>These are set-up instructions for Grafana Alerting Open Source.&lt;/p>
&lt;h2 id="before-you-begin">Before you begin&lt;/h2>
&lt;ul>
&lt;li>Configure your &lt;a href="/docs/grafana/v10.2/administration/data-source-management/">data sources&lt;/a>&lt;/li>
&lt;li>Check which data sources are compatible with and supported by &lt;a href="/docs/grafana/v10.2/alerting/fundamentals/data-source-alerting/">Grafana Alerting&lt;/a>&lt;/li>
&lt;/ul>
&lt;h2 id="set-up-alerting-1">Set up Alerting&lt;/h2>
&lt;p>To set up Alerting, you need to:&lt;/p></description></item><item><title>Configure Alerting</title><link>https://grafana.com/docs/grafana/v10.2/alerting/alerting-rules/</link><pubDate>Sun, 15 Mar 2026 12:15:09 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/alerting-rules/</guid><content><![CDATA[&lt;h1 id=&#34;configure-alerting&#34;&gt;Configure Alerting&lt;/h1&gt;
&lt;p&gt;Configure the features and integrations that you need to create and manage your alerts.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Configure alert rules&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/alerting-rules/create-grafana-managed-rule/&#34;&gt;Configure Grafana-managed alert rules&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/alerting-rules/create-mimir-loki-managed-rule/&#34;&gt;Configure data source-managed alert rules&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Configure recording rules&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Recording rules are only available for compatible Prometheus or Loki data sources.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For more information, see &lt;a href=&#34;/docs/grafana/v10.2/alerting/alerting-rules/create-mimir-loki-managed-recording-rule/&#34;&gt;Configure recording rules&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Configure contact points&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For information on how to configure contact points, see &lt;a href=&#34;/docs/grafana/v10.2/alerting/alerting-rules/manage-contact-points/&#34;&gt;Configure contact points&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Configure notification policies&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For information on how to configure notification policies, see &lt;a href=&#34;/docs/grafana/v10.2/alerting/alerting-rules/create-notification-policy/&#34;&gt;Configure notification policies&lt;/a&gt;.&lt;/p&gt;
]]></content><description>&lt;h1 id="configure-alerting">Configure Alerting&lt;/h1>
&lt;p>Configure the features and integrations that you need to create and manage your alerts.&lt;/p>
&lt;p>&lt;strong>Configure alert rules&lt;/strong>&lt;/p>
&lt;p>&lt;a href="/docs/grafana/v10.2/alerting/alerting-rules/create-grafana-managed-rule/">Configure Grafana-managed alert rules&lt;/a>.&lt;/p>
&lt;p>&lt;a href="/docs/grafana/v10.2/alerting/alerting-rules/create-mimir-loki-managed-rule/">Configure data source-managed alert rules&lt;/a>&lt;/p></description></item><item><title>Manage your alerts</title><link>https://grafana.com/docs/grafana/v10.2/alerting/manage-notifications/</link><pubDate>Sun, 15 Mar 2026 12:15:09 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/manage-notifications/</guid><content><![CDATA[&lt;h1 id=&#34;manage-your-alerts&#34;&gt;Manage your alerts&lt;/h1&gt;
&lt;p&gt;Once you have set up your alert rules, contact points, and notification policies, you can use Grafana Alerting to:&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/manage-notifications/create-silence/&#34;&gt;Create silences&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/manage-notifications/mute-timings/&#34;&gt;Create mute timings&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/manage-notifications/declare-incident-from-alert/&#34;&gt;Declare incidents from firing alerts&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/manage-notifications/view-state-health/&#34;&gt;View the state and health of alert rules&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/docs/grafana/v10.2/alerting/manage-notifications/view-alert-rules/&#34;&gt;View and filter alert rules&lt;/a&gt;&lt;/p&gt;
]]></content><description>&lt;h1 id="manage-your-alerts">Manage your alerts&lt;/h1>
&lt;p>Once you have set up your alert rules, contact points, and notification policies, you can use Grafana Alerting to:&lt;/p>
&lt;p>&lt;a href="/docs/grafana/v10.2/alerting/manage-notifications/create-silence/">Create silences&lt;/a>&lt;/p>
&lt;p>&lt;a href="/docs/grafana/v10.2/alerting/manage-notifications/mute-timings/">Create mute timings&lt;/a>&lt;/p></description></item><item><title>Meta monitoring</title><link>https://grafana.com/docs/grafana/v10.2/alerting/monitor/</link><pubDate>Sun, 15 Mar 2026 12:15:09 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/monitor/</guid><content><![CDATA[&lt;h1 id=&#34;meta-monitoring&#34;&gt;Meta monitoring&lt;/h1&gt;
&lt;p&gt;Monitor your alerting metrics to ensure you identify potential issues before they become critical.&lt;/p&gt;
&lt;p&gt;Meta monitoring is the process of monitoring your monitoring system and alerting when your monitoring is not working as it should.&lt;/p&gt;
&lt;p&gt;To enable meta monitoring, Grafana provides predefined metrics.&lt;/p&gt;
&lt;p&gt;Identify which metrics are critical to your monitoring system (i.e. Grafana) and then set up how you want to monitor them.&lt;/p&gt;
&lt;p&gt;You can use meta-monitoring metrics to understand the health of your alerting system in the following ways:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;[Optional] Create a dashboard in Grafana that uses this metric in a panel (just like you would for any other kind of metric).&lt;/li&gt;
&lt;li&gt;[Optional] Create an alert rule in Grafana that checks this metric regularly (just like you would do for any other kind of alert rule).&lt;/li&gt;
&lt;li&gt;[Optional] Use the Explore module in Grafana.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;metrics-for-grafana-managed-alerts&#34;&gt;Metrics for Grafana-managed alerts&lt;/h2&gt;
&lt;p&gt;To meta monitor Grafana-managed alerts, you need a Prometheus server or another metrics database to collect and store the metrics exported by Grafana.&lt;/p&gt;
&lt;p&gt;For example, if you are using Prometheus, add a &lt;code&gt;scrape_config&lt;/code&gt; to Prometheus to scrape metrics from Grafana, Alertmanager, or your data sources.&lt;/p&gt;
&lt;h3 id=&#34;example&#34;&gt;Example&lt;/h3&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;- job_name: grafana
  honor_timestamps: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  static_configs:
    - targets:
        - grafana:3000&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;list-of-available-metrics&#34;&gt;List of available metrics&lt;/h3&gt;
&lt;p&gt;The Grafana ruler, which is responsible for evaluating alert rules, and the Grafana Alertmanager, which is responsible for sending notifications of firing and resolved alerts, provide a number of metrics that let you observe them.&lt;/p&gt;
&lt;h4 id=&#34;grafana_alerting_alerts&#34;&gt;grafana_alerting_alerts&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the number of &lt;code&gt;normal&lt;/code&gt;, &lt;code&gt;pending&lt;/code&gt;, &lt;code&gt;alerting&lt;/code&gt;, &lt;code&gt;nodata&lt;/code&gt; and &lt;code&gt;error&lt;/code&gt; alerts. For example, you might want to create an alert that fires when &lt;code&gt;grafana_alerting_alerts{state=&amp;quot;error&amp;quot;}&lt;/code&gt; is greater than 0.&lt;/p&gt;
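&lt;p&gt;A Prometheus alert rule for this might look like the following; the rule name and duration are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;- alert: GrafanaAlertEvaluationErrors
  # Fires when any Grafana alert instance is in the error state
  expr: grafana_alerting_alerts{state=&amp;quot;error&amp;quot;} &amp;gt; 0
  for: 5m&lt;/code&gt;&lt;/pre&gt;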
&lt;h4 id=&#34;grafana_alerting_schedule_alert_rules&#34;&gt;grafana_alerting_schedule_alert_rules&lt;/h4&gt;
&lt;p&gt;This metric is a gauge that shows you the number of alert rules scheduled. An alert rule is scheduled unless it is paused, and the value of this metric should match the total number of non-paused alert rules in Grafana.&lt;/p&gt;
&lt;h4 id=&#34;grafana_alerting_schedule_periodic_duration_seconds_bucket&#34;&gt;grafana_alerting_schedule_periodic_duration_seconds_bucket&lt;/h4&gt;
&lt;p&gt;This metric is a histogram that shows you the time it takes to process an individual tick in the scheduler that evaluates alert rules. If the scheduler takes longer than 10 seconds to process a tick, pending evaluations start to accumulate and alert rules might be evaluated later than expected.&lt;/p&gt;
&lt;h4 id=&#34;grafana_alerting_schedule_query_alert_rules_duration_seconds_bucket&#34;&gt;grafana_alerting_schedule_query_alert_rules_duration_seconds_bucket&lt;/h4&gt;
&lt;p&gt;This metric is a histogram that shows you how long it takes the scheduler to fetch the latest rules from the database. If this metric is elevated, &lt;code&gt;schedule_periodic_duration_seconds&lt;/code&gt; will be elevated too.&lt;/p&gt;
&lt;h4 id=&#34;grafana_alerting_scheduler_behind_seconds&#34;&gt;grafana_alerting_scheduler_behind_seconds&lt;/h4&gt;
&lt;p&gt;This metric is a gauge that shows you the number of seconds that the scheduler is behind where it should be. This number will increase if &lt;code&gt;schedule_periodic_duration_seconds&lt;/code&gt; is longer than 10 seconds, and decrease when it is less than 10 seconds. The smallest possible value of this metric is 0.&lt;/p&gt;
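&lt;p&gt;For example, you could alert when the scheduler falls noticeably behind; the one-minute threshold is illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;- alert: GrafanaSchedulerBehind
  # Fires when rule evaluation lags by more than a minute
  expr: grafana_alerting_scheduler_behind_seconds &amp;gt; 60
  for: 10m&lt;/code&gt;&lt;/pre&gt;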
&lt;h4 id=&#34;grafana_alerting_notification_latency_seconds_bucket&#34;&gt;grafana_alerting_notification_latency_seconds_bucket&lt;/h4&gt;
&lt;p&gt;This metric is a histogram that shows you the number of seconds taken to send notifications for firing and resolved alerts. This metric will let you observe slow or over-utilized integrations, such as an SMTP server that is being given emails faster than it can send them.&lt;/p&gt;
&lt;h2 id=&#34;metrics-for-mimir-managed-alerts&#34;&gt;Metrics for Mimir-managed alerts&lt;/h2&gt;
&lt;p&gt;To meta monitor Grafana Mimir-managed alerts, open source and on-premise users need a Prometheus/Mimir server, or another metrics database to collect and store metrics exported by the Mimir ruler.&lt;/p&gt;
&lt;h4 id=&#34;rule_evaluation_failures_total&#34;&gt;rule_evaluation_failures_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the total number of rule evaluation failures.&lt;/p&gt;
&lt;h2 id=&#34;metrics-for-alertmanager&#34;&gt;Metrics for Alertmanager&lt;/h2&gt;
&lt;p&gt;To meta monitor the Alertmanager, you need a Prometheus/Mimir server, or another metrics database to collect and store metrics exported by Alertmanager.&lt;/p&gt;
&lt;p&gt;For example, if you are using Prometheus you should add a &lt;code&gt;scrape_config&lt;/code&gt; to Prometheus to scrape metrics from your Alertmanager.&lt;/p&gt;
&lt;h3 id=&#34;example-1&#34;&gt;Example&lt;/h3&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;- job_name: alertmanager
  honor_timestamps: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  static_configs:
    - targets:
        - alertmanager:9093&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;list-of-available-metrics-1&#34;&gt;List of available metrics&lt;/h3&gt;
&lt;p&gt;The following is a list of available metrics for Alertmanager.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_alerts&#34;&gt;alertmanager_alerts&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the number of active, suppressed, and unprocessed alerts in Alertmanager. Suppressed alerts are silenced alerts, and unprocessed alerts are alerts that have been sent to the Alertmanager but have not been processed.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_alerts_invalid_total&#34;&gt;alertmanager_alerts_invalid_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the number of invalid alerts that were sent to Alertmanager. This counter should remain at 0, so in most cases you will want to create an alert that fires whenever this metric increases.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_notifications_total&#34;&gt;alertmanager_notifications_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you how many notifications have been sent by Alertmanager. The metric uses a label &amp;ldquo;integration&amp;rdquo; to show the number of notifications sent by integration, such as email.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_notifications_failed_total&#34;&gt;alertmanager_notifications_failed_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you how many notifications have failed in total. This metric also uses a label &amp;ldquo;integration&amp;rdquo; to show the number of failed notifications by integration, such as failed emails. In most cases you will want to use the &lt;code&gt;rate&lt;/code&gt; function to understand how often notifications are failing to be sent.&lt;/p&gt;
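&lt;p&gt;As a sketch, the following rule alerts on the ratio of failed to total notifications per integration; the 10% threshold and rule name are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;- alert: AlertmanagerNotificationFailures
  # Fires when more than 10% of notifications fail for an integration
  expr: |
    sum by (integration) (rate(alertmanager_notifications_failed_total[5m]))
      /
    sum by (integration) (rate(alertmanager_notifications_total[5m]))
      &amp;gt; 0.1
  for: 5m&lt;/code&gt;&lt;/pre&gt;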
&lt;h4 id=&#34;alertmanager_notification_latency_seconds_bucket&#34;&gt;alertmanager_notification_latency_seconds_bucket&lt;/h4&gt;
&lt;p&gt;This metric is a histogram that shows you the amount of time it takes Alertmanager to send notifications and for those notifications to be accepted by the receiving service. This metric uses a label &amp;ldquo;integration&amp;rdquo; to show the amount of time by integration. For example, you can use this metric to show the 95th percentile latency of sending emails.&lt;/p&gt;
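&lt;p&gt;For example, you could compute the 95th percentile latency per integration with a recording rule like the following; the record name is illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;- record: integration:alertmanager_notification_latency_seconds:p95
  # 95th percentile of notification latency, per integration
  expr: |
    histogram_quantile(0.95,
      sum by (le, integration) (
        rate(alertmanager_notification_latency_seconds_bucket[5m])
      )
    )&lt;/code&gt;&lt;/pre&gt;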
&lt;h2 id=&#34;metrics-for-alertmanager-in-high-availability-mode&#34;&gt;Metrics for Alertmanager in high availability mode&lt;/h2&gt;
&lt;p&gt;If you are using Alertmanager in high availability mode, there are a number of additional metrics that you might want to create alerts for.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_cluster_members&#34;&gt;alertmanager_cluster_members&lt;/h4&gt;
&lt;p&gt;This metric is a gauge that shows you the current number of members in the cluster. The value of this gauge should be the same across all Alertmanagers. If different Alertmanagers are showing different numbers of members then this is indicative of an issue with your Alertmanager cluster. You should look at the metrics and logs from your Alertmanagers to better understand what might be going wrong.&lt;/p&gt;
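&lt;p&gt;One way to detect such a disagreement, as a sketch, is to compare the minimum and maximum membership reported across the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;- alert: AlertmanagerClusterInconsistent
  # Fires when Alertmanagers disagree about the cluster size
  expr: max(alertmanager_cluster_members) != min(alertmanager_cluster_members)
  for: 10m&lt;/code&gt;&lt;/pre&gt;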
&lt;h4 id=&#34;alertmanager_cluster_failed_peers&#34;&gt;alertmanager_cluster_failed_peers&lt;/h4&gt;
&lt;p&gt;This metric is a gauge that shows you the current number of failed peers.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_cluster_health_score&#34;&gt;alertmanager_cluster_health_score&lt;/h4&gt;
&lt;p&gt;This metric is a gauge showing the health score of the Alertmanager. Lower values are better, and zero means the Alertmanager is healthy.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_cluster_peer_info&#34;&gt;alertmanager_cluster_peer_info&lt;/h4&gt;
&lt;p&gt;This metric is a gauge. It has a constant value &lt;code&gt;1&lt;/code&gt;, and contains a label called &amp;ldquo;peer&amp;rdquo; containing the Peer ID of each known peer.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_cluster_reconnections_failed_total&#34;&gt;alertmanager_cluster_reconnections_failed_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the number of failed peer connection attempts. In most cases you will want to use the &lt;code&gt;rate&lt;/code&gt; function to understand how often reconnections fail as this may be indicative of an issue or instability in your network.&lt;/p&gt;
]]></content><description>&lt;h1 id="meta-monitoring">Meta monitoring&lt;/h1>
&lt;p>Monitor your alerting metrics to ensure you identify potential issues before they become critical.&lt;/p>
&lt;p>Meta monitoring is the process of monitoring your monitoring system and alerting when your monitoring is not working as it should.&lt;/p></description></item></channel></rss>