<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Alerting on Grafana Labs</title><link>https://grafana.com/docs/grafana/v9.0/alerting/</link><description>Recent content in Alerting on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/grafana/v9.0/alerting/index.xml" rel="self" type="application/rss+xml"/><item><title>Upgrade to Grafana Alerting</title><link>https://grafana.com/docs/grafana/v9.0/alerting/migrating-alerts/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/migrating-alerts/</guid><content><![CDATA[&lt;h1 id=&#34;upgrade-to-grafana-alerting&#34;&gt;Upgrade to Grafana Alerting&lt;/h1&gt;
&lt;p&gt;Grafana Alerting is enabled by default for new installations or existing installations whether or not legacy alerting is configured.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: When upgrading, your dashboard alerts are migrated to a new format. This migration can be rolled back easily by &lt;a href=&#34;opt-out/&#34;&gt;opting out&lt;/a&gt;. If you have any questions regarding this migration, please contact us.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Existing installations that do not use legacy alerting will have Grafana Alerting enabled by default unless alerting is disabled in the configuration.&lt;/p&gt;
&lt;p&gt;Likewise, existing installations that use legacy alerting will be automatically upgraded to Grafana Alerting unless you have &lt;a href=&#34;opt-out/&#34;&gt;opted out&lt;/a&gt; of Grafana Alerting before migration takes place. During the upgrade, legacy alerts are migrated to the new alerts type and no alerts or alerting data are lost.&lt;/p&gt;
&lt;p&gt;Once the upgrade has taken place, you still have the option to &lt;a href=&#34;roll-back/&#34;&gt;roll back&lt;/a&gt; to legacy alerting. However, we do not recommend choosing this option. If you do choose to roll back, Grafana will restore your alerts to the alerts you had at the point in time when the upgrade took place. All new alerts and changes made exclusively in Grafana Alerting will be deleted.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Cloud customers, who do not want to upgrade to Grafana Alerting, should contact customer support.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;If you have opted out or rolled back, you can always choose to &lt;a href=&#34;opt-in/&#34;&gt;opt in&lt;/a&gt; to Grafana Alerting at a later point in time.&lt;/p&gt;
&lt;p&gt;The following table provides details on the upgrade for Cloud, Enterprise, and OSS installations and the new Grafana Alerting UI.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Grafana instance upgraded to 9.0&lt;/th&gt;
              &lt;th&gt;&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;Cloud&lt;/td&gt;
              &lt;td&gt;Existing Cloud installations with legacy dashboard alerting will have two alerting icons in the left navigation panel - the old alerting plugin icon and the new Grafana Alerting icon. During upgrade, existing alerts from the Cloud alerting plugin are migrated to Grafana Alerting. Once migration is complete, you can access and manage the older alerts from the new alerting Grafana Alerting icon in the navigation panel. The (older) Cloud alerting plugin is uninstalled from your cloud instance. Contact customer support if you &lt;strong&gt;do not wish&lt;/strong&gt; to migrate to Grafana Alerting for your Cloud stack. If you choose to use legacy alerting, use the You will see the new Grafana Alerting icon as well as the old Cloud alerting plugin in the left navigation panel.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Enterprise&lt;/td&gt;
              &lt;td&gt;Existing Enterprise instances using legacy alerting will have both the old (marked as legacy) and the new alerting icons in the navigation panel. During upgrade, existing legacy alerts are migrated to Grafana Alerting. If you wish, you can &lt;a href=&#34;opt-out/&#34;&gt;opt-out&lt;/a&gt; of Grafana Alerting and roll back to legacy alerting. In that case, you can manage your legacy alerts from the alerting icon marked as legacy.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;OSS&lt;/td&gt;
              &lt;td&gt;Existing OSS installations with legacy dashboard alerting will have two alerting icons in the left navigation panel - the old alerting icon (marked as legacy) and the new Grafana Alerting icon. During upgrade, existing legacy alerts are migrated to Grafana Alerting. If you wish, you can &lt;a href=&#34;opt-out/&#34;&gt;opt-out&lt;/a&gt; of Grafana Alerting and roll back to legacy alerting. In that case, you can manage your legacy alerts from the alerting icon marked as legacy.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Legacy alerting will be deprecated in a future release (v10).&lt;/p&gt;&lt;/blockquote&gt;
]]></content><description>&lt;h1 id="upgrade-to-grafana-alerting">Upgrade to Grafana Alerting&lt;/h1>
&lt;p>Grafana Alerting is enabled by default for new installations or existing installations whether or not legacy alerting is configured.&lt;/p>
&lt;blockquote>
&lt;p>&lt;strong>Note&lt;/strong>: When upgrading, your dashboard alerts are migrated to a new format. This migration can be rolled back easily by &lt;a href="opt-out/">opting out&lt;/a>. If you have any questions regarding this migration, please contact us.&lt;/p></description></item><item><title>Alerting fundamentals</title><link>https://grafana.com/docs/grafana/v9.0/alerting/fundamentals/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/fundamentals/</guid><content><![CDATA[&lt;h1 id=&#34;alerting-fundamentals&#34;&gt;Alerting fundamentals&lt;/h1&gt;
&lt;p&gt;This section includes the following fundamental concepts of Grafana Alerting:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;annotation-label/&#34;&gt;Annotations and labels for alerting rules&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;alertmanager/&#34;&gt;Alertmanager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;state-and-health/&#34;&gt;State and health of alerting rules&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;evaluate-grafana-alerts/&#34;&gt;Evaluating Grafana managed alerts&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="alerting-fundamentals">Alerting fundamentals&lt;/h1>
&lt;p>This section includes the following fundamental concepts of Grafana Alerting:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="annotation-label/">Annotations and labels for alerting rules&lt;/a>&lt;/li>
&lt;li>&lt;a href="alertmanager/">Alertmanager&lt;/a>&lt;/li>
&lt;li>&lt;a href="state-and-health/">State and health of alerting rules&lt;/a>&lt;/li>
&lt;li>&lt;a href="evaluate-grafana-alerts/">Evaluating Grafana managed alerts&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Create and manage rules</title><link>https://grafana.com/docs/grafana/v9.0/alerting/alerting-rules/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/alerting-rules/</guid><content><![CDATA[&lt;h1 id=&#34;create-and-manage-grafana-alerting-rules&#34;&gt;Create and manage Grafana Alerting rules&lt;/h1&gt;
&lt;p&gt;An alerting rule is a set of evaluation criteria that determines whether an alert will fire. The rule consists of one or more queries and expressions, a condition, the frequency of evaluation, and optionally, the duration over which the condition is met.&lt;/p&gt;
&lt;p&gt;While queries and expressions select the data set to evaluate, a condition sets the threshold that an alert must meet or exceed to create an alert. An interval specifies how frequently an alerting rule is evaluated. Duration, when configured, indicates how long a condition must be met. The rules can also define alerting behavior in the absence of data.&lt;/p&gt;
&lt;p&gt;You can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;create-mimir-loki-managed-rule/&#34;&gt;Create Grafana Mimir or Loki managed alert rule&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;create-mimir-loki-managed-recording-rule/&#34;&gt;Create Grafana Mimir or Loki managed recording rule&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;edit-mimir-loki-namespace-group/&#34;&gt;Edit Grafana Mimir or Loki rule groups and namespaces&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;create-grafana-managed-rule/&#34;&gt;Create Grafana managed alert rule&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;../fundamentals/state-and-health/&#34;&gt;State and health of alerting rules&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;rule-list/&#34;&gt;Manage alerting rules&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="create-and-manage-grafana-alerting-rules">Create and manage Grafana Alerting rules&lt;/h1>
&lt;p>An alerting rule is a set of evaluation criteria that determines whether an alert will fire. The rule consists of one or more queries and expressions, a condition, the frequency of evaluation, and optionally, the duration over which the condition is met.&lt;/p></description></item><item><title>Contact points</title><link>https://grafana.com/docs/grafana/v9.0/alerting/contact-points/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/contact-points/</guid><content><![CDATA[&lt;h1 id=&#34;contact-points&#34;&gt;Contact points&lt;/h1&gt;
&lt;p&gt;Use contact points to define how your contacts are notified when an alert fires. A contact point can have one or more contact point types, for example, email, slack, webhook, and so on. When an alert fires, a notification is sent to all contact point types listed for a contact point. Optionally, use &lt;a href=&#34;message-templating/&#34;&gt;message templates&lt;/a&gt; to customize notification messages for the contact point types.&lt;/p&gt;
&lt;p&gt;You can configure Grafana managed contact points as well as contact points for an &lt;a href=&#34;../../datasources/alertmanager/&#34;&gt;external Alertmanager data source&lt;/a&gt;. For more information, see &lt;a href=&#34;../fundamentals/alertmanager/&#34;&gt;Alertmanager&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Before you begin, see &lt;a href=&#34;../&#34;&gt;Grafana Alerting&lt;/a&gt; which explains the various components of Grafana Alerting. We also recommend that you familiarize yourself with some of the &lt;a href=&#34;../fundamentals/&#34;&gt;fundamental concepts&lt;/a&gt; of Grafana Alerting.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;create-contact-point/&#34;&gt;Create contact point&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;edit-contact-point/&#34;&gt;Edit contact point&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;test-contact-point/&#34;&gt;Test contact point&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;delete-contact-point/&#34;&gt;Delete contact point&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;notifiers/&#34;&gt;List of notifiers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;message-templating/&#34;&gt;Message templating&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="contact-points">Contact points&lt;/h1>
&lt;p>Use contact points to define how your contacts are notified when an alert fires. A contact point can have one or more contact point types, for example, email, slack, webhook, and so on. When an alert fires, a notification is sent to all contact point types listed for a contact point. Optionally, use &lt;a href="message-templating/">message templates&lt;/a> to customize notification messages for the contact point types.&lt;/p></description></item><item><title>Notification policies</title><link>https://grafana.com/docs/grafana/v9.0/alerting/notifications/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/notifications/</guid><content><![CDATA[&lt;h1 id=&#34;notification-policies&#34;&gt;Notification policies&lt;/h1&gt;
&lt;p&gt;Notification policies determine how alerts are routed to contact points. Policies have a tree structure, where each policy can have one or more child policies. Each policy, except for the root policy, can also match specific alert labels. Each alert is evaluated by the root policy and subsequently by each child policy. If you enable the &lt;code&gt;Continue matching subsequent sibling nodes&lt;/code&gt; option is enabled for a specific policy, then evaluation continues even after one or more matches. A parent policy’s configuration settings and contact point information govern the behavior of an alert that does not match any of the child policies. A root policy governs any alert that does not match a specific policy.&lt;/p&gt;
&lt;p&gt;You can configure Grafana managed notification policies as well as notification policies for an &lt;a href=&#34;../../datasources/alertmanager/&#34;&gt;external Alertmanager data source&lt;/a&gt;. For more information, see &lt;a href=&#34;../fundamentals/alertmanager/&#34;&gt;Alertmanager&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;grouping&#34;&gt;Grouping&lt;/h2&gt;
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p &#34;
    style=&#34;max-width: 650px;&#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link captioned&#34;
        href=&#34;/static/img/docs/alerting/unified/notification-policies-grouping.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload mb-0&#34;
          data-src=&#34;/static/img/docs/alerting/unified/notification-policies-grouping.png&#34;data-srcset=&#34;/static/img/docs/alerting/unified/notification-policies-grouping.png?w=320 320w, /static/img/docs/alerting/unified/notification-policies-grouping.png?w=550 550w, /static/img/docs/alerting/unified/notification-policies-grouping.png?w=750 750w, /static/img/docs/alerting/unified/notification-policies-grouping.png?w=900 900w, /static/img/docs/alerting/unified/notification-policies-grouping.png?w=1040 1040w, /static/img/docs/alerting/unified/notification-policies-grouping.png?w=1240 1240w, /static/img/docs/alerting/unified/notification-policies-grouping.png?w=1920 1920w&#34;data-sizes=&#34;auto&#34;alt=&#34;Notification policies grouping&#34;width=&#34;1668&#34;height=&#34;984&#34;title=&#34;Notification policies grouping&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;/static/img/docs/alerting/unified/notification-policies-grouping.png&#34;
            alt=&#34;Notification policies grouping&#34;width=&#34;1668&#34;height=&#34;984&#34;title=&#34;Notification policies grouping&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;figcaption class=&#34;w-100p caption text-gray-13  &#34;&gt;Notification policies grouping&lt;/figcaption&gt;&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;Grouping is a new and key concept of Grafana Alerting that categorizes alert notifications of similar nature into a single funnel. This allows you to properly route alert notifications during larger outages when many parts of a system fail at once causing a high number of alerts to fire simultaneously.&lt;/p&gt;
&lt;p&gt;For example, suppose you have 100 services connected to a database in different environments. These services are differentiated by the label &lt;code&gt;env=environmentname&lt;/code&gt;. An alert rule is in place to monitor whether your services can reach the database named &lt;code&gt;alertname=DatabaseUnreachable&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;When a network partition occurs, half of your services can no longer reach the database. As a result, 50 different alerts (assuming half of your services) are fired. For this situation, you want to receive a single-page notification (as opposed to 50) with a list of the environments that are affected.&lt;/p&gt;
&lt;p&gt;You can configure grouping to be &lt;code&gt;group_by: [alertname]&lt;/code&gt; (take note that the &lt;code&gt;env&lt;/code&gt; label is omitted). With this configuration in place, Grafana sends a single compact notification that has all the affected environments for this alert rule.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Grafana also has a special label named &lt;code&gt;...&lt;/code&gt; that you can use to group all alerts by all labels (effectively disabling grouping), therefore each alert will go into its own group. It is different from the default of &lt;code&gt;group_by: null&lt;/code&gt; where &lt;strong&gt;all&lt;/strong&gt; alerts go into a single group.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2 id=&#34;edit-root-notification-policy&#34;&gt;Edit root notification policy&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Before Grafana v8.2, the configuration of the embedded Alertmanager was shared across organisations. Users of Grafana 8.0 and 8.1 are advised to use the new Grafana 8 Alerts only if they have one organisation. Otherwise, silences for the Grafana managed alerts will be visible by all organizations.&lt;/p&gt;&lt;/blockquote&gt;
&lt;ol&gt;
&lt;li&gt;In the Grafana menu, click the &lt;strong&gt;Alerting&lt;/strong&gt; (bell) icon to open the Alerting page listing existing alerts.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Notification policies&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;From the &lt;strong&gt;Alertmanager&lt;/strong&gt; dropdown, select an external Alertmanager. By default, the Grafana Alertmanager is selected.&lt;/li&gt;
&lt;li&gt;In the Root policy section, click &lt;strong&gt;Edit&lt;/strong&gt; (pen icon).&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;Default contact point&lt;/strong&gt;, update the &lt;a href=&#34;../contact-points/&#34;&gt;contact point&lt;/a&gt; to whom notifications should be sent for rules when alert rules do not match any specific policy.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;Group by&lt;/strong&gt;, choose labels to group alerts by. If multiple alerts are matched for this policy, then they are grouped by these labels. A notification is sent per group. If the field is empty (default), then all notifications are sent in a single group. Use a special label &lt;code&gt;...&lt;/code&gt; to group alerts by all labels (which effectively disables grouping).&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;Timing options&lt;/strong&gt;, select from the following options:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Group wait&lt;/strong&gt; Time to wait to buffer alerts of the same group before sending an initial notification. Default is 30 seconds.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Group interval&lt;/strong&gt; Minimum time interval between two notifications for a group. Default is 5 minutes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Repeat interval&lt;/strong&gt; Minimum time interval for re-sending a notification if no new alerts were added to the group. Default is 4 hours.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Save&lt;/strong&gt; to save your changes.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;add-new-specific-policy&#34;&gt;Add new specific policy&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;In the Grafana menu, click the &lt;strong&gt;Alerting&lt;/strong&gt; (bell) icon to open the Alerting page listing existing alerts.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Notification policies&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;From the &lt;strong&gt;Alertmanager&lt;/strong&gt; dropdown, select an Alertmanager. By default, the Grafana Alertmanager is selected.&lt;/li&gt;
&lt;li&gt;To add a top level specific policy, go to the &lt;strong&gt;Specific routing&lt;/strong&gt; section and click &lt;strong&gt;New specific policy&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;Matching labels&lt;/strong&gt; section, add one or more rules for matching alert labels. For more information, see &lt;a href=&#34;../fundamentals/annotation-label/labels-and-label-matchers/&#34;&gt;&amp;ldquo;Labels and label matchers&amp;rdquo;&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;Contact point&lt;/strong&gt;, add the &lt;a href=&#34;../contact-points/&#34;&gt;contact point&lt;/a&gt; to send notification to if alert matches only this specific policy and not any of the nested policies.&lt;/li&gt;
&lt;li&gt;Optionally, enable &lt;strong&gt;Continue matching subsequent sibling nodes&lt;/strong&gt; to continue matching nested policies even after the alert matched the parent policy. When this option is enabled, you can get more than one notification. Use it to send notification to a catch-all contact point as well as to one of more specific contact points handled by nested policies.&lt;/li&gt;
&lt;li&gt;Optionally, enable &lt;strong&gt;Override grouping&lt;/strong&gt; to specify the same grouping as the root policy. If this option is not enabled, the root policy grouping is used.&lt;/li&gt;
&lt;li&gt;Optionally, enable &lt;strong&gt;Override general timings&lt;/strong&gt; to override the timing options configured in the group notification policy.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Save policy&lt;/strong&gt; to save your changes.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;add-nested-policy&#34;&gt;Add nested policy&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Expand the specific policy you want to update.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add nested policy&lt;/strong&gt;, then add the details using information in &lt;a href=&#34;#add-new-specific-policy&#34;&gt;Add new specific policy&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Save policy&lt;/strong&gt; to save your changes.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;edit-specific-policy&#34;&gt;Edit specific policy&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;In the Alerting page, click &lt;strong&gt;Notification policies&lt;/strong&gt; to open the page listing existing policies.&lt;/li&gt;
&lt;li&gt;Find the policy you want to edit, then click &lt;strong&gt;Edit&lt;/strong&gt; (pen icon).&lt;/li&gt;
&lt;li&gt;Make any changes using instructions in &lt;a href=&#34;#add-new-specific-policy&#34;&gt;Add new specific policy&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Save policy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;example&#34;&gt;Example&lt;/h2&gt;
&lt;p&gt;An example of an alert configuration.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create a &amp;ldquo;default&amp;rdquo; contact point for slack notifications, and set it on root policy.&lt;/li&gt;
&lt;li&gt;Edit the root policy grouping to group alerts by &lt;code&gt;cluster&lt;/code&gt;, &lt;code&gt;namespace&lt;/code&gt; and &lt;code&gt;severity&lt;/code&gt; so that you get a notification per alert rule and specific kubernetes cluster and namespace.&lt;/li&gt;
&lt;li&gt;Create specific route for alerts coming from the development cluster with an appropriate contact point.&lt;/li&gt;
&lt;li&gt;Create a specific route for alerts with &amp;ldquo;critical&amp;rdquo; severity with a more invasive contact point type, like pager duty notification.&lt;/li&gt;
&lt;li&gt;Create specific routes for particular teams that handle their own onduty rotations.&lt;/li&gt;
&lt;/ul&gt;
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p &#34;
    style=&#34;max-width: 650px;&#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link captioned&#34;
        href=&#34;/static/img/docs/alerting/unified/notification-policies-8-0.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload mb-0&#34;
          data-src=&#34;/static/img/docs/alerting/unified/notification-policies-8-0.png&#34;data-srcset=&#34;/static/img/docs/alerting/unified/notification-policies-8-0.png?w=320 320w, /static/img/docs/alerting/unified/notification-policies-8-0.png?w=550 550w, /static/img/docs/alerting/unified/notification-policies-8-0.png?w=750 750w, /static/img/docs/alerting/unified/notification-policies-8-0.png?w=900 900w, /static/img/docs/alerting/unified/notification-policies-8-0.png?w=1040 1040w, /static/img/docs/alerting/unified/notification-policies-8-0.png?w=1240 1240w, /static/img/docs/alerting/unified/notification-policies-8-0.png?w=1920 1920w&#34;data-sizes=&#34;auto&#34;alt=&#34;Notification policies&#34;width=&#34;2614&#34;height=&#34;1614&#34;title=&#34;Notification policies&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;/static/img/docs/alerting/unified/notification-policies-8-0.png&#34;
            alt=&#34;Notification policies&#34;width=&#34;2614&#34;height=&#34;1614&#34;title=&#34;Notification policies&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;figcaption class=&#34;w-100p caption text-gray-13  &#34;&gt;Notification policies&lt;/figcaption&gt;&lt;/a&gt;&lt;/figure&gt;
]]></content><description>&lt;h1 id="notification-policies">Notification policies&lt;/h1>
&lt;p>Notification policies determine how alerts are routed to contact points. Policies have a tree structure, where each policy can have one or more child policies. Each policy, except for the root policy, can also match specific alert labels. Each alert is evaluated by the root policy and subsequently by each child policy. If you enable the &lt;code>Continue matching subsequent sibling nodes&lt;/code> option is enabled for a specific policy, then evaluation continues even after one or more matches. A parent policy’s configuration settings and contact point information govern the behavior of an alert that does not match any of the child policies. A root policy governs any alert that does not match a specific policy.&lt;/p></description></item><item><title>Alert groups</title><link>https://grafana.com/docs/grafana/v9.0/alerting/alert-groups/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/alert-groups/</guid><content><![CDATA[&lt;h1 id=&#34;alert-groups&#34;&gt;Alert groups&lt;/h1&gt;
&lt;p&gt;Alert groups show grouped alerts from an Alertmanager instance. By default, the alerts are grouped by the label keys for the root policy in &lt;a href=&#34;../notifications/&#34;&gt;notification policies&lt;/a&gt;. Grouping common alerts into a single alert group prevents duplicate alerts from being fired.&lt;/p&gt;
&lt;p&gt;For more information, see:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;view-alert-grouping/&#34;&gt;View alert groupings&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;filter-alerts/&#34;&gt;Filter alerts by group&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="alert-groups">Alert groups&lt;/h1>
&lt;p>Alert groups show grouped alerts from an Alertmanager instance. By default, the alerts are grouped by the label keys for the root policy in &lt;a href="../notifications/">notification policies&lt;/a>. Grouping common alerts into a single alert group prevents duplicate alerts from being fired.&lt;/p></description></item><item><title> Alerting high availability</title><link>https://grafana.com/docs/grafana/v9.0/alerting/high-availability/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/high-availability/</guid><content><![CDATA[&lt;h1 id=&#34;about-alerting-high-availability&#34;&gt;About alerting high availability&lt;/h1&gt;
&lt;p&gt;The Grafana Alerting system has two main components: a &lt;code&gt;Scheduler&lt;/code&gt; and an internal &lt;code&gt;Alertmanager&lt;/code&gt;. The &lt;code&gt;Scheduler&lt;/code&gt; evaluates your &lt;a href=&#34;../fundamentals/evaluate-grafana-alerts/&#34;&gt;alert rules&lt;/a&gt;, while the internal Alertmanager manages &lt;strong&gt;routing&lt;/strong&gt; and &lt;strong&gt;grouping&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;When running Grafana Alerting in high availability, the operational mode of the scheduler remains unaffected, and each Grafana instance evaluates all alerts. The operational change happens in the Alertmanager when it deduplicates alert notifications across Grafana instances.&lt;/p&gt;
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p docs-image--no-shadow&#34;
    style=&#34;max-width: 750px;&#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link captioned&#34;
        href=&#34;/static/img/docs/alerting/unified/high-availability-ua.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload mb-0&#34;
          data-src=&#34;/static/img/docs/alerting/unified/high-availability-ua.png&#34;data-srcset=&#34;/static/img/docs/alerting/unified/high-availability-ua.png?w=320 320w, /static/img/docs/alerting/unified/high-availability-ua.png?w=550 550w, /static/img/docs/alerting/unified/high-availability-ua.png?w=750 750w, /static/img/docs/alerting/unified/high-availability-ua.png?w=900 900w, /static/img/docs/alerting/unified/high-availability-ua.png?w=1040 1040w, /static/img/docs/alerting/unified/high-availability-ua.png?w=1240 1240w, /static/img/docs/alerting/unified/high-availability-ua.png?w=1920 1920w&#34;data-sizes=&#34;auto&#34;alt=&#34;High availability&#34;width=&#34;828&#34;height=&#34;262&#34;title=&#34;High availability&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;/static/img/docs/alerting/unified/high-availability-ua.png&#34;
            alt=&#34;High availability&#34;width=&#34;828&#34;height=&#34;262&#34;title=&#34;High availability&#34;class=&#34;docs-image--no-shadow&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;figcaption class=&#34;w-100p caption text-gray-13  &#34;&gt;High availability&lt;/figcaption&gt;&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;The coordination between Grafana instances happens via &lt;a href=&#34;https://en.wikipedia.org/wiki/Gossip_protocol&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;a Gossip protocol&lt;/a&gt;. Alerts are not gossiped between instances and each scheduler delivers the same volume of alerts to each Alertmanager.&lt;/p&gt;
&lt;p&gt;The two types of messages gossiped between Grafana instances are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Notification logs: Who (which instance) notified what (which alert).&lt;/li&gt;
&lt;li&gt;Silences: If an alert should fire or not.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The notification logs and silences are persisted in the database periodically and during a graceful Grafana shut down.&lt;/p&gt;
&lt;p&gt;For configuration instructions, refer to &lt;a href=&#34;enable-alerting-ha/&#34;&gt;enable alerting high availability&lt;/a&gt;.&lt;/p&gt;
]]></content><description>&lt;h1 id="about-alerting-high-availability">About alerting high availability&lt;/h1>
&lt;p>The Grafana Alerting system has two main components: a &lt;code>Scheduler&lt;/code> and an internal &lt;code>Alertmanager&lt;/code>. The &lt;code>Scheduler&lt;/code> evaluates your &lt;a href="../fundamentals/evaluate-grafana-alerts/">alert rules&lt;/a>, while the internal Alertmanager manages &lt;strong>routing&lt;/strong> and &lt;strong>grouping&lt;/strong>.&lt;/p></description></item><item><title>Silences</title><link>https://grafana.com/docs/grafana/v9.0/alerting/silences/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/silences/</guid><content><![CDATA[&lt;h1 id=&#34;about-alerting-silences&#34;&gt;About alerting silences&lt;/h1&gt;
&lt;p&gt;Use silences to stop notifications from one or more alerting rules. Silences do not prevent alert rules from being evaluated. Nor do they not stop alerting instances from being shown in the user interface. Silences only stop notifications from getting created. A silence lasts for only a specified window of time.&lt;/p&gt;
&lt;p&gt;You can configure Grafana managed silences as well as silences for an &lt;a href=&#34;../../datasources/alertmanager/&#34;&gt;external Alertmanager data source&lt;/a&gt;. For more information, see &lt;a href=&#34;../fundamentals/alertmanager/&#34;&gt;Alertmanager&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;See also:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;../fundamentals/annotation-label/labels-and-label-matchers/&#34;&gt;How label matching works&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;create-silence/&#34;&gt;Create a silence&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;linking-to-silence-form/&#34;&gt;Create a URL to link to a silence form&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;edit-silence/&#34;&gt;Edit silences&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;remove-silence/&#34;&gt;Remove silences&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="about-alerting-silences">About alerting silences&lt;/h1>
&lt;p>Use silences to stop notifications from one or more alerting rules. Silences do not prevent alert rules from being evaluated. Nor do they not stop alerting instances from being shown in the user interface. Silences only stop notifications from getting created. A silence lasts for only a specified window of time.&lt;/p></description></item><item><title>Limitations</title><link>https://grafana.com/docs/grafana/v9.0/alerting/alerting-limitations/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/alerting-limitations/</guid><content><![CDATA[&lt;h1 id=&#34;limitations&#34;&gt;Limitations&lt;/h1&gt;
&lt;h2 id=&#34;limited-rule-sources-support&#34;&gt;Limited rule sources support&lt;/h2&gt;
&lt;p&gt;Grafana Alerting can retrieve alerting and recording rules &lt;strong&gt;stored&lt;/strong&gt; in most available Prometheus, Loki, Mimir, and Alertmanager compatible data sources.&lt;/p&gt;
&lt;p&gt;It does not support reading or writing alerting rules from any other data sources but the ones previously mentioned at this time.&lt;/p&gt;
&lt;h2 id=&#34;prometheus-version-support&#34;&gt;Prometheus version support&lt;/h2&gt;
&lt;p&gt;We support the latest two minor versions of both Prometheus and Alertmanager. We cannot guarantee that older versions will work.&lt;/p&gt;
&lt;p&gt;As an example, if the current Prometheus version is &lt;code&gt;2.31.1&lt;/code&gt;, we support &amp;gt;= &lt;code&gt;2.29.0&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;grafana-is-not-an-alert-receiver&#34;&gt;Grafana is not an alert receiver&lt;/h2&gt;
&lt;p&gt;Grafana is not an alert receiver; is it an alert generator. This means that Grafana cannot receive alerts from anything other than its internal alert generator.&lt;/p&gt;
&lt;p&gt;Receiving alerts from Prometheus (or anything else) is not supported at the time.&lt;/p&gt;
&lt;p&gt;For more information, refer to &lt;a href=&#34;https://github.com/grafana/grafana/discussions/45773&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;this GitHub discussion&lt;/a&gt;.&lt;/p&gt;
]]></content><description>&lt;h1 id="limitations">Limitations&lt;/h1>
&lt;h2 id="limited-rule-sources-support">Limited rule sources support&lt;/h2>
&lt;p>Grafana Alerting can retrieve alerting and recording rules &lt;strong>stored&lt;/strong> in most available Prometheus, Loki, Mimir, and Alertmanager compatible data sources.&lt;/p></description></item><item><title>Performance considerations</title><link>https://grafana.com/docs/grafana/v9.0/alerting/performance/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/performance/</guid><content><![CDATA[&lt;h1 id=&#34;alerting-performance-considerations&#34;&gt;Alerting performance considerations&lt;/h1&gt;
&lt;p&gt;Grafana Alerting supports multi-dimensional alerting, where one alert rule can generate many alerts. For example, you can configure an alert rule to fire an alert every time the CPU of individual VMs max out. This topic discusses performance considerations resulting from multi-dimensional alerting.&lt;/p&gt;
&lt;p&gt;Evaluating alerting rules consumes RAM and CPU to compute the output of an alerting query, and network resources to send alert notifications and write the results to the Grafana SQL database. The configuration of individual alert rules affects the resource consumption and, therefore, the maximum number of rules a given configuration can support.&lt;/p&gt;
&lt;p&gt;The following section provides a list of alerting performance considerations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Frequency of rule evaluation consideration. The &amp;ldquo;Evaluate Every&amp;rdquo; property of an alert rule controls the frequency of rule evaluation. We recommend using the lowest acceptable evaluation frequency to support more concurrent rules.&lt;/li&gt;
&lt;li&gt;Cardinality of the rule&amp;rsquo;s result set. For example, suppose you are monitoring API response errors for every API path, on every VM in your fleet. This set has a cardinality of &lt;em&gt;n&lt;/em&gt; number of paths multiplied by &lt;em&gt;v&lt;/em&gt; number of VMs. You can reduce the cardinality of a result set - perhaps by monitoring errors-per-VM instead of for each path per VM.&lt;/li&gt;
&lt;li&gt;Complexity of the alerting query consideration. Queries that data sources can process and respond to quickly consume fewer resources. Although this consideration is less important than the other considerations listed above, if you have reduced those as much as possible, looking at individual query performance could make a difference.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each evaluation of an alert rule generates a set of alert instances; one for each member of the result set. The state of all the instances is written to the &lt;code&gt;alert_instance&lt;/code&gt; table in Grafana&amp;rsquo;s SQL database.&lt;/p&gt;
&lt;p&gt;Grafana Alerting exposes a metric, &lt;code&gt;grafana_alerting_rule_evaluations_total&lt;/code&gt; that counts the number of alert rule evaluations. To get a feel for the influence of rule evaluations on your Grafana instance, you can observe the rate of evaluations and compare it with resource consumption. In a Prometheus-compatible database, you can use the query &lt;code&gt;rate(grafana_alerting_rule_evaluations_total[5m])&lt;/code&gt; to compute the rate over 5 minute windows of time. It&amp;rsquo;s important to remember that this isn&amp;rsquo;t the full picture of rule evaluation. For example, the load will be unevenly distributed if you have some rules that evaluate every 10 seconds, and others every 30 minutes.&lt;/p&gt;
&lt;p&gt;These factors all affect the load on the Grafana instance, but you should also be aware of the performance impact that evaluating these rules has on your data sources. Alerting queries are often the vast majority of queries handled by monitoring databases, so the same load factors that affect the Grafana instance affect them as well.&lt;/p&gt;
]]></content><description>&lt;h1 id="alerting-performance-considerations">Alerting performance considerations&lt;/h1>
&lt;p>Grafana Alerting supports multi-dimensional alerting, where one alert rule can generate many alerts. For example, you can configure an alert rule to fire an alert every time the CPU of individual VMs max out. This topic discusses performance considerations resulting from multi-dimensional alerting.&lt;/p></description></item><item><title>Images in notifications</title><link>https://grafana.com/docs/grafana/v9.0/alerting/images-in-notifications/</link><pubDate>Sun, 12 Apr 2026 12:30:02 +0000</pubDate><guid>https://grafana.com/docs/grafana/v9.0/alerting/images-in-notifications/</guid><content><![CDATA[&lt;h1 id=&#34;images-in-notifications&#34;&gt;Images in notifications&lt;/h1&gt;
&lt;p&gt;Images in notifications helps recipients of alert notifications better understand why an alert has fired or resolved by including an image of the panel associated with the Grafana managed alert rule.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Images in notifications are not available for Grafana Mimir and Loki managed alert rules, or when Grafana is set up to send alert notifications to an external Alertmanager.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;If Grafana is set up to send images in notifications, it takes a screenshot of the panel for the Grafana managed alert rule when either of the following happen:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The alert rule transitions from pending to firing&lt;/li&gt;
&lt;li&gt;The alert rule transitions from firing to OK&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Grafana does not support images for alert rules that are not associated with a panel. An alert rule is associated with a panel when it has both Dashboard UID and Panel ID annotations.&lt;/p&gt;
&lt;p&gt;Images are stored in the &lt;a href=&#34;../../setup-grafana/configure-grafana/#paths&#34;&gt;data&lt;/a&gt; path and so Grafana must have write-access to this path. If Grafana cannot write to this path then screenshots cannot be saved to disk and an error will be logged for each failed screenshot attempt. In addition to storing images on disk, Grafana can also store the image in an external image store such as Amazon S3, Azure Blob Storage, Google Cloud Storage and even Grafana where screenshots are stored in &lt;code&gt;public/img/attachments&lt;/code&gt;. Screenshots older than &lt;code&gt;temp_data_lifetime&lt;/code&gt; are deleted from disk but not the external image store. If Grafana is the external image store then screenshots are deleted from &lt;code&gt;data&lt;/code&gt; but not from &lt;code&gt;public/img/attachments&lt;/code&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: It is recommended that you use an external image store, as not all contact points support uploading images from disk. It is also possible that the image on disk is deleted before an alert notification is sent if &lt;code&gt;temp_data_lifetime&lt;/code&gt; is less than the &lt;code&gt;group_wait&lt;/code&gt; and &lt;code&gt;group_interval&lt;/code&gt; options used in Alertmanager.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2 id=&#34;requirements&#34;&gt;Requirements&lt;/h2&gt;
&lt;p&gt;To use images in notifications, Grafana must be set up to use &lt;a href=&#34;../../setup-grafana/image-rendering/&#34;&gt;image rendering&lt;/a&gt;. It is also recommended that Grafana is set up to upload images to an &lt;a href=&#34;../../setup-grafana/configure-grafana/#external_image_storage&#34;&gt;external image store&lt;/a&gt; such as Amazon S3, Azure Blob Storage, Google Cloud Storage or even Grafana.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;If Grafana has been set up to use &lt;a href=&#34;../../setup-grafana/image-rendering/&#34;&gt;image rendering&lt;/a&gt; images in notifications can be turned on via the &lt;code&gt;capture&lt;/code&gt; option in &lt;code&gt;[unified_alerting.screenshots]&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Enable screenshots in notifications. This option requires the Grafana Image Renderer plugin.
# For more information on configuration options, refer to [rendering].
capture = true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It is recommended that &lt;code&gt;max_concurrent_screenshots&lt;/code&gt; is set to a value that is less than or equal to &lt;code&gt;concurrent_render_request_limit&lt;/code&gt;. The default value for both &lt;code&gt;max_concurrent_screenshots&lt;/code&gt; and &lt;code&gt;concurrent_render_request_limit&lt;/code&gt; is &lt;code&gt;5&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# The maximum number of screenshots that can be taken at the same time. This option is different from
# concurrent_render_request_limit as max_concurrent_screenshots sets the number of concurrent screenshots
# that can be taken at the same time for all firing alerts where as concurrent_render_request_limit sets
# the total number of concurrent screenshots across all Grafana services.
max_concurrent_screenshots = 5
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If Grafana has been set up to use an external image store, &lt;code&gt;upload_external_image_storage&lt;/code&gt; should be set to &lt;code&gt;true&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Uploads screenshots to the local Grafana server or remote storage such as Azure, S3 and GCS. Please
# see [external_image_storage] for further configuration options. If this option is false, screenshots
# will be persisted to disk for up to temp_data_lifetime.
upload_external_image_storage = false
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Restart Grafana for the changes to take affect.&lt;/p&gt;
&lt;h2 id=&#34;supported-notifiers&#34;&gt;Supported notifiers&lt;/h2&gt;
&lt;p&gt;Images in notifications are supported in the following notifiers and additional support will be added in the future:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Name&lt;/th&gt;
              &lt;th&gt;Upload images from disk&lt;/th&gt;
              &lt;th&gt;Include images from URL&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;DingDing&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Discord&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Email&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Google Hangouts Chat&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Kafka&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Line&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Microsoft Teams&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Opsgenie&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Pagerduty&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Prometheus Alertmanager&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Pushover&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Sensu Go&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Slack&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Telegram&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Threema&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;VictorOps&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Webhook&lt;/td&gt;
              &lt;td&gt;No&lt;/td&gt;
              &lt;td&gt;Yes&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;Include images from URL refers to using the external image store.&lt;/p&gt;
&lt;h2 id=&#34;metrics&#34;&gt;Metrics&lt;/h2&gt;
&lt;p&gt;Grafana provides the following metrics to observe the performance and failure rate of images in notifications.
For example, if a screenshot could not be taken within the expected time (10 seconds) then the counter &lt;code&gt;grafana_screenshot_failures_total&lt;/code&gt; is updated.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;grafana_screenshot_cache_hits_total&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;grafana_screenshot_cache_misses_total&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;grafana_screenshot_duration_seconds&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;grafana_screenshot_failures_total&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;grafana_screenshot_successes_total&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;grafana_screenshot_upload_failures_total&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;grafana_screenshot_upload_successes_total&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;limitations&#34;&gt;Limitations&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Images in notifications are not available for Grafana Mimir and Loki managed alert rules, or when Grafana is set up to send alert notifications to an external Alertmanager.&lt;/li&gt;
&lt;li&gt;When alerts generated by different alert rules are sent in a single notification, there may be screenshots for each alert rule. This happens if an alert group contains multiple alerting rules. The order the images are attached is random. If you need to guarantee the ordering of images, make sure that your alert groups contain a single alerting rule.&lt;/li&gt;
&lt;li&gt;Some contact points only handle a single image. In this case, the first image associated with an alert will be attached. Because the ordering is random, this may not always be an image for the same alert rule. If you need to guarantee you receive a screenshot for a particular rule, make sure that your alert groups contain a single alerting rule.&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="images-in-notifications">Images in notifications&lt;/h1>
&lt;p>Images in notifications helps recipients of alert notifications better understand why an alert has fired or resolved by including an image of the panel associated with the Grafana managed alert rule.&lt;/p></description></item></channel></rss>