<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Set up Alerting on Grafana Labs</title><link>https://grafana.com/docs/grafana/v11.0/alerting/set-up/</link><description>Recent content in Set up Alerting on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/grafana/v11.0/alerting/set-up/index.xml" rel="self" type="application/rss+xml"/><item><title>Add an external Alertmanager</title><link>https://grafana.com/docs/grafana/v11.0/alerting/set-up/configure-alertmanager/</link><pubDate>Fri, 06 Mar 2026 07:23:54 +0000</pubDate><guid>https://grafana.com/docs/grafana/v11.0/alerting/set-up/configure-alertmanager/</guid><content><![CDATA[&lt;h1 id=&#34;add-an-external-alertmanager&#34;&gt;Add an external Alertmanager&lt;/h1&gt;
&lt;p&gt;Set up Grafana to use an external Alertmanager as the single Alertmanager that receives all of your alerts. This external Alertmanager can then be configured and administered from within Grafana itself.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Grafana Alerting does not support sending alerts to the AWS Managed Service for Prometheus due to the lack of sigv4 support in Prometheus.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Once you have added the Alertmanager, you can use the Grafana Alerting UI to manage silences, contact points, and notification policies. A drop-down option on these pages allows you to switch between Alertmanagers.&lt;/p&gt;
&lt;p&gt;External Alertmanagers are configured as data sources from the main Grafana navigation menu. This enables you to manage the contact points and notification policies of external Alertmanagers from within Grafana, and it also encrypts HTTP basic authentication credentials.&lt;/p&gt;
&lt;p&gt;To add an external Alertmanager, complete the following steps.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Connections&lt;/strong&gt; in the left-side menu.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the Connections page, search for &lt;code&gt;Alertmanager&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the &lt;strong&gt;Create a new data source&lt;/strong&gt; button.&lt;/p&gt;
&lt;p&gt;If you don&amp;rsquo;t see this button, you may need to install the plugin, relaunch your Cloud instance, and then repeat steps 1 and 2.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fill out the fields on the page, as required.&lt;/p&gt;
&lt;p&gt;If you are provisioning your data source, set the flag &lt;code&gt;handleGrafanaManagedAlerts&lt;/code&gt; in the &lt;code&gt;jsonData&lt;/code&gt; field to &lt;code&gt;true&lt;/code&gt; to send Grafana-managed alerts to this Alertmanager.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Prometheus, Grafana Mimir, and Cortex implementations of Alertmanager are supported. For Prometheus, contact points and notification policies are read-only in the Grafana Alerting UI.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Save &amp;amp; test&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
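&lt;p&gt;As a sketch of the provisioning option mentioned in step 4: the YAML below is an illustration only, not an exact reference; the data source name, URL, and &lt;code&gt;implementation&lt;/code&gt; value are assumptions to replace with your own values.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: 1
datasources:
  # Hypothetical example: adjust name, url, and implementation for your setup
  - name: External Alertmanager
    type: alertmanager
    access: proxy
    url: http://localhost:9093
    jsonData:
      # Implementation of the target Alertmanager: prometheus, mimir, or cortex
      implementation: prometheus
      # Send Grafana-managed alerts to this Alertmanager
      handleGrafanaManagedAlerts: true&lt;/code&gt;&lt;/pre&gt;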
]]></content><description>&lt;h1 id="add-an-external-alertmanager">Add an external Alertmanager&lt;/h1>
&lt;p>Set up Grafana to use an external Alertmanager as a single Alertmanager to receive all of your alerts. This external Alertmanager can then be configured and administered from within Grafana itself.&lt;/p></description></item><item><title>Configure alert state history</title><link>https://grafana.com/docs/grafana/v11.0/alerting/set-up/configure-alert-state-history/</link><pubDate>Fri, 06 Mar 2026 07:23:54 +0000</pubDate><guid>https://grafana.com/docs/grafana/v11.0/alerting/set-up/configure-alert-state-history/</guid><content><![CDATA[&lt;h1 id=&#34;configure-alert-state-history&#34;&gt;Configure alert state history&lt;/h1&gt;
&lt;p&gt;Starting with Grafana 10, Alerting can record all alert rule state changes for your Grafana managed alert rules in a Loki instance.&lt;/p&gt;
&lt;p&gt;This allows you to explore the behavior of your alert rules in the Grafana Explore view and enhances the existing state history modal with a powerful new visualization.&lt;/p&gt;
&lt;h2 id=&#34;configuring-loki&#34;&gt;Configuring Loki&lt;/h2&gt;
&lt;p&gt;To set up alert state history, make sure that you have a Loki instance Grafana can write data to. The default settings might need some tweaking, as the state history modal might query up to 30 days of data.&lt;/p&gt;
&lt;p&gt;The following change to the default configuration should work for most instances, but we recommend reviewing the full Loki configuration settings and adjusting them to your needs.&lt;/p&gt;
&lt;p&gt;Because this can impact the performance of an existing Loki instance, we recommend using a separate Loki instance for alert state history.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;limits_config:
  split_queries_by_interval: &amp;#39;24h&amp;#39;
  max_query_parallelism: 32&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h2 id=&#34;configuring-grafana&#34;&gt;Configuring Grafana&lt;/h2&gt;
&lt;p&gt;Grafana needs some additional settings in its configuration file to work with alert state history.&lt;/p&gt;
&lt;p&gt;The example below instructs Grafana to write alert state history to a local Loki instance:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;TOML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting.state_history]
enabled = true
backend = &amp;#34;loki&amp;#34;
loki_remote_url = &amp;#34;http://localhost:3100&amp;#34;

[feature_toggles]
enable = alertStateHistoryLokiSecondary, alertStateHistoryLokiPrimary, alertStateHistoryLokiOnly&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h2 id=&#34;adding-the-loki-data-source&#34;&gt;Adding the Loki data source&lt;/h2&gt;
&lt;p&gt;See our instructions on &lt;a href=&#34;/docs/grafana/latest/administration/data-source-management/&#34;&gt;adding a data source&lt;/a&gt;.&lt;/p&gt;
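&lt;p&gt;If you provision data sources from files, a minimal Loki data source might look like the following sketch (the name and URL are assumptions; point the URL at your own Loki instance):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: 1
datasources:
  # Hypothetical name and URL: use the Loki instance configured above
  - name: Loki (alert state history)
    type: loki
    access: proxy
    url: http://localhost:3100&lt;/code&gt;&lt;/pre&gt;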
&lt;h2 id=&#34;querying-the-history&#34;&gt;Querying the history&lt;/h2&gt;
&lt;p&gt;If everything is set up correctly, you can use the Grafana Explore view to start querying the Loki data source.&lt;/p&gt;
&lt;p&gt;A simple litmus test to see if data is being written correctly into the Loki instance is the following query:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;LogQL&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-logql&#34;&gt;{ from=&amp;#34;state-history&amp;#34; } | json&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
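&lt;p&gt;From there, you can narrow the results with field filters. A hedged example, assuming the state history entries expose &lt;code&gt;current&lt;/code&gt; and &lt;code&gt;previous&lt;/code&gt; state fields after the &lt;code&gt;json&lt;/code&gt; stage:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-logql&#34;&gt;{ from=&amp;#34;state-history&amp;#34; } | json | current = &amp;#34;Alerting&amp;#34;&lt;/code&gt;&lt;/pre&gt;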
]]></content><description>&lt;h1 id="configure-alert-state-history">Configure alert state history&lt;/h1>
&lt;p>Starting with Grafana 10, Alerting can record all alert rule state changes for your Grafana managed alert rules in a Loki instance.&lt;/p></description></item><item><title>Provision Alerting resources</title><link>https://grafana.com/docs/grafana/v11.0/alerting/set-up/provision-alerting-resources/</link><pubDate>Fri, 06 Mar 2026 07:23:54 +0000</pubDate><guid>https://grafana.com/docs/grafana/v11.0/alerting/set-up/provision-alerting-resources/</guid><content><![CDATA[&lt;h1 id=&#34;provision-alerting-resources&#34;&gt;Provision Alerting resources&lt;/h1&gt;
&lt;p&gt;Alerting infrastructure is often complex, with many pieces of the pipeline that often live in different places. Scaling this across multiple teams and organizations is an especially challenging task. Importing and exporting (or provisioning) your alerting resources in Grafana Alerting makes this process easier by enabling you to create, manage, and maintain your alerting data in a way that best suits your organization.&lt;/p&gt;
&lt;p&gt;You can import alert rules, contact points, notification policies, mute timings, and templates.&lt;/p&gt;
&lt;p&gt;You cannot edit imported alerting resources in the Grafana UI the way you can resources that were not imported. Imported contact points, notification policies, templates, and mute timings can only be edited in the source where they were created. For example, if you manage your alerting resources using files on disk, you cannot edit the data in Terraform or from within Grafana.&lt;/p&gt;
&lt;h2 id=&#34;import-alerting-resources&#34;&gt;Import alerting resources&lt;/h2&gt;
&lt;p&gt;Choose from the options below to import (or provision) your Grafana Alerting resources.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;
    &lt;a href=&#34;/docs/grafana/v11.0/alerting/set-up/provision-alerting-resources/file-provisioning/&#34;&gt;Use configuration files to provision your alerting resources&lt;/a&gt;, such as alert rules and contact points, through files on disk.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;You cannot edit provisioned resources from files in the Grafana UI.&lt;/li&gt;
&lt;li&gt;Provisioning with configuration files is not available in Grafana Cloud.&lt;/li&gt;
&lt;/ul&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use 
    &lt;a href=&#34;/docs/grafana/v11.0/alerting/set-up/provision-alerting-resources/terraform-provisioning/&#34;&gt;Terraform to provision alerting resources&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the 
    &lt;a href=&#34;/docs/grafana/v11.0/alerting/set-up/provision-alerting-resources/http-api-provisioning/&#34;&gt;Alerting provisioning HTTP API&lt;/a&gt; to manage alerting resources.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;The Alerting provisioning HTTP API can be used to create, modify, and delete resources for Grafana-managed alerts.&lt;/p&gt;
&lt;p&gt;To manage resources related to data source-managed alerts, including recording rules, use the Mimir or Cortex tool.&lt;/p&gt;
&lt;p&gt;The JSON output from the majority of Alerting HTTP endpoints isn&amp;rsquo;t compatible with provisioning via configuration files.&lt;/p&gt;
&lt;p&gt;If you need the alerting resources for file provisioning, use 
    &lt;a href=&#34;/docs/grafana/v11.0/alerting/set-up/provision-alerting-resources/export-alerting-resources/#export-api-endpoints&#34;&gt;Export Alerting endpoints&lt;/a&gt; to return or download them in provisioning format.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;export-alerting-resources&#34;&gt;Export alerting resources&lt;/h2&gt;
&lt;p&gt;You can export both manually created and provisioned alerting resources. You can also edit and export an alert rule without applying the changes.&lt;/p&gt;
&lt;p&gt;For detailed instructions on the various export options, refer to 
    &lt;a href=&#34;/docs/grafana/v11.0/alerting/set-up/provision-alerting-resources/export-alerting-resources/&#34;&gt;Export alerting resources&lt;/a&gt;.&lt;/p&gt;
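&lt;p&gt;As an illustration, the export endpoints can return alert rules in provisioning format over HTTP. The host and token below are placeholders for your own instance and credentials:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# Placeholder host and token; the format parameter accepts json or yaml
curl -H &amp;#34;Authorization: Bearer &amp;lt;service-account-token&amp;gt;&amp;#34; \
  &amp;#34;http://localhost:3000/api/v1/provisioning/alert-rules/export?format=yaml&amp;#34;&lt;/code&gt;&lt;/pre&gt;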
&lt;h2 id=&#34;view-provisioned-alerting-resources&#34;&gt;View provisioned alerting resources&lt;/h2&gt;
&lt;p&gt;To view your provisioned resources in Grafana, complete the following steps.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Open your Grafana instance.&lt;/li&gt;
&lt;li&gt;Navigate to Alerting.&lt;/li&gt;
&lt;li&gt;Click an alerting resource folder, for example, Alert rules.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Provisioned resources are labeled &lt;strong&gt;Provisioned&lt;/strong&gt;, so that it is clear that they were not created manually.&lt;/p&gt;
]]></content><description>&lt;h1 id="provision-alerting-resources">Provision Alerting resources&lt;/h1>
&lt;p>Alerting infrastructure is often complex, with many pieces of the pipeline that often live in different places. Scaling this across multiple teams and organizations is an especially challenging task. Importing and exporting (or provisioning) your alerting resources in Grafana Alerting makes this process easier by enabling you to create, manage, and maintain your alerting data in a way that best suits your organization.&lt;/p></description></item><item><title>Configure high availability</title><link>https://grafana.com/docs/grafana/v11.0/alerting/set-up/configure-high-availability/</link><pubDate>Fri, 06 Mar 2026 07:23:54 +0000</pubDate><guid>https://grafana.com/docs/grafana/v11.0/alerting/set-up/configure-high-availability/</guid><content><![CDATA[&lt;h1 id=&#34;configure-high-availability&#34;&gt;Configure high availability&lt;/h1&gt;
&lt;p&gt;Grafana Alerting uses the Prometheus model of separating the evaluation of alert rules from the delivery of notifications. In this model, alert rules are evaluated in the alert generator and notifications are delivered by the alert receiver. In Grafana Alerting, the alert generator is the Scheduler and the receiver is the Alertmanager.&lt;/p&gt;
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p docs-image--no-shadow&#34;
    style=&#34;max-width: 750px;&#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link captioned&#34;
        href=&#34;/static/img/docs/alerting/unified/high-availability-ua.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload mb-0&#34;
          data-src=&#34;/static/img/docs/alerting/unified/high-availability-ua.png&#34;data-srcset=&#34;/static/img/docs/alerting/unified/high-availability-ua.png?w=320 320w, /static/img/docs/alerting/unified/high-availability-ua.png?w=550 550w, /static/img/docs/alerting/unified/high-availability-ua.png?w=750 750w, /static/img/docs/alerting/unified/high-availability-ua.png?w=900 900w, /static/img/docs/alerting/unified/high-availability-ua.png?w=1040 1040w, /static/img/docs/alerting/unified/high-availability-ua.png?w=1240 1240w, /static/img/docs/alerting/unified/high-availability-ua.png?w=1920 1920w&#34;data-sizes=&#34;auto&#34;alt=&#34;High availability&#34;width=&#34;828&#34;height=&#34;262&#34;title=&#34;High availability&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;/static/img/docs/alerting/unified/high-availability-ua.png&#34;
            alt=&#34;High availability&#34;width=&#34;828&#34;height=&#34;262&#34;title=&#34;High availability&#34;class=&#34;docs-image--no-shadow&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;figcaption class=&#34;w-100p caption text-gray-13  &#34;&gt;High availability&lt;/figcaption&gt;&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;When running multiple instances of Grafana, all alert rules are evaluated on all instances. You can think of the evaluation of alert rules as being duplicated by the number of running Grafana instances. This is how Grafana Alerting makes sure that as long as at least one Grafana instance is working, alert rules will still be evaluated and notifications for alerts will still be sent.&lt;/p&gt;
&lt;p&gt;You can see this duplication in the state history, which is a good way to confirm whether high availability is working.&lt;/p&gt;
&lt;p&gt;While the alert generator evaluates all alert rules on all instances, the alert receiver makes a best-effort attempt to avoid sending duplicate notifications. Alertmanager chooses availability over consistency, which may result in occasional duplicated or out-of-order notifications. It takes the opinion that duplicate or out-of-order notifications are better than no notifications.&lt;/p&gt;
&lt;p&gt;The Alertmanager uses a gossip protocol to share information about notifications between Grafana instances. It also gossips silences, which means a silence created on one Grafana instance is replicated to all other Grafana instances. Both notifications and silences are persisted to the database periodically, and during graceful shut down.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;If you use a mix of &lt;code&gt;execute_alerts=false&lt;/code&gt; and &lt;code&gt;execute_alerts=true&lt;/code&gt; on the HA nodes, the instances with &lt;code&gt;execute_alerts=false&lt;/code&gt; will not show any alert status, since alert state is not shared amongst the Grafana instances.
This is because the HA settings (&lt;code&gt;ha_peers&lt;/code&gt;, and so on) apply only to alert notification delivery: the de-duplication of alert notifications and silences mentioned above.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;enable-alerting-high-availability-using-memberlist&#34;&gt;Enable alerting high availability using Memberlist&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Before you begin&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Since gossiping of notifications and silences uses both TCP and UDP port &lt;code&gt;9094&lt;/code&gt;, ensure that each Grafana instance is able to accept incoming connections on these ports.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To enable high availability support:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the &lt;code&gt;[unified_alerting]&lt;/code&gt; section.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;ha_peers&lt;/code&gt; to the list of hosts for each Grafana instance in the cluster (using the format &lt;code&gt;host:port&lt;/code&gt;), for example, &lt;code&gt;ha_peers=10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094&lt;/code&gt;.
You must add at least one (1) Grafana instance to &lt;code&gt;ha_peers&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;ha_listen_address&lt;/code&gt; to the instance IP address using the format &lt;code&gt;host:port&lt;/code&gt; (or the &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/pods/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Pod&amp;rsquo;s&lt;/a&gt; IP when using Kubernetes).
By default, it listens on all interfaces (&lt;code&gt;0.0.0.0&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;ha_peer_timeout&lt;/code&gt; in the &lt;code&gt;[unified_alerting]&lt;/code&gt; section of custom.ini to specify how long to wait for an instance to send a notification via the Alertmanager. The default is 15s, but you may need to increase it if your Grafana servers are located in different geographic regions or if the network latency between them is high.&lt;/li&gt;
&lt;/ol&gt;
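&lt;p&gt;Putting the steps above together, a &lt;code&gt;custom.ini&lt;/code&gt; fragment might look like the following sketch; the peer addresses are placeholders for your own instances:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting]
enabled = true
# Listen on all interfaces; narrow this to the instance IP if needed
ha_listen_address = &amp;#34;0.0.0.0:9094&amp;#34;
# Placeholder peers: one host:port entry per Grafana instance
ha_peers = &amp;#34;10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094&amp;#34;
ha_peer_timeout = 15s&lt;/code&gt;&lt;/pre&gt;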
&lt;h2 id=&#34;enable-alerting-high-availability-using-redis&#34;&gt;Enable alerting high availability using Redis&lt;/h2&gt;
&lt;p&gt;As an alternative to Memberlist, you can use Redis for high availability. This is useful if you want to have a central
database for HA and cannot support the meshing of all Grafana servers.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Make sure you have a Redis server that supports pub/sub. If you use a proxy in front of your Redis cluster, make sure the proxy supports pub/sub.&lt;/li&gt;
&lt;li&gt;In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the &lt;code&gt;[unified_alerting]&lt;/code&gt; section.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;ha_redis_address&lt;/code&gt; to the Redis server address Grafana should connect to.&lt;/li&gt;
&lt;li&gt;Optional: If authentication is enabled on the Redis server, set the username and password using &lt;code&gt;ha_redis_username&lt;/code&gt; and &lt;code&gt;ha_redis_password&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Optional: Set &lt;code&gt;ha_redis_prefix&lt;/code&gt; to something unique if you plan to share the Redis server with multiple Grafana instances.&lt;/li&gt;
&lt;/ol&gt;
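&lt;p&gt;A sketch of the corresponding &lt;code&gt;custom.ini&lt;/code&gt; fragment, with placeholder address, credentials, and prefix:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting]
enabled = true
# Placeholder Redis address; the server must support pub/sub
ha_redis_address = &amp;#34;redis.example.com:6379&amp;#34;
# Optional: only if authentication is enabled on the Redis server
ha_redis_username = &amp;#34;grafana&amp;#34;
ha_redis_password = &amp;#34;your-password&amp;#34;
# Optional: unique prefix when sharing Redis across Grafana instances
ha_redis_prefix = &amp;#34;grafana-prod-&amp;#34;&lt;/code&gt;&lt;/pre&gt;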
&lt;p&gt;The following metrics, exposed by Grafana&amp;rsquo;s &lt;code&gt;/metrics&lt;/code&gt; endpoint, can be used for meta monitoring:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Metric&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_messages_received_total&lt;/td&gt;
              &lt;td&gt;Total number of cluster messages received.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_messages_received_size_total&lt;/td&gt;
              &lt;td&gt;Total size of cluster messages received.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_messages_sent_total&lt;/td&gt;
              &lt;td&gt;Total number of cluster messages sent.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_messages_sent_size_total&lt;/td&gt;
              &lt;td&gt;Total size of cluster messages sent.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_messages_publish_failures_total&lt;/td&gt;
              &lt;td&gt;Total number of messages that failed to be published.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_members&lt;/td&gt;
              &lt;td&gt;Current number of members in the cluster.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_peer_position&lt;/td&gt;
              &lt;td&gt;Position the Alertmanager instance believes it&amp;rsquo;s in. The position determines a peer&amp;rsquo;s behavior in the cluster.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_pings_seconds&lt;/td&gt;
              &lt;td&gt;Histogram of latencies for ping messages.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_pings_failures_total&lt;/td&gt;
              &lt;td&gt;Total number of failed pings.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h2 id=&#34;enable-alerting-high-availability-using-kubernetes&#34;&gt;Enable alerting high availability using Kubernetes&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;You can expose the pod IP &lt;a href=&#34;https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;through an environment variable&lt;/a&gt; via the container definition.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the port 9094 to the Grafana deployment:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;ports:
  - containerPort: 3000
    name: http-grafana
    protocol: TCP
  - containerPort: 9094
    name: grafana-alert
    protocol: TCP&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a headless service that returns the pod IP instead of the service IP, which is what &lt;code&gt;ha_peers&lt;/code&gt; needs:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: Service
metadata:
  name: grafana-alerting
  namespace: grafana
  labels:
    app.kubernetes.io/name: grafana-alerting
    app.kubernetes.io/part-of: grafana
spec:
  type: ClusterIP
  clusterIP: &amp;#39;None&amp;#39;
  ports:
    - port: 9094
  selector:
    app: grafana&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure your Grafana deployment has a label matching the selector, for example &lt;code&gt;app: grafana&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the following to grafana.ini:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;INI&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-ini&#34;&gt;[unified_alerting]
enabled = true
ha_listen_address = &amp;#34;${POD_IP}:9094&amp;#34;
ha_peers = &amp;#34;grafana-alerting.grafana:9094&amp;#34;
ha_advertise_address = &amp;#34;${POD_IP}:9094&amp;#34;
ha_peer_timeout = 15s&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
]]></content><description>&lt;h1 id="configure-high-availability">Configure high availability&lt;/h1>
&lt;p>Grafana Alerting uses the Prometheus model of separating the evaluation of alert rules from the delivering of notifications. In this model, the evaluation of alert rules is done in the alert generator and the delivering of notifications is done in the alert receiver. In Grafana Alerting, the alert generator is the Scheduler and the receiver is the Alertmanager.&lt;/p></description></item><item><title>Performance considerations and limitations</title><link>https://grafana.com/docs/grafana/v11.0/alerting/set-up/performance-limitations/</link><pubDate>Fri, 06 Mar 2026 07:23:54 +0000</pubDate><guid>https://grafana.com/docs/grafana/v11.0/alerting/set-up/performance-limitations/</guid><content><![CDATA[&lt;h1 id=&#34;performance-considerations-and-limitations&#34;&gt;Performance considerations and limitations&lt;/h1&gt;
&lt;p&gt;Grafana Alerting supports multi-dimensional alerting, where one alert rule can generate many alerts. For example, you can configure an alert rule to fire an alert every time the CPU of an individual VM maxes out. This topic discusses the performance considerations that result from multi-dimensional alerting.&lt;/p&gt;
&lt;p&gt;Evaluating alerting rules consumes RAM and CPU to compute the output of an alerting query, and network resources to send alert notifications and write the results to the Grafana SQL database. The configuration of individual alert rules affects the resource consumption and, therefore, the maximum number of rules a given configuration can support.&lt;/p&gt;
&lt;p&gt;The following section provides a list of alerting performance considerations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Frequency of rule evaluation. The &amp;ldquo;Evaluate Every&amp;rdquo; property of an alert rule controls how often the rule is evaluated. We recommend using the lowest acceptable evaluation frequency to support more concurrent rules.&lt;/li&gt;
&lt;li&gt;Cardinality of the rule&amp;rsquo;s result set. For example, suppose you are monitoring API response errors for every API path, on every VM in your fleet. This set has a cardinality of &lt;em&gt;n&lt;/em&gt; paths multiplied by &lt;em&gt;v&lt;/em&gt; VMs. You can reduce the cardinality of a result set, perhaps by monitoring errors per VM instead of per path per VM.&lt;/li&gt;
&lt;li&gt;Complexity of the alerting query. Queries that data sources can process and respond to quickly consume fewer resources. This consideration matters less than the two listed above, but once you have reduced those as much as possible, looking at individual query performance can still make a difference.&lt;/li&gt;
&lt;/ul&gt;
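&lt;p&gt;As an illustration of the cardinality point above, the following PromQL sketch assumes a hypothetical &lt;code&gt;http_errors_total&lt;/code&gt; counter with &lt;code&gt;instance&lt;/code&gt; and &lt;code&gt;path&lt;/code&gt; labels; the metric and label names are examples only:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;# One alert instance per (instance, path) pair: cardinality of n paths x v VMs
sum by (instance, path) (rate(http_errors_total[5m]))

# One alert instance per VM: cardinality of v VMs
sum by (instance) (rate(http_errors_total[5m]))&lt;/code&gt;&lt;/pre&gt;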
&lt;p&gt;Each evaluation of an alert rule generates a set of alert instances, one for each member of the result set. The state of all the instances is written to the &lt;code&gt;alert_instance&lt;/code&gt; table in Grafana&amp;rsquo;s SQL database. This volume of write-heavy operations can cause issues when using SQLite.&lt;/p&gt;
&lt;p&gt;Grafana Alerting exposes a metric, &lt;code&gt;grafana_alerting_rule_evaluations_total&lt;/code&gt;, that counts the number of alert rule evaluations. To get a feel for the influence of rule evaluations on your Grafana instance, you can observe the rate of evaluations and compare it with resource consumption. In a Prometheus-compatible database, you can use the query &lt;code&gt;rate(grafana_alerting_rule_evaluations_total[5m])&lt;/code&gt; to compute the rate over 5-minute windows. Keep in mind that this isn&amp;rsquo;t the full picture of rule evaluation: for example, the load is unevenly distributed if some rules evaluate every 10 seconds and others every 30 minutes.&lt;/p&gt;
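&lt;p&gt;For example, you can chart the evaluation rate and, assuming a companion failure counter named &lt;code&gt;grafana_alerting_rule_evaluation_failures_total&lt;/code&gt; is available in your Grafana version, the failure ratio as well:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;# Per-second rule evaluation rate over 5-minute windows
rate(grafana_alerting_rule_evaluations_total[5m])

# Fraction of evaluations that fail (failure counter name is an assumption)
rate(grafana_alerting_rule_evaluation_failures_total[5m])
  / rate(grafana_alerting_rule_evaluations_total[5m])&lt;/code&gt;&lt;/pre&gt;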
&lt;p&gt;These factors all affect the load on the Grafana instance, but you should also be aware of the performance impact that evaluating these rules has on your data sources. Alerting queries are often the vast majority of queries handled by monitoring databases, so the same load factors that affect the Grafana instance affect them as well.&lt;/p&gt;
&lt;h2 id=&#34;limited-rule-sources-support&#34;&gt;Limited rule sources support&lt;/h2&gt;
&lt;p&gt;Grafana Alerting can retrieve alerting and recording rules &lt;strong&gt;stored&lt;/strong&gt; in most available Prometheus, Loki, Mimir, and Alertmanager compatible data sources.&lt;/p&gt;
&lt;p&gt;At this time, it does not support reading or writing alerting rules from any data sources other than the ones previously mentioned.&lt;/p&gt;
&lt;h2 id=&#34;prometheus-version-support&#34;&gt;Prometheus version support&lt;/h2&gt;
&lt;p&gt;We support the latest two minor versions of both Prometheus and Alertmanager. We cannot guarantee that older versions will work.&lt;/p&gt;
&lt;p&gt;As an example, if the current Prometheus version is &lt;code&gt;2.31.1&lt;/code&gt;, we support &amp;gt;= &lt;code&gt;2.29.0&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;the-grafana-alertmanager-can-only-receive-grafana-managed-alerts&#34;&gt;The Grafana Alertmanager can only receive Grafana managed alerts&lt;/h2&gt;
&lt;p&gt;Grafana cannot be used to receive external alerts. You can only send alerts to the Grafana Alertmanager using Grafana managed alerts.&lt;/p&gt;
&lt;p&gt;You can, however, send Grafana managed alerts to an external Alertmanager; you can find this option in the Admin tab on the Alerting page.&lt;/p&gt;
&lt;p&gt;For more information, refer to &lt;a href=&#34;https://github.com/grafana/grafana/issues/73447&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;this GitHub issue&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;high-load-on-database-caused-by-a-high-number-of-alert-instances&#34;&gt;High load on database caused by a high number of alert instances&lt;/h2&gt;
&lt;p&gt;If you have a high number of alert instances, the load on the database can become very high, because each state transition of an alert instance is saved in the database.&lt;/p&gt;
&lt;p&gt;You can prevent this by writing to the database periodically instead. To do so, enable the feature flag &lt;code&gt;alertingSaveStatePeriodic&lt;/code&gt;. By default, states are then saved to the database every 5 minutes and on each shutdown. The periodic interval can also be configured using the &lt;code&gt;state_periodic_save_interval&lt;/code&gt; configuration option.&lt;/p&gt;
&lt;p&gt;The time it takes to write to the database periodically can be monitored using the &lt;code&gt;state_full_sync_duration_seconds&lt;/code&gt; metric
that is exposed by Grafana.&lt;/p&gt;
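&lt;p&gt;A minimal &lt;code&gt;grafana.ini&lt;/code&gt; sketch of this setup, assuming the standard configuration layout (the 2m interval is only an example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-ini&#34;&gt;[feature_toggles]
; Enable periodic saving of alert instance state
enable = alertingSaveStatePeriodic

[unified_alerting]
; How often state is written to the database (defaults to 5m);
; a full save also happens on each shutdown
state_periodic_save_interval = 2m&lt;/code&gt;&lt;/pre&gt;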
]]></content><description>&lt;h1 id="performance-considerations-and-limitations">Performance considerations and limitations&lt;/h1>
&lt;p>Grafana Alerting supports multi-dimensional alerting, where one alert rule can generate many alerts. For example, you can configure an alert rule to fire an alert every time the CPU usage of an individual VM maxes out. This topic discusses performance considerations resulting from multi-dimensional alerting.&lt;/p></description></item></channel></rss>