<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Set up Alerting on Grafana Labs</title><link>https://grafana.com/docs/grafana/v10.2/alerting/set-up/</link><description>Recent content in Set up Alerting on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/grafana/v10.2/alerting/set-up/index.xml" rel="self" type="application/rss+xml"/><item><title>Upgrade Alerting</title><link>https://grafana.com/docs/grafana/v10.2/alerting/set-up/migrating-alerts/</link><pubDate>Mon, 19 Feb 2024 12:13:11 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/set-up/migrating-alerts/</guid><content><![CDATA[&lt;h1 id=&#34;upgrade-alerting&#34;&gt;Upgrade Alerting&lt;/h1&gt;
&lt;p&gt;Grafana Alerting is enabled by default for new installations and for existing installations, whether or not legacy alerting is configured.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;When upgrading, your dashboard alerts are migrated to a new format. This migration can be rolled back easily by opting out. If you have any questions regarding this migration, please contact us.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Existing installations that do not use legacy alerting will have Grafana Alerting enabled by default unless alerting is disabled in the configuration.&lt;/p&gt;
&lt;p&gt;Likewise, existing installations that use legacy alerting are automatically upgraded to Grafana Alerting unless you have opted out of Grafana Alerting before the migration takes place. During the upgrade, legacy alerts are migrated to the new alert type and no alerts or alerting data are lost.&lt;/p&gt;
&lt;p&gt;Once the upgrade has taken place, you still have the option to roll back to legacy alerting; however, we do not recommend it. If you do roll back, Grafana restores your alerts to the state they were in at the time the upgrade took place.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Cloud customers who do not want to upgrade to Grafana Alerting should contact customer support.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;If you have opted out or rolled back, you can always choose to opt in to Grafana Alerting at a later point in time.&lt;/p&gt;
&lt;p&gt;The following table provides details on the upgrade for Cloud, Enterprise, and OSS installations and the new Grafana Alerting UI.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Grafana instance upgraded to 9.0&lt;/th&gt;
              &lt;th&gt;&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;Cloud&lt;/td&gt;
              &lt;td&gt;Existing Cloud installations with legacy dashboard alerting will have two alerting icons in the left navigation panel: the old alerting plugin icon and the new Grafana Alerting icon. During the upgrade, existing alerts from the Cloud alerting plugin are migrated to Grafana Alerting. Once migration is complete, you can access and manage the older alerts from the new Grafana Alerting icon in the navigation panel, and the (older) Cloud alerting plugin is uninstalled from your cloud instance. Contact customer support if you &lt;strong&gt;do not wish&lt;/strong&gt; to migrate to Grafana Alerting for your Cloud stack. If you choose to keep legacy alerting, you will see both the new Grafana Alerting icon and the old Cloud alerting plugin in the left navigation panel.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Enterprise&lt;/td&gt;
              &lt;td&gt;Existing Enterprise instances using legacy alerting will have both the old (marked as legacy) and the new alerting icons in the navigation panel. During upgrade, existing legacy alerts are migrated to Grafana Alerting. If you wish, you can opt out of Grafana Alerting and roll back to legacy alerting. In that case, you can manage your legacy alerts from the alerting icon marked as legacy.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;OSS&lt;/td&gt;
              &lt;td&gt;Existing OSS installations with legacy dashboard alerting will have two alerting icons in the left navigation panel - the old alerting icon (marked as legacy) and the new Grafana Alerting icon. During upgrade, existing legacy alerts are migrated to Grafana Alerting. If you wish, you can opt out of Grafana Alerting and roll back to legacy alerting. In that case, you can manage your legacy alerts from the alerting icon marked as legacy.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Starting with v9.0, legacy alerting is deprecated and will be removed in a future release.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2 id=&#34;opt-out&#34;&gt;Opt out&lt;/h2&gt;
&lt;p&gt;You can opt out of Grafana Alerting at any time and switch to using legacy alerting. Alternatively, you can opt out of using alerting in its entirety.&lt;/p&gt;
&lt;h2 id=&#34;stay-on-legacy-alerting&#34;&gt;Stay on legacy alerting&lt;/h2&gt;
&lt;p&gt;When upgrading to Grafana &amp;gt; 9.0, existing installations that use legacy alerting are automatically upgraded to Grafana Alerting unless you have opted out of Grafana Alerting before the migration takes place. During the upgrade, legacy alerts are migrated to the new alert type and no alerts or alerting data are lost. To keep using legacy alerting and deactivate Grafana Alerting:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Go to your custom configuration file ($WORKING_DIR/conf/custom.ini).&lt;/li&gt;
&lt;li&gt;Enter the following in your configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[alerting]
enabled = true

[unified_alerting]
enabled = false&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Installations that have been migrated to Grafana Alerting can roll back to legacy alerting at any time.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;This topic is only relevant for OSS and Enterprise customers. Contact customer support to enable or disable Grafana Alerting for your Grafana Cloud stack.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;ngalert&lt;/code&gt; toggle previously used to enable or disable Grafana Alerting is no longer available.&lt;/p&gt;
&lt;h2 id=&#34;deactivate-alerting&#34;&gt;Deactivate alerting&lt;/h2&gt;
&lt;p&gt;You can deactivate both Grafana Alerting and legacy alerting in Grafana.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Go to your custom configuration file ($WORKING_DIR/conf/custom.ini).&lt;/li&gt;
&lt;li&gt;Enter the following in your configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[alerting]
enabled = false

[unified_alerting]
enabled = false&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Restart Grafana for the configuration changes to take effect.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you want to turn alerting back on, you can remove both flags to enable Grafana Alerting.&lt;/p&gt;
&lt;h2 id=&#34;roll-back&#34;&gt;Roll back&lt;/h2&gt;
&lt;p&gt;Once the upgrade has taken place, you still have the option to roll back to legacy alerting. If you choose to roll back, Grafana restores your alerts to the state they were in at the time the upgrade took place.&lt;/p&gt;
&lt;p&gt;To roll back to legacy alerting, enter the following in your configuration:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[alerting]
enabled = true

[unified_alerting]
enabled = false&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The next time you upgrade to Grafana Alerting, Grafana will restore your Grafana Alerting alerts and configuration to those you had before rolling back.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;If, after rolling back, you wish to delete any existing Grafana Alerting configuration and upgrade your legacy alerting configuration again from scratch, you can enable the &lt;code&gt;clean_upgrade&lt;/code&gt; option:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting.upgrade]
clean_upgrade = true&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h2 id=&#34;opt-in&#34;&gt;Opt in&lt;/h2&gt;
&lt;p&gt;If you have previously disabled alerting in Grafana, or opted out of Grafana Alerting and have decided that you would now like to use Grafana Alerting, you can choose to opt in at any time.&lt;/p&gt;
&lt;p&gt;If you have been using legacy alerting up until now, your existing alerts will be migrated to the new alert type and no alerts or alerting data are lost. Even if you choose to opt in to Grafana Alerting, you can roll back to legacy alerting at any time.&lt;/p&gt;
&lt;p&gt;To opt in to Grafana Alerting, enter the following in your configuration:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[alerting]
enabled = false

[unified_alerting]
enabled = true&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h2 id=&#34;differences-and-limitations&#34;&gt;Differences and limitations&lt;/h2&gt;
&lt;p&gt;There are some differences between Grafana Alerting and legacy dashboard alerts, and a number of features that are no
longer supported.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Differences&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;When Grafana Alerting is enabled, or Grafana is upgraded to 9.0 or later, existing legacy dashboard alerts are migrated to a format compatible with Grafana Alerting. In the Alerting page of your Grafana instance, you can view the migrated alerts alongside any new alerts.
This topic explains how legacy dashboard alerts are migrated and some limitations of the migration.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read and write access to legacy dashboard alerts and Grafana alerts are governed by the permissions of the folders storing them. During migration, legacy dashboard alert permissions are matched to the new rules permissions as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If there are dashboard permissions, a folder named &lt;code&gt;Migrated {&amp;quot;dashboardUid&amp;quot;: &amp;quot;UID&amp;quot;, &amp;quot;panelId&amp;quot;: 1, &amp;quot;alertId&amp;quot;: 1}&lt;/code&gt; is created to match the permissions of the dashboard (including the inherited permissions from the folder).&lt;/li&gt;
&lt;li&gt;If there are no dashboard permissions and the dashboard is in a folder, then the rule is linked to this folder and inherits its permissions.&lt;/li&gt;
&lt;li&gt;If there are no dashboard permissions and the dashboard is in the General folder, then the rule is linked to the &lt;code&gt;General Alerting&lt;/code&gt; folder and the rule inherits the default permissions.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;NoData&lt;/code&gt; and &lt;code&gt;Error&lt;/code&gt; settings are migrated as is to the corresponding settings in Grafana Alerting, except in two situations:&lt;/p&gt;
&lt;p&gt;3.1. As there is no &lt;code&gt;Keep Last State&lt;/code&gt; option for &lt;code&gt;No Data&lt;/code&gt; in Grafana Alerting, this option is migrated to &lt;code&gt;NoData&lt;/code&gt;. The &lt;code&gt;Keep Last State&lt;/code&gt; option for &lt;code&gt;Error&lt;/code&gt; is migrated to a new option, &lt;code&gt;Error&lt;/code&gt;. To match the behavior of &lt;code&gt;Keep Last State&lt;/code&gt;, in both cases Grafana automatically creates a silence for each alert rule with a duration of 1 year during the migration.&lt;/p&gt;
&lt;p&gt;3.2. Due to lack of validation, legacy alert rules imported via JSON or provisioned along with dashboards can contain arbitrary values for &lt;code&gt;NoData&lt;/code&gt; and &lt;a href=&#34;/docs/sources/alerting/alerting-rules/create-grafana-managed-rule.md#configure-no-data-and-error-handling&#34;&gt;&lt;code&gt;Error&lt;/code&gt;&lt;/a&gt;. In this situation, Grafana will use the default setting: &lt;code&gt;NoData&lt;/code&gt; for No data, and &lt;code&gt;Error&lt;/code&gt; for Error.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Notification channels are migrated to an Alertmanager configuration with the appropriate routes and receivers. Default notification channels are added as contact points to the default route. Notification channels not associated with any Dashboard alert go to the &lt;code&gt;autogen-unlinked-channel-recv&lt;/code&gt; route.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Unlike legacy dashboard alerts where images in notifications are enabled per contact point, images in notifications for Grafana Alerting must be enabled in the Grafana configuration, either in the configuration file or environment variables, and are enabled for either all or no contact points.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The JSON format for webhook notifications has changed in Grafana Alerting and uses the format from &lt;a href=&#34;https://prometheus.io/docs/alerting/latest/configuration/#webhook_config&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Prometheus Alertmanager&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Alerting on Prometheus &lt;code&gt;Both&lt;/code&gt; type queries is not supported in Grafana Alerting. Existing legacy alerts with &lt;code&gt;Both&lt;/code&gt; type queries are migrated to Grafana Alerting as alerts with &lt;code&gt;Range&lt;/code&gt; type queries.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Limitations&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Since &lt;code&gt;Hipchat&lt;/code&gt; and &lt;code&gt;Sensu&lt;/code&gt; notification channels are no longer supported, legacy alerts associated with these channels are not automatically migrated to Grafana Alerting. Assign the legacy alerts to a supported notification channel so that you continue to receive notifications for those alerts.&lt;/li&gt;
&lt;/ol&gt;
]]></content><description>&lt;h1 id="upgrade-alerting">Upgrade Alerting&lt;/h1>
&lt;p>Grafana Alerting is enabled by default for new installations or existing installations whether or not legacy alerting is configured.&lt;/p>
&lt;div class="admonition admonition-note">&lt;blockquote>&lt;p class="title text-uppercase">Note&lt;/p>&lt;p>When upgrading, your dashboard alerts are migrated to a new format. This migration can be rolled back easily by opting out. If you have any questions regarding this migration, please contact us.&lt;/p></description></item><item><title>Add an external Alertmanager</title><link>https://grafana.com/docs/grafana/v10.2/alerting/set-up/configure-alertmanager/</link><pubDate>Tue, 24 Oct 2023 14:34:52 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/set-up/configure-alertmanager/</guid><content><![CDATA[&lt;h1 id=&#34;add-an-external-alertmanager&#34;&gt;Add an external Alertmanager&lt;/h1&gt;
&lt;p&gt;Set up Grafana to use an external Alertmanager as a single Alertmanager to receive all of your alerts. This external Alertmanager can then be configured and administered from within Grafana itself.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Grafana Alerting does not support sending alerts to the AWS Managed Service for Prometheus due to the lack of sigv4 support in Prometheus.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Once you have added the Alertmanager, you can use the Grafana Alerting UI to manage silences, contact points, and notification policies. A drop-down option in these pages allows you to switch between alertmanagers.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Starting with Grafana 9.2, the URL configuration of external alertmanagers from the Admin tab on the Alerting page is deprecated. It will be removed in a future release.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;External alertmanagers should now be configured as data sources using Grafana Configuration from the main Grafana navigation menu. This enables you to manage the contact points and notification policies of external alertmanagers from within Grafana and also encrypts HTTP basic authentication credentials that were previously visible when configuring external alertmanagers by URL.&lt;/p&gt;
&lt;p&gt;To add an external Alertmanager, complete the following steps.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Connections&lt;/strong&gt; in the left-side menu.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the Connections page, search for &lt;code&gt;Alertmanager&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the &lt;strong&gt;Create a new data source&lt;/strong&gt; button.&lt;/p&gt;
&lt;p&gt;If you don&amp;rsquo;t see this button, you may need to install the plugin, relaunch your Cloud instance, and then repeat steps 1 and 2.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fill out the fields on the page, as required.&lt;/p&gt;
&lt;p&gt;If you are provisioning your data source, set the flag &lt;code&gt;handleGrafanaManagedAlerts&lt;/code&gt; in the &lt;code&gt;jsonData&lt;/code&gt; field to &lt;code&gt;true&lt;/code&gt; to send Grafana-managed alerts to this Alertmanager.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Prometheus, Grafana Mimir, and Cortex implementations of Alertmanager are supported. For Prometheus, contact points and notification policies are read-only in the Grafana Alerting UI.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Save &amp;amp; test&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
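&lt;p&gt;If you provision the Alertmanager data source from a file (step 4), a minimal sketch might look like the following. The data source name, URL, and &lt;code&gt;implementation&lt;/code&gt; value are illustrative; adjust them to your environment.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: 1

datasources:
  # Example Alertmanager data source; name and URL are placeholders
  - name: Alertmanager
    type: alertmanager
    url: http://localhost:9093
    access: proxy
    jsonData:
      # Send Grafana-managed alerts to this Alertmanager
      handleGrafanaManagedAlerts: true
      # One of: prometheus, mimir, cortex
      implementation: prometheus&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;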
]]></content><description>&lt;h1 id="add-an-external-alertmanager">Add an external Alertmanager&lt;/h1>
&lt;p>Set up Grafana to use an external Alertmanager as a single Alertmanager to receive all of your alerts. This external Alertmanager can then be configured and administered from within Grafana itself.&lt;/p></description></item><item><title>Import and export Grafana Alerting resources</title><link>https://grafana.com/docs/grafana/v10.2/alerting/set-up/provision-alerting-resources/</link><pubDate>Mon, 19 Feb 2024 12:13:11 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/set-up/provision-alerting-resources/</guid><content><![CDATA[&lt;h1 id=&#34;import-and-export-grafana-alerting-resources&#34;&gt;Import and export Grafana Alerting resources&lt;/h1&gt;
&lt;p&gt;Alerting infrastructure is often complex, with many pieces of the pipeline that often live in different places. Scaling this across multiple teams and organizations is an especially challenging task. Importing and exporting (or provisioning) your alerting resources in Grafana Alerting makes this process easier by enabling you to create, manage, and maintain your alerting data in a way that best suits your organization.&lt;/p&gt;
&lt;p&gt;You can import alert rules, contact points, notification policies, mute timings, and templates.&lt;/p&gt;
&lt;p&gt;You cannot edit imported alerting resources in the Grafana UI in the same way as alerting resources that were not imported. You can only edit imported contact points, notification policies, templates, and mute timings in the source where they were created. For example, if you manage your alerting resources using files from disk, you cannot edit the data in Terraform or from within Grafana.&lt;/p&gt;
&lt;p&gt;To modify imported alert rules, you can use the &lt;strong&gt;Modify export&lt;/strong&gt; feature to edit and then export.&lt;/p&gt;
&lt;p&gt;Choose from the options below to import your Grafana Alerting resources.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Use file provisioning to manage your Grafana Alerting resources, such as alert rules and contact points, through files on disk.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;File provisioning is not available in Grafana Cloud instances.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the Alerting Provisioning HTTP API.&lt;/p&gt;
&lt;p&gt;For more information on the Alerting Provisioning HTTP API, refer to &lt;a href=&#34;/docs/grafana/v10.2/developers/http_api/alerting_provisioning/&#34;&gt;Alerting provisioning HTTP API&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;a href=&#34;https://www.terraform.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Terraform&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
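&lt;p&gt;As a sketch of option 1, a contact point provisioned from a file on disk might look like this (the names, UID, and email address are illustrative placeholders):&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: 1

contactPoints:
  # Example contact point; orgId 1 is the default organization
  - orgId: 1
    name: example-email
    receivers:
      - uid: example-email-uid
        type: email
        settings:
          addresses: ops@example.com&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;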
&lt;p&gt;&lt;strong&gt;Useful Links:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/docs/grafana/v10.2/administration/provisioning/&#34;&gt;Grafana provisioning&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/docs/grafana/v10.2/developers/http_api/alerting_provisioning/&#34;&gt;Grafana Alerting provisioning API&lt;/a&gt;&lt;/p&gt;
]]></content><description>&lt;h1 id="import-and-export-grafana-alerting-resources">Import and export Grafana Alerting resources&lt;/h1>
&lt;p>Alerting infrastructure is often complex, with many pieces of the pipeline that often live in different places. Scaling this across multiple teams and organizations is an especially challenging task. Importing and exporting (or provisioning) your alerting resources in Grafana Alerting makes this process easier by enabling you to create, manage, and maintain your alerting data in a way that best suits your organization.&lt;/p></description></item><item><title>Enable alerting high availability</title><link>https://grafana.com/docs/grafana/v10.2/alerting/set-up/configure-high-availability/</link><pubDate>Tue, 02 Jan 2024 23:42:12 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/set-up/configure-high-availability/</guid><content><![CDATA[&lt;h1 id=&#34;enable-alerting-high-availability&#34;&gt;Enable alerting high availability&lt;/h1&gt;
&lt;p&gt;You can enable alerting high availability support by updating the Grafana configuration file. If you run Grafana in a Kubernetes cluster, additional steps are required. Both options are described below.
Note that deduplication applies only to notifications; each alert rule is still evaluated on every Grafana instance, so events in the alerting state history are duplicated once per running Grafana instance.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;If you use a mix of &lt;code&gt;execute_alerts=false&lt;/code&gt; and &lt;code&gt;execute_alerts=true&lt;/code&gt; on the HA nodes, the instances with &lt;code&gt;execute_alerts=false&lt;/code&gt; will not show any alert status, because alert state is not shared amongst the Grafana instances.
The HA settings (&lt;code&gt;ha_peers&lt;/code&gt;, and so on) only apply to alert notification delivery, that is, deduplication of alert notifications and silences, as mentioned above.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;enable-alerting-high-availability-in-grafana-using-memberlist&#34;&gt;Enable alerting high availability in Grafana using Memberlist&lt;/h2&gt;
&lt;h3 id=&#34;before-you-begin&#34;&gt;Before you begin&lt;/h3&gt;
&lt;p&gt;Since gossiping of notifications and silences uses both TCP and UDP port &lt;code&gt;9094&lt;/code&gt;, ensure that each Grafana instance is able to accept incoming connections on these ports.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To enable high availability support:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the &lt;code&gt;[unified_alerting]&lt;/code&gt; section.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;ha_peers&lt;/code&gt; to the list of hosts for each Grafana instance in the cluster (using a format of host:port), for example, &lt;code&gt;ha_peers=10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094&lt;/code&gt;.
You must have at least one (1) Grafana instance added to &lt;code&gt;ha_peers&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;ha_listen_address&lt;/code&gt; to the instance IP address using a format of &lt;code&gt;host:port&lt;/code&gt; (or the &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/pods/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Pod&amp;rsquo;s&lt;/a&gt; IP when using Kubernetes).
By default, it is set to listen on all interfaces (&lt;code&gt;0.0.0.0&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;ha_peer_timeout&lt;/code&gt; in the &lt;code&gt;[unified_alerting]&lt;/code&gt; section of the custom.ini to specify the time to wait for an instance to send a notification via the Alertmanager. The default value is 15s; you may need to increase it if Grafana servers are located in different geographic regions or if the network latency between them is high.&lt;/li&gt;
&lt;/ol&gt;
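&lt;p&gt;Putting the steps above together, a Memberlist configuration might look like this in each instance&amp;rsquo;s custom.ini (the peer IP addresses are illustrative):&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting]
# Listen on all interfaces, TCP and UDP port 9094
ha_listen_address = 0.0.0.0:9094
# The other Grafana instances in the cluster (example addresses)
ha_peers = 10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094
# Increase if instances are in different regions or latency is high
ha_peer_timeout = 15s&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;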
&lt;h2 id=&#34;enable-alerting-high-availability-in-grafana-using-redis&#34;&gt;Enable alerting high availability in Grafana using Redis&lt;/h2&gt;
&lt;p&gt;As an alternative to Memberlist, you can use Redis for high availability. This is useful if you want to have a central
database for HA and cannot support the meshing of all Grafana servers.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Make sure you have a Redis server that supports pub/sub. If you use a proxy in front of your Redis cluster, make sure the proxy supports pub/sub.&lt;/li&gt;
&lt;li&gt;In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the &lt;code&gt;[unified_alerting]&lt;/code&gt; section.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;ha_redis_address&lt;/code&gt; to the address of the Redis server Grafana should connect to.&lt;/li&gt;
&lt;li&gt;[Optional] If authentication is enabled on the Redis server, set the username and password using &lt;code&gt;ha_redis_username&lt;/code&gt; and &lt;code&gt;ha_redis_password&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;[Optional] Set &lt;code&gt;ha_redis_prefix&lt;/code&gt; to something unique if you plan to share the Redis server with multiple Grafana instances.&lt;/li&gt;
&lt;/ol&gt;
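&lt;p&gt;The Redis-based steps above can be sketched as follows (the address, credentials, and prefix are illustrative placeholders):&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting]
# Example Redis connection; replace with your own server and credentials
ha_redis_address = redis.example.com:6379
ha_redis_username = grafana
ha_redis_password = mysecret
# Unique prefix when sharing one Redis server across Grafana instances
ha_redis_prefix = grafana-stack-a&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;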
&lt;p&gt;The following metrics can be used for meta monitoring, exposed by Grafana&amp;rsquo;s &lt;code&gt;/metrics&lt;/code&gt; endpoint:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Metric&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_messages_received_total&lt;/td&gt;
              &lt;td&gt;Total number of cluster messages received.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_messages_received_size_total&lt;/td&gt;
              &lt;td&gt;Total size of cluster messages received.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_messages_sent_total&lt;/td&gt;
              &lt;td&gt;Total number of cluster messages sent.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_messages_sent_size_total&lt;/td&gt;
              &lt;td&gt;Total size of cluster messages sent.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_messages_publish_failures_total&lt;/td&gt;
              &lt;td&gt;Total number of messages that failed to be published.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_members&lt;/td&gt;
              &lt;td&gt;Number indicating current number of members in cluster.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_peer_position&lt;/td&gt;
              &lt;td&gt;Position the Alertmanager instance believes it&amp;rsquo;s in. The position determines a peer&amp;rsquo;s behavior in the cluster.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_pings_seconds&lt;/td&gt;
              &lt;td&gt;Histogram of latencies for ping messages.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;alertmanager_cluster_pings_failures_total&lt;/td&gt;
              &lt;td&gt;Total number of failed pings.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
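&lt;p&gt;As a quick sanity check outside of a full monitoring stack, you can scrape the &lt;code&gt;/metrics&lt;/code&gt; endpoint and read the clustering gauges directly. The following is a minimal sketch; the sample payload is illustrative, not real scrape output:&lt;/p&gt;

```python
# Minimal sketch: parse Prometheus exposition-format text (as served by
# Grafana's /metrics endpoint) and extract the unlabeled clustering gauges.
# The sample payload below is illustrative, not real scrape output.
sample = """\
# HELP alertmanager_cluster_members Number indicating current number of members in cluster.
# TYPE alertmanager_cluster_members gauge
alertmanager_cluster_members 3
# HELP alertmanager_peer_position Position the Alertmanager instance believes it's in.
# TYPE alertmanager_peer_position gauge
alertmanager_peer_position 0
"""

def parse_plain_gauges(text):
    """Return {metric_name: value} for samples without labels."""
    gauges = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue
        name, _, value = line.partition(" ")
        if "{" not in name:  # this simple sketch skips labeled samples
            gauges[name] = float(value)
    return gauges

print(parse_plain_gauges(sample)["alertmanager_cluster_members"])  # 3.0
```

&lt;p&gt;In a healthy HA setup, &lt;code&gt;alertmanager_cluster_members&lt;/code&gt; should report the same value on every peer.&lt;/p&gt;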
&lt;/section&gt;&lt;h2 id=&#34;enable-alerting-high-availability-using-kubernetes&#34;&gt;Enable alerting high availability using Kubernetes&lt;/h2&gt;
&lt;p&gt;If you are using Kubernetes, you can expose the pod IP &lt;a href=&#34;https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;through an environment variable&lt;/a&gt; via the container definition.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Add port 9094 to the Grafana deployment:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;ports:
  - containerPort: 3000
    name: http-grafana
    protocol: TCP
  - containerPort: 9094
    name: grafana-alert
    protocol: TCP&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Add the environment variables to the Grafana deployment:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Create a headless service that returns the pod IP instead of the service IP, which is what &lt;code&gt;ha_peers&lt;/code&gt; requires:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: Service
metadata:
  name: grafana-alerting
  namespace: grafana
  labels:
    app.kubernetes.io/name: grafana-alerting
    app.kubernetes.io/part-of: grafana
spec:
  type: ClusterIP
  clusterIP: &amp;#39;None&amp;#39;
  ports:
    - port: 9094
  selector:
    app: grafana&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;
&lt;p&gt;Make sure your Grafana deployment has a label matching the selector, for example &lt;code&gt;app: grafana&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the following to &lt;code&gt;grafana.ini&lt;/code&gt;:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;INI&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-ini&#34;&gt;[unified_alerting]
enabled = true
ha_listen_address = &amp;#34;${POD_IP}:9094&amp;#34;
ha_peers = &amp;#34;grafana-alerting.grafana:9094&amp;#34;
ha_advertise_address = &amp;#34;${POD_IP}:9094&amp;#34;
ha_peer_timeout = 15s&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
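&lt;p&gt;Note that &lt;code&gt;${POD_IP}&lt;/code&gt; is resolved from the environment variable exposed in the container definition. A minimal Python sketch of this style of expansion (the pod IP below is hypothetical):&lt;/p&gt;

```python
# Sketch of environment-variable expansion as used for ha_listen_address
# and ha_advertise_address; the pod IP below is hypothetical.
import os

os.environ["POD_IP"] = "10.0.0.7"  # in Kubernetes, injected via fieldRef status.podIP
value = os.path.expandvars("${POD_IP}:9094")
print(value)  # 10.0.0.7:9094
```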
]]></content><description>&lt;h1 id="enable-alerting-high-availability">Enable alerting high availability&lt;/h1>
&lt;p>You can enable alerting high availability support by updating the Grafana configuration file. If you run Grafana in a Kubernetes cluster, additional steps are required. Both options are described below.
Note that deduplication applies only to notifications; each alert rule is still evaluated on every Grafana instance, so events in the alert state history are duplicated by the number of running Grafana instances.&lt;/p></description></item><item><title>Configure Alert State History</title><link>https://grafana.com/docs/grafana/v10.2/alerting/set-up/configure-alert-state-history/</link><pubDate>Fri, 12 Jan 2024 15:34:35 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/set-up/configure-alert-state-history/</guid><content><![CDATA[&lt;h1 id=&#34;configure-alert-state-history&#34;&gt;Configure Alert State History&lt;/h1&gt;
&lt;p&gt;Starting with Grafana 10, Alerting can record all alert rule state changes for your Grafana managed alert rules in a Loki instance.&lt;/p&gt;
&lt;p&gt;This allows you to explore the behavior of your alert rules in the Grafana Explore view and enhances the existing state history modal with a powerful new visualization.&lt;/p&gt;
&lt;!-- image here, maybe the one from the blog? --&gt;
&lt;h2 id=&#34;configuring-loki&#34;&gt;Configuring Loki&lt;/h2&gt;
&lt;p&gt;To set up alert state history, make sure you have a Loki instance that Grafana can write data to. The default settings might need some tweaking, as the state history modal might query up to 30 days of data.&lt;/p&gt;
&lt;p&gt;The following change to the default configuration should work for most instances, but we recommend reviewing the full Loki configuration settings and adjusting them to your needs.&lt;/p&gt;
&lt;p&gt;Because this can impact the performance of an existing Loki instance, we recommend using a separate Loki instance for the alert state history.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;limits_config:
  split_queries_by_interval: &amp;#39;24h&amp;#39;
  max_query_parallelism: 32&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h2 id=&#34;configuring-grafana&#34;&gt;Configuring Grafana&lt;/h2&gt;
&lt;p&gt;Some additional configuration in the Grafana configuration file is required for alert state history to work.&lt;/p&gt;
&lt;p&gt;The example below instructs Grafana to write alert state history to a local Loki instance:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting.state_history]
enabled = true
backend = &amp;#34;loki&amp;#34;
loki_remote_url = &amp;#34;http://localhost:3100&amp;#34;

[feature_toggles]
enable = alertStateHistoryLokiSecondary, alertStateHistoryLokiPrimary, alertStateHistoryLokiOnly&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;!-- TODO can we add some more info here about the feature flags and the various different supported setups with Loki as Primary / Secondary, etc? --&gt;
&lt;h2 id=&#34;adding-the-loki-data-source&#34;&gt;Adding the Loki data source&lt;/h2&gt;
&lt;p&gt;See our instructions on &lt;a href=&#34;/docs/grafana/latest/administration/data-source-management/&#34;&gt;adding a data source&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;querying-the-history&#34;&gt;Querying the history&lt;/h2&gt;
&lt;p&gt;If everything is set up correctly, you can use the Grafana Explore view to start querying the Loki data source.&lt;/p&gt;
&lt;p&gt;A simple litmus test to see if data is being written correctly into the Loki instance is the following query:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;LogQL&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-logql&#34;&gt;{ from=&amp;#34;state-history&amp;#34; } | json&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
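&lt;p&gt;If you want to narrow the results to a single state, you can extend the query with a filter on the extracted JSON fields. This assumes the state history entries expose a &lt;code&gt;current&lt;/code&gt; field for the new state; field names may vary between Grafana versions:&lt;/p&gt;

```logql
{ from="state-history" } | json | current = "Alerting"
```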
]]></content><description>&lt;h1 id="configure-alert-state-history">Configure Alert State History&lt;/h1>
&lt;p>Starting with Grafana 10, Alerting can record all alert rule state changes for your Grafana managed alert rules in a Loki instance.&lt;/p></description></item><item><title>Performance considerations and limitations</title><link>https://grafana.com/docs/grafana/v10.2/alerting/set-up/performance-limitations/</link><pubDate>Tue, 24 Oct 2023 14:34:52 +0000</pubDate><guid>https://grafana.com/docs/grafana/v10.2/alerting/set-up/performance-limitations/</guid><content><![CDATA[&lt;h1 id=&#34;performance-considerations-and-limitations&#34;&gt;Performance considerations and limitations&lt;/h1&gt;
&lt;p&gt;Grafana Alerting supports multi-dimensional alerting, where one alert rule can generate many alerts. For example, you can configure an alert rule to fire an alert every time the CPU usage of an individual VM maxes out. This topic discusses performance considerations resulting from multi-dimensional alerting.&lt;/p&gt;
&lt;p&gt;Evaluating alerting rules consumes RAM and CPU to compute the output of an alerting query, and network resources to send alert notifications and write the results to the Grafana SQL database. The configuration of individual alert rules affects the resource consumption and, therefore, the maximum number of rules a given configuration can support.&lt;/p&gt;
&lt;p&gt;The following section provides a list of alerting performance considerations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Frequency of rule evaluation. The &amp;ldquo;Evaluate Every&amp;rdquo; property of an alert rule controls how often the rule is evaluated. We recommend using the lowest acceptable evaluation frequency to support more concurrent rules.&lt;/li&gt;
&lt;li&gt;Cardinality of the rule&amp;rsquo;s result set. For example, suppose you are monitoring API response errors for every API path, on every VM in your fleet. This set has a cardinality of &lt;em&gt;n&lt;/em&gt; paths multiplied by &lt;em&gt;v&lt;/em&gt; VMs. You can reduce the cardinality of a result set, perhaps by monitoring errors per VM instead of per path per VM.&lt;/li&gt;
&lt;li&gt;Complexity of the alerting query. Queries that data sources can process and respond to quickly consume fewer resources. This consideration is less important than the two above, but once you have reduced those as much as possible, looking at individual query performance can make a difference.&lt;/li&gt;
&lt;/ul&gt;
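&lt;p&gt;The cardinality point can be made concrete with a small sketch (the fleet sizes are hypothetical):&lt;/p&gt;

```python
# Sketch of the cardinality consideration: alert instances grow
# multiplicatively with each label dimension. Fleet sizes are hypothetical.
paths, vms = 50, 40

per_path_per_vm = paths * vms  # one instance per (path, VM) pair
per_vm_only = vms              # one instance per VM

print(per_path_per_vm)  # 2000 alert instances per evaluation
print(per_vm_only)      # 40 alert instances per evaluation
```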
&lt;p&gt;Each evaluation of an alert rule generates a set of alert instances, one for each member of the result set. The state of all instances is written to the &lt;code&gt;alert_instance&lt;/code&gt; table in Grafana&amp;rsquo;s SQL database. This volume of write-heavy operations can cause issues when using SQLite.&lt;/p&gt;
&lt;p&gt;Grafana Alerting exposes a metric, &lt;code&gt;grafana_alerting_rule_evaluations_total&lt;/code&gt;, that counts the number of alert rule evaluations. To get a feel for the influence of rule evaluations on your Grafana instance, you can observe the rate of evaluations and compare it with resource consumption. In a Prometheus-compatible database, you can use the query &lt;code&gt;rate(grafana_alerting_rule_evaluations_total[5m])&lt;/code&gt; to compute the rate over 5-minute windows. Keep in mind that this isn&amp;rsquo;t the full picture of rule evaluation: for example, the load will be unevenly distributed if some rules evaluate every 10 seconds and others every 30 minutes.&lt;/p&gt;
&lt;p&gt;These factors all affect the load on the Grafana instance, but you should also be aware of the performance impact that evaluating these rules has on your data sources. Alerting queries are often the vast majority of queries handled by monitoring databases, so the same load factors that affect the Grafana instance affect them as well.&lt;/p&gt;
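&lt;p&gt;As a back-of-the-envelope illustration of uneven load (rule counts and intervals are hypothetical):&lt;/p&gt;

```python
# Sketch: evaluations per hour for a hypothetical mix of alert rules,
# showing how a few fast rules can dominate the evaluation load.
def evaluations_per_hour(rules):
    """rules maps evaluation interval in seconds -> number of rules."""
    return sum((3600 // interval) * count for interval, count in rules.items())

fleet = {10: 5, 60: 20, 1800: 100}  # hypothetical: 5 fast, 20 medium, 100 slow rules
print(evaluations_per_hour(fleet))  # 3200
```

&lt;p&gt;In this hypothetical fleet, the five 10-second rules account for 1,800 of the 3,200 evaluations per hour, despite being only 4% of the rules.&lt;/p&gt;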
&lt;h2 id=&#34;limited-rule-sources-support&#34;&gt;Limited rule sources support&lt;/h2&gt;
&lt;p&gt;Grafana Alerting can retrieve alerting and recording rules &lt;strong&gt;stored&lt;/strong&gt; in most available Prometheus, Loki, Mimir, and Alertmanager compatible data sources.&lt;/p&gt;
&lt;p&gt;At this time, it does not support reading or writing alerting rules from any data sources other than those mentioned above.&lt;/p&gt;
&lt;h2 id=&#34;prometheus-version-support&#34;&gt;Prometheus version support&lt;/h2&gt;
&lt;p&gt;We support the latest two minor versions of both Prometheus and Alertmanager. We cannot guarantee that older versions will work.&lt;/p&gt;
&lt;p&gt;As an example, if the current Prometheus version is &lt;code&gt;2.31.1&lt;/code&gt;, we support &amp;gt;= &lt;code&gt;2.29.0&lt;/code&gt;.&lt;/p&gt;
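&lt;p&gt;The support window from the example above can be sketched as (version strings are illustrative):&lt;/p&gt;

```python
# Sketch of the stated support window, following the document's example:
# current 2.31.1 implies support for >= 2.29.0.
def min_supported(current: str) -> str:
    major, minor, _patch = (int(x) for x in current.split("."))
    return f"{major}.{minor - 2}.0"

print(min_supported("2.31.1"))  # 2.29.0
```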
&lt;h2 id=&#34;the-grafana-alertmanager-can-only-receive-grafana-managed-alerts&#34;&gt;The Grafana Alertmanager can only receive Grafana managed alerts&lt;/h2&gt;
&lt;p&gt;Grafana cannot be used to receive external alerts. You can only send alerts to the Grafana Alertmanager using Grafana managed alerts.&lt;/p&gt;
&lt;p&gt;You can, however, send Grafana managed alerts to an external Alertmanager; this option is available in the Admin tab on the Alerting page.&lt;/p&gt;
&lt;p&gt;For more information, refer to &lt;a href=&#34;https://github.com/grafana/grafana/discussions/45773&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;this GitHub discussion&lt;/a&gt;.&lt;/p&gt;
]]></content><description>&lt;h1 id="performance-considerations-and-limitations">Performance considerations and limitations&lt;/h1>
&lt;p>Grafana Alerting supports multi-dimensional alerting, where one alert rule can generate many alerts. For example, you can configure an alert rule to fire an alert every time the CPU of individual VMs max out. This topic discusses performance considerations resulting from multi-dimensional alerting.&lt;/p></description></item></channel></rss>