<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Monitor Tempo on Grafana Labs</title><link>https://grafana.com/docs/tempo/v2.10.x/operations/monitor/</link><description>Recent content in Monitor Tempo on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/tempo/v2.10.x/operations/monitor/index.xml" rel="self" type="application/rss+xml"/><item><title>Set up monitoring for Tempo</title><link>https://grafana.com/docs/tempo/v2.10.x/operations/monitor/set-up-monitoring/</link><pubDate>Thu, 09 Apr 2026 14:59:14 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.10.x/operations/monitor/set-up-monitoring/</guid><content><![CDATA[&lt;h1 id=&#34;set-up-monitoring-for-tempo&#34;&gt;Set up monitoring for Tempo&lt;/h1&gt;
&lt;p&gt;You can set up monitoring for Tempo using an existing or new cluster.
If you don&amp;rsquo;t have a cluster available, you can use the linked documentation to set up Tempo, Mimir, and Grafana using Helm, or you can use Grafana Cloud.&lt;/p&gt;
&lt;p&gt;You can use this procedure to set up monitoring for Tempo running in monolithic (single binary) or microservices modes.&lt;/p&gt;
&lt;p&gt;To set up monitoring, you need to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use Grafana Alloy to remote-write to Tempo and set up Grafana to visualize the tracing data by following 
    &lt;a href=&#34;/docs/tempo/v2.10.x/setup/set-up-test-app/&#34;&gt;Set up a test app&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Update your Alloy configuration to scrape metrics from Tempo so you can monitor your Tempo deployment.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This procedure assumes that you have set up Tempo 
    &lt;a href=&#34;/docs/tempo/v2.10.x/setup/helm-chart/&#34;&gt;using the Helm chart&lt;/a&gt; with 
    &lt;a href=&#34;/docs/alloy/v2.10.x/set-up/install/&#34;&gt;Grafana Alloy&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The steps outlined below use the Alloy configurations described in 
    &lt;a href=&#34;/docs/tempo/v2.10.x/setup/set-up-test-app/&#34;&gt;Set up a test application for a Tempo cluster&lt;/a&gt;.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Update any instructions in this document for your own deployment.&lt;/p&gt;
&lt;p&gt;If you use the 
    &lt;a href=&#34;/docs/alloy/v2.10.x/set-up/install/kubernetes/&#34;&gt;Kubernetes integration Grafana Alloy Helm chart&lt;/a&gt;, you can use the Kubernetes scrape annotations to automatically scrape Tempo.
You’ll need to add the labels to all of the deployed components.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;before-you-begin&#34;&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;To configure monitoring using the examples on this page, you’ll need the following running in your Kubernetes environment:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tempo instance - For storing traces and emitting metrics (
    &lt;a href=&#34;/docs/tempo/v2.10.x/setup/helm-chart/&#34;&gt;install using the &lt;code&gt;tempo-distributed&lt;/code&gt; Helm chart&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Mimir - For storing metrics emitted from Tempo (&lt;a href=&#34;/docs/helm-charts/mimir-distributed/latest/get-started-helm-charts/&#34;&gt;install using the &lt;code&gt;mimir-distributed&lt;/code&gt; Helm chart&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Grafana - For visualizing traces and metrics (
    &lt;a href=&#34;/docs/grafana/next/setup-grafana/installation/kubernetes/#deploy-grafana-oss-on-kubernetes&#34;&gt;install on Kubernetes&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can use Grafana Alloy or the OpenTelemetry Collector. This procedure provides examples only for Grafana Alloy.&lt;/p&gt;
&lt;p&gt;The rest of this documentation assumes that the Tempo, Grafana, and Mimir instances use the same Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;If you are using Grafana Cloud, you can skip the installation sections and set up the &lt;a href=&#34;/docs/grafana-cloud/connect-externally-hosted/data-sources/prometheus/&#34;&gt;Mimir (Prometheus)&lt;/a&gt; and &lt;a href=&#34;/docs/grafana-cloud/connect-externally-hosted/data-sources/tempo/&#34;&gt;Tempo data sources&lt;/a&gt; in your Grafana instance.&lt;/p&gt;
&lt;h2 id=&#34;use-a-test-app-for-tempo-to-send-data-to-grafana&#34;&gt;Use a test app for Tempo to send data to Grafana&lt;/h2&gt;
&lt;p&gt;Before you can monitor Tempo data, you need to configure Grafana Alloy to send traces to Tempo.&lt;/p&gt;
&lt;p&gt;Use &lt;a href=&#34;/docs/tempo/latest/setup/set-up-test-app/&#34;&gt;these instructions to create a test application&lt;/a&gt; in your Tempo cluster.
These steps configure Grafana Alloy to &lt;code&gt;remote-write&lt;/code&gt; to Tempo.
In addition, the test app instructions explain how to configure a Tempo data source in Grafana and view the tracing data.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;If you already have a Tempo environment, then there is no need to create a test app.
This guide assumes that the Tempo and Grafana Alloy configurations are the same as or based on &lt;a href=&#34;/docs/tempo/latest/setup/set-up-test-app/&#34;&gt;these instructions to create a test application&lt;/a&gt;, as you&amp;rsquo;ll augment those configurations to enable Tempo metrics monitoring.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;In these examples, Tempo is installed in a namespace called &lt;code&gt;tempo&lt;/code&gt;.
Change this namespace name in the examples as needed to fit your own environment.&lt;/p&gt;
&lt;h2 id=&#34;configure-grafana&#34;&gt;Configure Grafana&lt;/h2&gt;
&lt;p&gt;In your Grafana instance, you&amp;rsquo;ll need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
    &lt;a href=&#34;/docs/grafana/next/datasources/tempo/configure-tempo-data-source/&#34;&gt;A Tempo data source&lt;/a&gt; (created in the previous section)&lt;/li&gt;
&lt;li&gt;A 
    &lt;a href=&#34;/docs/grafana/next/datasources/prometheus/&#34;&gt;Mimir (Prometheus) data source&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
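&lt;p&gt;If you provision data sources from files, a minimal Mimir (Prometheus) data source might look like the following sketch. The URL is an assumption based on a default &lt;code&gt;mimir-distributed&lt;/code&gt; install; replace it with your own Mimir query endpoint.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: 1
datasources:
  - name: Mimir
    type: prometheus
    access: proxy
    # Assumed service name and port; adjust for your cluster.
    url: http://mimir-nginx.mimir.svc:80/prometheus
&lt;/code&gt;&lt;/pre&gt;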


&lt;div data-shared=&#34;metamonitoring.md&#34;&gt;
            &lt;p&gt;Metamonitoring for Tempo is handled by the &lt;a href=&#34;https://github.com/grafana/k8s-monitoring-helm&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Grafana Kubernetes Helm chart&lt;/a&gt; (&amp;gt;=v2.1). Metamonitoring can be used with both microservices and single binary deployments of Tempo.&lt;/p&gt;
&lt;p&gt;The Helm chart configures Grafana Alloy to collect metrics and logs.&lt;/p&gt;
&lt;h2 id=&#34;steps&#34;&gt;Steps&lt;/h2&gt;
&lt;p&gt;This procedure uses the &lt;a href=&#34;https://github.com/grafana/k8s-monitoring-helm&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Grafana Kubernetes Helm chart&lt;/a&gt;. The &lt;code&gt;values.yml&lt;/code&gt; file sets parameters in the Helm chart.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Add the Grafana Helm chart repository, or update it if you&amp;rsquo;ve already added it.&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;helm repo add grafana https://grafana.github.io/helm-charts  
helm repo update&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new file named &lt;code&gt;values.yml&lt;/code&gt;. Add the following example into your &lt;code&gt;values.yml&lt;/code&gt; file and save it. Where indicated, add the values specific to your instance.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;cluster:
    name: traces # Name of the cluster. This populates the cluster label.

integrations:
    tempo:
        instances:
          - name: &amp;#34;traces&amp;#34; # This is the name for the instance label that reports.
            namespaces:
                - traces # This is the namespace that is searched for Tempo instances. Change this accordingly.
            metrics:
                enabled: true
                portName: prom-metrics
            logs:
                enabled: true
            labelSelectors:
                app.kubernetes.io/name: tempo

alloy:
    name: &amp;#34;traces-monitoring&amp;#34;

destinations:
- name: &amp;#34;metrics&amp;#34;
  type: prometheus
  url: &amp;#34;&amp;lt;url&amp;gt;&amp;#34; # URL for Prometheus. Should look similar to &amp;#34;https://&amp;lt;prometheus host&amp;gt;/api/prom/push&amp;#34;.
  auth:
    type: basic
    username: &amp;#34;&amp;lt;username&amp;gt;&amp;#34;
    password: &amp;#34;&amp;lt;password&amp;gt;&amp;#34;

- name: &amp;#34;logs&amp;#34;
  type: loki
  url: &amp;#34;&amp;lt;url&amp;gt;&amp;#34; # URL for Loki. Should look similar to &amp;#34;https://&amp;lt;loki host&amp;gt;/loki/api/v1/push&amp;#34;.
  auth:
    type: basic
    username: &amp;#34;&amp;lt;username&amp;gt;&amp;#34; 
    password: &amp;#34;&amp;lt;password&amp;gt;&amp;#34;

podLogs:
    enabled: true
    gatherMethod: kubernetesApi
    namespaces: [traces] # Set to namespace from above under instances.
    collector: alloy-singleton

alloy-singleton:
    enabled: true

alloy-metrics:
    enabled: true # Sends Grafana Alloy metrics to ensure the monitoring is working properly.&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install the Helm chart using the following command to create Grafana Alloy instances to scrape metrics and logs:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;helm install k8s-monitoring grafana/k8s-monitoring \
--namespace monitoring \
--create-namespace \
-f values.yml&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that data is being sent to Grafana.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Log into Grafana.&lt;/li&gt;
&lt;li&gt;Select Metrics Drilldown and select &lt;code&gt;cluster=&amp;lt;cluster.name&amp;gt;&lt;/code&gt; where &lt;code&gt;cluster.name&lt;/code&gt; is the name specified in the &lt;code&gt;values.yml&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Do the same for Logs Drilldown.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This example doesn’t include ingestion of other data, such as traces sent to Tempo, but you can add it with some configuration updates.
Refer to 
    &lt;a href=&#34;/docs/tempo/v2.10.x/setup/set-up-test-app/&#34;&gt;Configure Alloy to remote-write to Tempo&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
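&lt;p&gt;You can also verify ingestion with a quick query against the Mimir data source in Explore. For example, assuming the cluster name &lt;code&gt;traces&lt;/code&gt; from the example &lt;code&gt;values.yml&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;tempo_build_info{cluster=&amp;#34;traces&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If metrics are flowing, this should return one series per running Tempo component.&lt;/p&gt;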
&lt;h2 id=&#34;install-tempo-dashboards-in-grafana&#34;&gt;Install Tempo dashboards in Grafana&lt;/h2&gt;
&lt;p&gt;Alloy scrapes metrics from Tempo and sends them to Mimir or another Prometheus compatible time-series database.
You can then monitor Tempo using the mixins.&lt;/p&gt;
&lt;p&gt;Tempo ships with mixins that include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Relevant dashboards for overseeing the health of Tempo as a whole, as well as its individual components&lt;/li&gt;
&lt;li&gt;Recording rules that simplify the generation of metrics for dashboards and free-form queries&lt;/li&gt;
&lt;li&gt;Alerts that trigger when Tempo falls out of operational parameters&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To install the mixins in Grafana, you need to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Download the mixin dashboards from the Tempo repository.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Import the dashboards in your Grafana instance.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Upload the &lt;code&gt;alerts.yaml&lt;/code&gt; and &lt;code&gt;rules.yaml&lt;/code&gt; files to Mimir or Prometheus.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;download-the-tempo-mixin-dashboards&#34;&gt;Download the &lt;code&gt;tempo-mixin&lt;/code&gt; dashboards&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;First, clone the Tempo repository from GitHub:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;Bash&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-bash&#34;&gt;git clone https://github.com/grafana/tempo.git&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once you have a local copy of the repository, navigate to the &lt;code&gt;operations/tempo-mixin-compiled&lt;/code&gt; directory.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;Bash&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-bash&#34;&gt;cd operations/tempo-mixin-compiled&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This directory contains a compiled version of the alert and recording rules, as well as the dashboards.&lt;/p&gt;
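&lt;p&gt;As a rough guide (check your local clone, since file names can change between releases), the directory layout looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-none&#34;&gt;operations/tempo-mixin-compiled/
  alerts.yaml      # alerting rules
  rules.yaml       # recording rules
  dashboards/      # dashboard JSON files to import into Grafana
&lt;/code&gt;&lt;/pre&gt;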


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;If you want to change any of the mixins, make your updates in the &lt;code&gt;operations/tempo-mixin&lt;/code&gt; directory.
Use the instructions in the &lt;a href=&#34;https://github.com/grafana/tempo/tree/main/operations/tempo-mixin&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;README&lt;/a&gt; in that directory to regenerate the files.
The mixins are generated in the &lt;code&gt;operations/tempo-mixin-compiled&lt;/code&gt; directory.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h3 id=&#34;import-the-dashboards-to-grafana&#34;&gt;Import the dashboards to Grafana&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;dashboards&lt;/code&gt; directory includes six monitoring dashboards that you can install in your Grafana instance.
Refer to &lt;a href=&#34;/docs/grafana/latest/dashboards/build-dashboards/import-dashboards/&#34;&gt;Import a dashboard&lt;/a&gt; in the Grafana documentation.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-tip&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Tip&lt;/p&gt;&lt;p&gt;Install all six dashboards.
You can only import one dashboard at a time.
Create a new folder in the Dashboards area, for example “Tempo Monitoring”, as an easy location to save the imported dashboards.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;To create a folder:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Open your Grafana instance and select &lt;strong&gt;Dashboards&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;New&lt;/strong&gt; in the right corner.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;New folder&lt;/strong&gt; from the &lt;strong&gt;New&lt;/strong&gt; drop-down.&lt;/li&gt;
&lt;li&gt;Name your folder, for example, “Tempo Monitoring”.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To import a dashboard:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Open your Grafana instance and select &lt;strong&gt;Dashboards&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;New&lt;/strong&gt; in the right corner.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Import&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Import dashboard&lt;/strong&gt; screen, select &lt;strong&gt;Upload.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Drag a dashboard file from &lt;code&gt;operations/tempo-mixin-compiled/dashboards&lt;/code&gt;, for example &lt;code&gt;tempo-operational.json&lt;/code&gt;, onto the &lt;strong&gt;Upload&lt;/strong&gt; area of the &lt;strong&gt;Import dashboard&lt;/strong&gt; screen. Alternatively, you can browse to and select the file.&lt;/li&gt;
&lt;li&gt;Select a folder in the &lt;strong&gt;Folder&lt;/strong&gt; drop-down where you want to save the imported dashboard. For example, select the Tempo Monitoring folder created in the earlier steps.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Import&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The imported files are listed in the Tempo Monitoring dashboard folder.&lt;/p&gt;
&lt;p&gt;To view the dashboards in Grafana:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Dashboards&lt;/strong&gt; in your Grafana instance.&lt;/li&gt;
&lt;li&gt;Select Tempo Monitoring, or the folder where you saved the imported dashboards.&lt;/li&gt;
&lt;li&gt;Select any file in the folder to view it.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The ‘Tempo Operational’ dashboard shows read (query) information:&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/tempo/screenshot-tempo-ops-dashboard.png&#34;
  alt=&#34;Tempo Operational dashboard&#34; width=&#34;1248&#34;
     height=&#34;955&#34;/&gt;&lt;/p&gt;
&lt;h3 id=&#34;add-alerts-and-rules-to-prometheus-or-mimir&#34;&gt;Add alerts and rules to Prometheus or Mimir&lt;/h3&gt;
&lt;p&gt;The rules and alerts need to be installed into your Mimir or Prometheus instance.
To do this in Prometheus, refer to the &lt;a href=&#34;https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;recording rules&lt;/a&gt; and &lt;a href=&#34;https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;alerting rules&lt;/a&gt; documentation.&lt;/p&gt;
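&lt;p&gt;For a plain Prometheus server, this amounts to referencing the two files from the &lt;code&gt;rule_files&lt;/code&gt; section of &lt;code&gt;prometheus.yml&lt;/code&gt;. The paths below are assumptions; use wherever you copy the files on the Prometheus host:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;# prometheus.yml (fragment); assumed file locations
rule_files:
  - /etc/prometheus/tempo/rules.yaml
  - /etc/prometheus/tempo/alerts.yaml
&lt;/code&gt;&lt;/pre&gt;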
&lt;p&gt;For Mimir, you can use &lt;a href=&#34;/docs/mimir/latest/manage/tools/mimirtool/&#34;&gt;mimirtool&lt;/a&gt; to upload &lt;a href=&#34;/docs/mimir/latest/manage/tools/mimirtool/#rules&#34;&gt;rule&lt;/a&gt; and &lt;a href=&#34;/docs/mimir/latest/manage/tools/mimirtool/#alertmanager&#34;&gt;alert&lt;/a&gt; configuration.
With a default installation of Mimir as the metrics store for the Alloy configuration, you might run the following:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;Bash&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-bash&#34;&gt;mimirtool rules load operations/tempo-mixin-compiled/rules.yaml --address=https://mimir-cluster.distributor.mimir.svc.cluster.local:9001

mimirtool alertmanager load operations/tempo-mixin-compiled/alerts.yaml --address=https://mimir-cluster.distributor.mimir.svc.cluster.local:9001&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;For Grafana Cloud, you need to add the username and API key as well.
Refer to the &lt;a href=&#34;/docs/mimir/latest/manage/tools/mimirtool/&#34;&gt;mimirtool&lt;/a&gt; documentation for more information.&lt;/p&gt;
&lt;/div&gt;

        
]]></content><description>&lt;h1 id="set-up-monitoring-for-tempo">Set up monitoring for Tempo&lt;/h1>
&lt;p>You can set up monitoring for Tempo using an existing or new cluster.
If you don&amp;rsquo;t have a cluster available, you can use the linked documentation to set up the Tempo, Mimir, and Grafana using Helm or you can use Grafana Cloud.&lt;/p></description></item><item><title>Use polling to monitor the backend status</title><link>https://grafana.com/docs/tempo/v2.10.x/operations/monitor/polling/</link><pubDate>Thu, 09 Apr 2026 14:59:14 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.10.x/operations/monitor/polling/</guid><content><![CDATA[&lt;h1 id=&#34;use-polling-to-monitor-the-backend-status&#34;&gt;Use polling to monitor the backend status&lt;/h1&gt;
&lt;p&gt;Tempo maintains knowledge of the state of the backend by polling it on regular intervals. There are
only a few components that need this knowledge: compactors, schedulers, workers, queriers and query-frontends.&lt;/p&gt;
&lt;p&gt;To reduce calls to the backend, only the compactors and workers perform a &amp;ldquo;full&amp;rdquo; poll against the backend and update the tenant indexes. This process lists all blocks for a given tenant and determines their state. The ring is used to split the work of writing the tenant indexes for all tenants.&lt;/p&gt;
&lt;p&gt;The remaining components only read the tenant index, falling back to a full poll if the index is too far out of date.&lt;/p&gt;
&lt;p&gt;For both the read and write of the tenant index, the update is performed once each &lt;code&gt;blocklist_poll&lt;/code&gt; duration.&lt;/p&gt;
&lt;p&gt;The index is written in two formats: a &lt;code&gt;gzip&lt;/code&gt;-compressed JSON object located at &lt;code&gt;/&amp;lt;tenant&amp;gt;/index.json.gz&lt;/code&gt; and a &lt;code&gt;zstd&lt;/code&gt;-compressed proto-encoded object located at &lt;code&gt;/&amp;lt;tenant&amp;gt;/index.pb.zst&lt;/code&gt;. Only the proto object is read, falling back to the JSON if the proto doesn&amp;rsquo;t exist, which should only happen during the transition to the new format. These indexes contain an entry for every block and compacted block for the tenant.&lt;/p&gt;
&lt;p&gt;Due to this behavior, a given poller always has a somewhat out-of-date blocklist.
During normal operation, the index is stale by at most twice the configured &lt;code&gt;blocklist_poll&lt;/code&gt; duration. An index that falls further out of date than this affects which blocks are queryable, and you may need to adjust the poller configuration to keep up with the size of the blocklist.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;For details about configuring polling, refer to &lt;a href=&#34;../../../configuration/polling/&#34;&gt;polling configuration&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;monitor-polling-with-dashboards-and-alerts&#34;&gt;Monitor polling with dashboards and alerts&lt;/h2&gt;
&lt;p&gt;Refer to the Jsonnet for example &lt;a href=&#34;https://github.com/grafana/tempo/blob/main/operations/tempo-mixin/alerts.libsonnet&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;alerts&lt;/a&gt; and &lt;a href=&#34;https://github.com/grafana/tempo/blob/main/operations/tempo-mixin/runbook.md&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;runbook entries&lt;/a&gt;
related to polling.&lt;/p&gt;
&lt;p&gt;If you are building your own dashboards or alerts, here are a few relevant metrics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;tempodb_blocklist_poll_errors_total&lt;/code&gt;
A holistic metric that increments for any error with polling the blocklist. Any increase in this metric should be reviewed.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tempodb_blocklist_poll_duration_seconds&lt;/code&gt;
Histogram recording the length of time in seconds to poll the entire blocklist.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tempodb_blocklist_length&lt;/code&gt;
Total blocks as seen by this component.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tempodb_blocklist_tenant_index_errors_total&lt;/code&gt;
A holistic metric that increments for any error building the tenant index. Any increase in this metric should be reviewed.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tempodb_blocklist_tenant_index_builder&lt;/code&gt;
A gauge that has the value 1 if this compactor is attempting to build the tenant index and 0 if it is not. At least one compactor
must have this value set to 1 for the system to be working.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tempodb_blocklist_tenant_index_age_seconds&lt;/code&gt;
The age of the last loaded tenant index. &lt;code&gt;now()&lt;/code&gt; minus this value indicates how stale this component&amp;rsquo;s view of the blocklist is.&lt;/li&gt;
&lt;/ul&gt;
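&lt;p&gt;As a starting point for an alert on a stale tenant index, you can compare the index age against the configured poll interval. This sketch assumes a &lt;code&gt;blocklist_poll&lt;/code&gt; of 5 minutes (300 seconds); substitute your own value:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;max by (tenant) (tempodb_blocklist_tenant_index_age_seconds) &amp;gt; 2 * 300
&lt;/code&gt;&lt;/pre&gt;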
]]></content><description>&lt;h1 id="use-polling-to-monitor-the-backend-status">Use polling to monitor the backend status&lt;/h1>
&lt;p>Tempo maintains knowledge of the state of the backend by polling it on regular intervals. There are
only a few components that need this knowledge: compactors, schedulers, workers, queriers and query-frontends.&lt;/p></description></item><item><title>Monitor query I/O and span timestamp distance</title><link>https://grafana.com/docs/tempo/v2.10.x/operations/monitor/query-io-and-timestamp-distance/</link><pubDate>Thu, 09 Apr 2026 14:59:14 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.10.x/operations/monitor/query-io-and-timestamp-distance/</guid><content><![CDATA[&lt;!-- markdownlint-disable MD025 --&gt;
&lt;h1 id=&#34;monitor-query-io-and-span-timestamp-distance&#34;&gt;Monitor query I/O and span timestamp distance&lt;/h1&gt;
&lt;p&gt;You can use these metrics to monitor query I/O and span timestamp quality:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;query_frontend_bytes_inspected_total&lt;/code&gt; measures how many bytes the frontend reads per request type and tenant. This value shows the total number of bytes read from disk and object storage.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spans_distance_in_future_seconds&lt;/code&gt; and &lt;code&gt;spans_distance_in_past_seconds&lt;/code&gt; measure how far a span end time is from the ingestion time. This capability lets you find customers that send spans too far in the future or past, which may not be found using the Search API.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use these metrics together to correlate query cost with data quality and pipeline health.&lt;/p&gt;
&lt;h2 id=&#34;reference&#34;&gt;Reference&lt;/h2&gt;
&lt;p&gt;The query frontend emits &lt;code&gt;query_frontend_bytes_inspected_total&lt;/code&gt; when a request finishes, aggregating bytes inspected by queriers.&lt;/p&gt;
&lt;p&gt;The distributor emits &lt;code&gt;spans_distance_in_future_seconds&lt;/code&gt; and &lt;code&gt;spans_distance_in_past_seconds&lt;/code&gt; by comparing span end time with ingestion time.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Names&lt;/th&gt;
              &lt;th&gt;Type&lt;/th&gt;
              &lt;th&gt;Labels&lt;/th&gt;
              &lt;th&gt;Buckets&lt;/th&gt;
              &lt;th&gt;Emitted&lt;/th&gt;
              &lt;th&gt;Notes&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;query_frontend_bytes_inspected_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;tenant&lt;/code&gt;, &lt;code&gt;op&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;-&lt;/td&gt;
              &lt;td&gt;On request completion at the query frontend; aggregates bytes from queriers; excludes cached querier responses.&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;spans_distance_in_future_seconds&lt;/code&gt;, &lt;code&gt;spans_distance_in_past_seconds&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Histogram&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;tenant&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;300s, 1800s, 3600s (5m, 30m, 1h)&lt;/td&gt;
              &lt;td&gt;In the distributor on ingest; observes seconds between span end time and ingestion time.&lt;/td&gt;
              &lt;td&gt;Spans in the future are accepted but invalid and might not be searchable.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h2 id=&#34;promql-examples&#34;&gt;PromQL examples&lt;/h2&gt;
&lt;p&gt;To see how frequently future-dated spans arrive by tenant, use the histogram count rate:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;promql&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-promql&#34;&gt;sum by (tenant) (
  rate(tempo_spans_distance_in_future_seconds_count[5m])
)&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
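&lt;p&gt;Because the histogram buckets top out at 3600s, you can also estimate the share of observed spans that arrive more than an hour in the future. This is a sketch that assumes the bucket label is emitted as &lt;code&gt;le=&#34;3600&#34;&lt;/code&gt;; some Prometheus client libraries emit &lt;code&gt;le=&#34;3600.0&#34;&lt;/code&gt; instead:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;promql&lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-promql&#34;&gt;1 - (
  sum by (tenant) (rate(tempo_spans_distance_in_future_seconds_bucket{le=&#34;3600&#34;}[5m]))
  /
  sum by (tenant) (rate(tempo_spans_distance_in_future_seconds_count[5m]))
)&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;A value near zero means almost all observed spans fall within the one-hour bucket.&lt;/p&gt;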
&lt;p&gt;Inspect query read throughput (&lt;code&gt;bytes/s&lt;/code&gt;) by tenant and operation:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;promql&lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-promql&#34;&gt;sum by (tenant, op) (
  rate(tempo_query_frontend_bytes_inspected_total[5m])
)&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
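&lt;p&gt;To see which operation types dominate read I/O, divide each &lt;code&gt;op&lt;/code&gt; rate by the overall rate. This is a sketch; the exact &lt;code&gt;op&lt;/code&gt; label values depend on your Tempo version and query mix:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;promql&lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-promql&#34;&gt;# share of inspected bytes attributed to each operation type
sum by (op) (rate(tempo_query_frontend_bytes_inspected_total[5m]))
/ on () group_left ()
sum(rate(tempo_query_frontend_bytes_inspected_total[5m]))&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;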
&lt;p&gt;Top five tenants by inspected GiB over the last hour:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;promql&lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-promql&#34;&gt;topk(
  5,
  sum by (tenant) (increase(tempo_query_frontend_bytes_inspected_total[1h])) / 1024 / 1024 / 1024
)&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To quantify ingestion delay using the past-distance histogram, chart the P90 over time:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;promql&lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-promql&#34;&gt;histogram_quantile(
  0.9,
  sum by (tenant, le) (
    rate(tempo_spans_distance_in_past_seconds_bucket[15m])
  )
)&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
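&lt;p&gt;Alongside the P90, you can chart the mean ingestion delay using the standard Prometheus histogram convention of dividing the &lt;code&gt;_sum&lt;/code&gt; series by the &lt;code&gt;_count&lt;/code&gt; series:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;promql&lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-promql&#34;&gt;sum by (tenant) (rate(tempo_spans_distance_in_past_seconds_sum[15m]))
/
sum by (tenant) (rate(tempo_spans_distance_in_past_seconds_count[15m]))&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The mean reacts to outliers more than the P90, so a large gap between the two usually indicates a small set of very late spans.&lt;/p&gt;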
]]></content><description>
&lt;h1 id="monitor-query-io-and-span-timestamp-distance">Monitor query I/O and span timestamp distance&lt;/h1>
&lt;p>You can use these metrics to monitor query I/O and span timestamp quality:&lt;/p>
&lt;ul>
&lt;li>&lt;code>query_frontend_bytes_inspected_total&lt;/code> measures how many bytes the query frontend reads per request type and tenant, covering the total bytes read from disk and object storage.&lt;/li>
&lt;li>&lt;code>spans_distance_in_future_seconds&lt;/code> and &lt;code>spans_distance_in_past_seconds&lt;/code> measure how far a span end time is from the ingestion time. These metrics let you identify tenants that send spans too far in the future or past; such spans might not be returned by the Search API.&lt;/li>
&lt;/ul>
&lt;p>Use these metrics together to correlate query cost with data quality and pipeline health.&lt;/p></description></item></channel></rss>