<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Issues with sending traces on Grafana Labs</title><link>https://grafana.com/docs/tempo/v2.10.x/troubleshooting/send-traces/</link><description>Recent content in Issues with sending traces on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/tempo/v2.10.x/troubleshooting/send-traces/index.xml" rel="self" type="application/rss+xml"/><item><title>Distributor refusing spans</title><link>https://grafana.com/docs/tempo/v2.10.x/troubleshooting/send-traces/max-trace-limit-reached/</link><pubDate>Thu, 09 Apr 2026 14:59:14 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.10.x/troubleshooting/send-traces/max-trace-limit-reached/</guid><content><![CDATA[&lt;h1 id=&#34;distributor-refusing-spans&#34;&gt;Distributor refusing spans&lt;/h1&gt;
&lt;p&gt;The two most likely causes of refused spans are unhealthy ingesters or trace limits being exceeded.&lt;/p&gt;
&lt;p&gt;To log spans that are discarded, add the &lt;code&gt;--distributor.log_discarded_spans.enabled&lt;/code&gt; flag to the distributor or
adjust the 
    &lt;a href=&#34;/docs/tempo/v2.10.x/configuration/#distributor&#34;&gt;distributor configuration&lt;/a&gt;:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;distributor:
  log_discarded_spans:
    enabled: true
    include_all_attributes: false # set to true for more verbose logs&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Adding the flag logs all discarded spans, as shown below:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;level=info ts=2024-08-19T16:06:25.880684385Z caller=distributor.go:767 msg=discarded spanid=c2ebe710d2e2ce7a traceid=bd63605778e3dbe935b05e6afd291006
level=info ts=2024-08-19T16:06:25.881169385Z caller=distributor.go:767 msg=discarded spanid=5352b0cb176679c8 traceid=ba41cae5089c9284e18bca08fbf10ca2&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h2 id=&#34;unhealthy-ingesters&#34;&gt;Unhealthy ingesters&lt;/h2&gt;
&lt;p&gt;Unhealthy ingesters can be caused by OOM (out-of-memory) kills or scale-down events.
If you have unhealthy ingesters, your log line looks something like this:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;msg=&amp;#34;pusher failed to consume trace data&amp;#34; err=&amp;#34;at least 2 live replicas required, could only find 1&amp;#34;&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In this case, you may need to visit the ingester 
    &lt;a href=&#34;/docs/tempo/v2.10.x/operations/manage-advanced-systems/consistent_hash_ring/&#34;&gt;ring page&lt;/a&gt; at &lt;code&gt;/ingester/ring&lt;/code&gt; on the distributors
and &amp;ldquo;Forget&amp;rdquo; the unhealthy ingesters.
This works in the short term, but the long-term fix is to stabilize your ingesters.&lt;/p&gt;
&lt;h2 id=&#34;trace-limits-reached&#34;&gt;Trace limits reached&lt;/h2&gt;
&lt;p&gt;In high-volume tracing environments, the default trace limits are sometimes not sufficient.
These limits exist to protect Tempo from OOMs and crashes, and to prevent tenants from overwhelming each other.
If spans are being refused due to limits, you&amp;rsquo;ll see logs like this at the distributor:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;msg=&amp;#34;pusher failed to consume trace data&amp;#34; err=&amp;#34;rpc error: code = FailedPrecondition desc = TRACE_TOO_LARGE: max size of trace (52428800) exceeded while adding 15632 bytes to trace a0fbd6f9ac5e2077d90a19551dd67b6f for tenant single-tenant&amp;#34;
msg=&amp;#34;pusher failed to consume trace data&amp;#34; err=&amp;#34;rpc error: code = FailedPrecondition desc = LIVE_TRACES_EXCEEDED: max live traces per tenant exceeded: per-user traces limit (local: 60000 global: 0 actual local: 60000) exceeded&amp;#34;
msg=&amp;#34;pusher failed to consume trace data&amp;#34; err=&amp;#34;rpc error: code = ResourceExhausted desc = RATE_LIMITED: ingestion rate limit (15000000 bytes) exceeded while adding 10 bytes&amp;#34;&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You&amp;rsquo;ll also see the following metric incremented. The &lt;code&gt;reason&lt;/code&gt; label on this metric indicates why the spans were refused.&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;tempo_discarded_spans_total&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
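&lt;p&gt;For example, to see refusals broken down by the &lt;code&gt;reason&lt;/code&gt; label, you can run a query along these lines (a sketch; adjust the range and any additional label selectors for your setup):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;sum by (reason) (rate(tempo_discarded_spans_total[5m]))&lt;/code&gt;&lt;/pre&gt;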
&lt;p&gt;In this case, use available configuration options to 
    &lt;a href=&#34;/docs/tempo/v2.10.x/configuration/#ingestion-limits&#34;&gt;increase limits&lt;/a&gt;.&lt;/p&gt;
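&lt;p&gt;As an illustration, per-tenant limits can be raised in the &lt;code&gt;overrides&lt;/code&gt; block. The keys and values below are examples only; check the ingestion limits reference for the exact options in your version:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;overrides:
  defaults:
    ingestion:
      rate_limit_bytes: 30000000    # example value; ingestion rate per tenant
      burst_size_bytes: 40000000    # example value
      max_traces_per_user: 100000   # example value; live traces per tenant
    global:
      max_bytes_per_trace: 52428800 # example value; maximum size of a single trace&lt;/code&gt;&lt;/pre&gt;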
&lt;h2 id=&#34;client-resets-connection&#34;&gt;Client resets connection&lt;/h2&gt;
&lt;p&gt;When the client resets the connection before the distributor can consume the trace data, you see logs like this:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;msg=&amp;#34;pusher failed to consume trace data&amp;#34; err=&amp;#34;context canceled&amp;#34;&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This issue needs to be fixed on the client side. To inspect which clients are causing the issue, logging discarded spans
with &lt;code&gt;include_all_attributes: true&lt;/code&gt; may help.&lt;/p&gt;
&lt;p&gt;Note that there may be other reasons for a closed context as well. Identifying the reason for a closed context is
not straightforward and may require additional debugging.&lt;/p&gt;
]]></content><description>&lt;h1 id="distributor-refusing-spans">Distributor refusing spans&lt;/h1>
&lt;p>The two most likely causes of refused spans are unhealthy ingesters or trace limits being exceeded.&lt;/p>
&lt;p>To log spans that are discarded, add the &lt;code>--distributor.log_discarded_spans.enabled&lt;/code> flag to the distributor or
adjust the
&lt;a href="/docs/tempo/v2.10.x/configuration/#distributor">distributor configuration&lt;/a>:&lt;/p></description></item><item><title>Troubleshoot Grafana Alloy</title><link>https://grafana.com/docs/tempo/v2.10.x/troubleshooting/send-traces/alloy/</link><pubDate>Thu, 09 Apr 2026 14:59:14 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.10.x/troubleshooting/send-traces/alloy/</guid><content><![CDATA[&lt;h1 id=&#34;troubleshoot-grafana-alloy&#34;&gt;Troubleshoot Grafana Alloy&lt;/h1&gt;
&lt;p&gt;Sometimes it can be difficult to tell what, if anything, Grafana Alloy is sending along to the backend.
This document covers a few techniques to gain visibility into how many trace spans are pushed to Alloy and whether they&amp;rsquo;re making it to the backend.
The &lt;a href=&#34;https://github.com/open-telemetry/opentelemetry-collector&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;OpenTelemetry Collector&lt;/a&gt; forms the basis of the tracing pipeline, which
does a fantastic job of logging network and other issues.&lt;/p&gt;
&lt;p&gt;If your logs are showing no obvious errors, one of the following suggestions may help.&lt;/p&gt;
&lt;h2 id=&#34;metrics&#34;&gt;Metrics&lt;/h2&gt;
&lt;p&gt;Alloy publishes a few Prometheus metrics that are useful for determining how much trace traffic it receives and successfully forwards.
These metrics are a good place to start when diagnosing Alloy tracing issues.&lt;/p&gt;
&lt;p&gt;From the 
    &lt;a href=&#34;/docs/alloy/v2.10.x/reference/components/otelcol/otelcol.receiver.otlp/&#34;&gt;&lt;code&gt;otelcol.receiver.otlp&lt;/code&gt;&lt;/a&gt; component:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;receiver_accepted_spans_ratio_total
receiver_refused_spans_ratio_total&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;From the 
    &lt;a href=&#34;/docs/alloy/v2.10.x/reference/components/otelcol/otelcol.exporter.otlp/&#34;&gt;&lt;code&gt;otelcol.exporter.otlp&lt;/code&gt;&lt;/a&gt; component:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;exporter_sent_spans_ratio_total
exporter_send_failed_spans_ratio_total&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
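&lt;p&gt;For instance, comparing the receive rate against the send-failure rate can show whether spans are being dropped inside Alloy. These queries are a sketch; adjust ranges and label selectors for your setup:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;rate(receiver_accepted_spans_ratio_total[1m])
rate(exporter_send_failed_spans_ratio_total[1m])&lt;/code&gt;&lt;/pre&gt;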
&lt;p&gt;Alloy exposes its component and controller metrics on the &lt;code&gt;/metrics&lt;/code&gt; endpoint of its HTTP server. You can check them locally by opening a browser to &lt;code&gt;http://localhost:12345/metrics&lt;/code&gt;.
Refer to the 
    &lt;a href=&#34;/docs/alloy/v2.10.x/troubleshoot/controller_metrics/&#34;&gt;Monitor the Grafana Alloy component controller&lt;/a&gt; documentation for more information.&lt;/p&gt;
&lt;h3 id=&#34;check-metrics-in-grafana-cloud&#34;&gt;Check metrics in Grafana Cloud&lt;/h3&gt;
&lt;p&gt;In your Grafana Cloud instance, you can check metrics using the &lt;code&gt;grafanacloud-usage&lt;/code&gt; data source.
To view the metrics, use the following steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;From your Grafana instance, select &lt;strong&gt;Explore&lt;/strong&gt; in the left menu.&lt;/li&gt;
&lt;li&gt;Change the data source to &lt;code&gt;grafanacloud-usage&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Type the metric to verify in the text box. If you start with &lt;code&gt;grafanacloud_traces_&lt;/code&gt;, you can use autocomplete to browse the list of available metrics.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Refer to &lt;a href=&#34;/docs/grafana-cloud/cost-management-and-billing/understand-your-invoice/usage-limits/#cloud-traces-usage&#34;&gt;Cloud Traces usage metrics&lt;/a&gt; for a list of metrics related to tracing usage.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/tempo/screenshot-tempo-trouble-metrics-search.png&#34;
  alt=&#34;Use Explore to check the metrics for traces sent to Grafana Cloud&#34; width=&#34;954&#34;
     height=&#34;463&#34;/&gt;&lt;/p&gt;
&lt;h2 id=&#34;trace-span-logging&#34;&gt;Trace span logging&lt;/h2&gt;
&lt;p&gt;If metrics and logs are looking good, but you are still unable to find traces in Grafana Cloud, you can configure Alloy to output all the traces it receives to the 
    &lt;a href=&#34;/docs/tempo/v2.10.x/configuration/grafana-alloy/automatic-logging/&#34;&gt;console&lt;/a&gt;.&lt;/p&gt;
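&lt;p&gt;As a minimal sketch, one way to do this is to attach a debug exporter to the traces pipeline. The component label and &lt;code&gt;verbosity&lt;/code&gt; value here are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-alloy&#34;&gt;otelcol.exporter.debug &#34;default&#34; {
  verbosity = &#34;detailed&#34;
}&lt;/code&gt;&lt;/pre&gt;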
]]></content><description>&lt;h1 id="troubleshoot-grafana-alloy">Troubleshoot Grafana Alloy&lt;/h1>
&lt;p>Sometimes it can be difficult to tell what, if anything, Grafana Alloy is sending along to the backend.
This document covers a few techniques to gain visibility into how many trace spans are pushed to Alloy and whether they&amp;rsquo;re making it to the backend.
The &lt;a href="https://github.com/open-telemetry/opentelemetry-collector" target="_blank" rel="noopener noreferrer">OpenTelemetry Collector&lt;/a> forms the basis of the tracing pipeline, which
does a fantastic job of logging network and other issues.&lt;/p></description></item></channel></rss>