<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Grafana Agent on Grafana Labs</title><link>https://grafana.com/docs/tempo/v2.3.x/configuration/grafana-agent/</link><description>Recent content in Grafana Agent on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/tempo/v2.3.x/configuration/grafana-agent/index.xml" rel="self" type="application/rss+xml"/><item><title>Automatic logging: Trace discovery through logs</title><link>https://grafana.com/docs/tempo/v2.3.x/configuration/grafana-agent/automatic-logging/</link><pubDate>Wed, 11 Mar 2026 18:13:39 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.3.x/configuration/grafana-agent/automatic-logging/</guid><content><![CDATA[&lt;h1 id=&#34;automatic-logging-trace-discovery-through-logs&#34;&gt;Automatic logging: Trace discovery through logs&lt;/h1&gt;
&lt;p&gt;Running instrumented distributed systems is a powerful way to gain
insight into a system, but it brings its own challenges. One of them is
discovering which traces exist.&lt;/p&gt;
&lt;p&gt;In Tempo&amp;rsquo;s early days, querying for a trace was only possible if you knew
the ID of the trace you were looking for. One solution was automatic logging.
Automatic logging provides an easy and fast way of discovering trace IDs
through log messages. Well-formatted log lines are written to a Loki instance
or to &lt;code&gt;stdout&lt;/code&gt; for each span, root span, or process that passes through the tracing
pipeline. This provides a ready-made mechanism for trace discovery. It also lets you
derive metrics from traces using Loki and jump quickly from a log message
to the trace view in Grafana.&lt;/p&gt;
&lt;p&gt;While this approach is useful, it isn&amp;rsquo;t as powerful as &lt;a href=&#34;../../../traceql/&#34;&gt;TraceQL&lt;/a&gt;. If you want to log the
trace ID to enable jumping from logs to traces, read on!&lt;/p&gt;
&lt;p&gt;If you want to query the system directly, read the &lt;a href=&#34;../../../traceql/&#34;&gt;TraceQL
documentation&lt;/a&gt;.  We doubt you&amp;rsquo;ll
be sad.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;For high throughput systems, logging for every span may generate too much volume.
In such cases, logging per root span or process is recommended.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../automatic-logging.png&#34; alt=&#34;Automatic logging overview&#34;&gt;&lt;/p&gt;
&lt;p&gt;Automatic logging searches for a given set of attributes in the spans and logs them as key-value pairs.
This allows searching by those key-value pairs in Loki.&lt;/p&gt;
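&lt;p&gt;For instance, the following sketch uses the &lt;code&gt;span_attributes&lt;/code&gt; option from the configuration reference to log two attributes with each root span; the attribute names here are only examples:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;traces:
  configs:
  - name: default
    ...
    automatic_logging:
      backend: stdout
      roots: true
      # Example attribute names; use the attributes your spans actually carry.
      span_attributes:
        - http.method
        - http.target&lt;/code&gt;&lt;/pre&gt;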
&lt;h2 id=&#34;before-you-begin&#34;&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;To configure automatic logging, you need to select your preferred backend and the trace data to log.&lt;/p&gt;
&lt;p&gt;To see all the available config options, refer to the &lt;a href=&#34;/docs/agent/latest/configuration/traces-config/&#34;&gt;configuration reference&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This simple example logs trace roots to stdout and is a good way to get started using automatic logging:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;traces:
  configs:
  - name: default
    ...
    automatic_logging:
      backend: stdout
      roots: true&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This example pushes logs directly to a Loki instance also configured in the same Grafana Agent.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;traces:
  configs:
  - name: default
    ...
    automatic_logging:
      backend: logs_instance
      logs_instance_name: default
      roots: true&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
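&lt;p&gt;The &lt;code&gt;logs_instance_name&lt;/code&gt; refers to an instance defined in the Agent&amp;rsquo;s &lt;code&gt;logs&lt;/code&gt; section. A minimal sketch, with a hypothetical Loki address, might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;logs:
  configs:
    - name: default
      clients:
        # Hypothetical Loki push endpoint; replace with your own.
        - url: http://loki:3100/loki/api/v1/push&lt;/code&gt;&lt;/pre&gt;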
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../automatic-logging-example-query.png&#34; alt=&#34;Automatic logging overview&#34;&gt;&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../automatic-logging-example-results.png&#34; alt=&#34;Automatic logging overview&#34;&gt;&lt;/p&gt;
]]></content><description>&lt;h1 id="automatic-logging-trace-discovery-through-logs">Automatic logging: Trace discovery through logs&lt;/h1>
&lt;p>Running instrumented distributed systems is a powerful way to gain
insight into a system, but it brings its own challenges. One of them is
discovering which traces exist.&lt;/p></description></item><item><title>Enable service graphs</title><link>https://grafana.com/docs/tempo/v2.3.x/configuration/grafana-agent/service-graphs/</link><pubDate>Wed, 11 Mar 2026 18:13:39 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.3.x/configuration/grafana-agent/service-graphs/</guid><content><![CDATA[&lt;h1 id=&#34;enable-service-graphs&#34;&gt;Enable service graphs&lt;/h1&gt;
&lt;p&gt;A service graph is a visual representation of the interrelationships between various services.
Service graphs help to understand the structure of a distributed system,
and the connections and dependencies between its components.&lt;/p&gt;
&lt;p&gt;The same service graph metrics can also be generated by Tempo.
This is more efficient and recommended for larger installations.
For a deep look into service graphs, refer to the &lt;a href=&#34;../../../metrics-generator/service_graphs/&#34;&gt;service graphs documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Service graphs are also used in the application performance management dashboard.
For more information, refer to the &lt;a href=&#34;../../../metrics-generator/service-graph-view/&#34;&gt;service graph view documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;before-you-begin&#34;&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Service graphs are generated in the Grafana Agent and pushed to a Prometheus-compatible backend.
Once generated, they can be visualized in Grafana as a graph.
You need these components to fully use service graphs.&lt;/p&gt;
&lt;h3 id=&#34;enable-service-graphs-in-grafana-agent&#34;&gt;Enable service graphs in Grafana Agent&lt;/h3&gt;
&lt;p&gt;To start using service graphs, enable the feature in the Grafana Agent config.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;traces:
  configs:
    - name: default
      ...
      service_graphs:
        enabled: true&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To see all the available config options, refer to the &lt;a href=&#34;/docs/agent/latest/configuration/traces-config/&#34;&gt;configuration reference&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Metrics are registered in the Agent&amp;rsquo;s default registerer.
Therefore, they are exposed at &lt;code&gt;/metrics&lt;/code&gt; on the Agent&amp;rsquo;s server port (default 12345).
One option is to use the Agent self-scrape capabilities to export the metrics to a Prometheus-compatible backend.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;metrics:
  configs:
    - name: default
      scrape_configs:
        - job_name: local_scrape
          static_configs:
            - targets: [&amp;#39;127.0.0.1:12345&amp;#39;]
      remote_write:
        - url: &amp;lt;remote_write&amp;gt;&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
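&lt;p&gt;Once the metrics reach the backend, you can graph the relationships in Grafana. For example, assuming the generated metrics include &lt;code&gt;traces_service_graph_request_total&lt;/code&gt; with &lt;code&gt;client&lt;/code&gt; and &lt;code&gt;server&lt;/code&gt; labels, a query along these lines shows the request rate between each pair of services:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;# Per-edge request rate over the last five minutes.
sum by (client, server) (rate(traces_service_graph_request_total[5m]))&lt;/code&gt;&lt;/pre&gt;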
&lt;h3 id=&#34;grafana&#34;&gt;Grafana&lt;/h3&gt;
&lt;p&gt;For additional information about viewing service graph metrics in Grafana and calculating cardinality, refer to the &lt;a href=&#34;../../../metrics-generator/service_graphs/enable-service-graphs/&#34;&gt;server side documentation&lt;/a&gt;.&lt;/p&gt;
]]></content><description>&lt;h1 id="enable-service-graphs">Enable service graphs&lt;/h1>
&lt;p>A service graph is a visual representation of the interrelationships between various services.
Service graphs help to understand the structure of a distributed system,
and the connections and dependencies between its components.&lt;/p></description></item><item><title>Generate metrics from spans</title><link>https://grafana.com/docs/tempo/v2.3.x/configuration/grafana-agent/span-metrics/</link><pubDate>Wed, 11 Mar 2026 18:13:39 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.3.x/configuration/grafana-agent/span-metrics/</guid><content><![CDATA[&lt;h1 id=&#34;generate-metrics-from-spans&#34;&gt;Generate metrics from spans&lt;/h1&gt;
&lt;p&gt;Span metrics allow you to generate metrics from your tracing data automatically.
Span metrics aggregate request, error, and duration (RED) metrics from span data.
Metrics are exported in Prometheus format.&lt;/p&gt;
&lt;p&gt;There are two options for exporting metrics: remote write to a Prometheus-compatible backend, or serving the metrics locally and scraping them.&lt;/p&gt;
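&lt;p&gt;As a sketch, assuming the &lt;code&gt;spanmetrics&lt;/code&gt; block and its &lt;code&gt;metrics_instance&lt;/code&gt; and &lt;code&gt;handler_endpoint&lt;/code&gt; options from the configuration reference, the two export options look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;traces:
  configs:
    - name: default
      ...
      spanmetrics:
        # Option 1: remote write through a metrics instance defined
        # in the same Agent.
        metrics_instance: default
        # Option 2: serve the metrics locally and scrape them.
        # handler_endpoint: 0.0.0.0:8889&lt;/code&gt;&lt;/pre&gt;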
&lt;p&gt;Span metrics generate two metrics: a counter that counts requests, and a histogram that records operation durations.&lt;/p&gt;
&lt;p&gt;Span metrics are of particular interest if your system is not monitored with metrics
but does have distributed tracing implemented:
you get out-of-the-box metrics from your tracing pipeline.&lt;/p&gt;
&lt;p&gt;Even if you already have metrics, span metrics can provide in-depth monitoring of your system.
The generated metrics add application-level insight to your monitoring,
reaching as deep as tracing is propagated through your applications.&lt;/p&gt;
&lt;p&gt;Span metrics are also used in the service graph view.
For more information, refer to the &lt;a href=&#34;../../../metrics-generator/service-graph-view/&#34;&gt;service graph view&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;server-side-metrics&#34;&gt;Server-side metrics&lt;/h2&gt;
&lt;p&gt;The same span metrics can also be generated by Tempo.
This is more efficient and recommended for larger installations.
For a deep look into span metrics, refer to the &lt;a href=&#34;../../../metrics-generator/span_metrics/&#34;&gt;span metrics documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;example&#34;&gt;Example&lt;/h2&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../../../metrics-generator/span-metrics-example.png&#34; alt=&#34;Span metrics overview&#34;&gt;&lt;/p&gt;
]]></content><description>&lt;h1 id="generate-metrics-from-spans">Generate metrics from spans&lt;/h1>
&lt;p>Span metrics allow you to generate metrics from your tracing data automatically.
Span metrics aggregate request, error, and duration (RED) metrics from span data.
Metrics are exported in Prometheus format.&lt;/p></description></item><item><title>Tail-based sampling</title><link>https://grafana.com/docs/tempo/v2.3.x/configuration/grafana-agent/tail-based-sampling/</link><pubDate>Wed, 11 Mar 2026 18:13:39 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.3.x/configuration/grafana-agent/tail-based-sampling/</guid><content><![CDATA[&lt;h1 id=&#34;tail-based-sampling&#34;&gt;Tail-based sampling&lt;/h1&gt;
&lt;p&gt;Tempo aims to provide an inexpensive solution that makes 100% sampling possible.
However, sometimes constraints make a lower sampling percentage necessary or desirable,
such as runtime or egress traffic related costs.
Probabilistic sampling strategies are easy to implement,
but also run the risk of discarding relevant data that you&amp;rsquo;ll later want.&lt;/p&gt;
&lt;p&gt;Tail-based sampling works with Grafana Agent in Flow or static modes.
Flow mode configuration files are &lt;a href=&#34;/docs/agent/latest/flow/config-language/&#34;&gt;written in River&lt;/a&gt;.
Static mode configuration files are &lt;a href=&#34;/docs/agent/latest/static/configuration/&#34;&gt;written in YAML&lt;/a&gt;.
Examples in this document are for Flow mode. You can also use the &lt;a href=&#34;/docs/agent/latest/operator/&#34;&gt;Static mode Kubernetes operator&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;how-tail-based-sampling-works&#34;&gt;How tail-based sampling works&lt;/h2&gt;
&lt;p&gt;In tail-based sampling, sampling decisions are made at the end of the workflow, allowing for a more accurate decision.
The Grafana Agent groups spans by trace ID and checks the trace&amp;rsquo;s data to see
if it meets one of the defined policies (for example, &lt;code&gt;latency&lt;/code&gt; or &lt;code&gt;status_code&lt;/code&gt;).
For instance, a policy can check if a trace contains an error or if it took
longer than a certain duration.&lt;/p&gt;
&lt;p&gt;A trace is sampled if it meets at least one policy.&lt;/p&gt;
&lt;p&gt;To group spans by trace ID, the Agent buffers spans for a configurable amount of time,
after which it considers the trace complete.
Traces that run longer than this are split into more than one trace.
However, longer wait times increase the memory overhead of buffering.&lt;/p&gt;
&lt;p&gt;One particular challenge of grouping trace data arises in multi-instance Agent deployments,
where spans that belong to the same trace can arrive at different Agents.
To solve this, you can configure the Agent to load balance traces across Agent instances
by exporting spans that belong to the same trace to the same instance.&lt;/p&gt;
&lt;p&gt;This is achieved by redistributing spans by trace ID once they arrive from the application.
The Agent must be able to discover and connect to other Agent instances where spans for the same trace can arrive.
Kubernetes users should use a &lt;a href=&#34;https://kubernetes.io/docs/concepts/services-networking/service/#headless-services&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;headless service&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Redistributing spans by trace ID means that spans are sent and received twice,
which can cause a significant increase in CPU usage.
This overhead increases with the number of Agent instances that share the same traces.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../tail-based-sampling.png&#34; alt=&#34;Tail-based sampling overview&#34;&gt;&lt;/p&gt;
&lt;h2 id=&#34;quickstart&#34;&gt;Quickstart&lt;/h2&gt;
&lt;p&gt;To start using tail-based sampling, define a sampling policy.
If you&amp;rsquo;re using a multi-instance deployment of the Agent,
add load balancing and specify the resolving mechanism used to find the other Agents in the setup.
To see all the available configuration options, refer to the &lt;a href=&#34;/docs/agent/latest/configuration/traces-config/&#34;&gt;configuration reference&lt;/a&gt;.&lt;/p&gt;
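&lt;p&gt;As a static-mode sketch, assuming the &lt;code&gt;load_balancing&lt;/code&gt; block from the configuration reference and a hypothetical headless service name:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;traces:
  configs:
    - name: default
      ...
      tail_sampling:
        policies:
          - type: status_code
            status_code:
              status_codes:
                - ERROR
      load_balancing:
        resolver:
          dns:
            # Hypothetical headless service resolving to all Agent instances.
            hostname: grafana-agent-headless.default.svc.cluster.local&lt;/code&gt;&lt;/pre&gt;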
&lt;h2 id=&#34;example-for-grafana-agent-flow&#34;&gt;Example for Grafana Agent Flow&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;/docs/agent/latest/flow/&#34;&gt;Grafana Agent Flow&lt;/a&gt; is a component-based revision of Grafana Agent with a focus on ease-of-use, debuggability, and ability to adapt to the needs of power users.
Flow configuration files are written in River instead of YAML.&lt;/p&gt;
&lt;p&gt;Grafana Agent Flow uses the &lt;a href=&#34;/docs/agent/latest/flow/reference/components/otelcol.processor.tail_sampling/&#34;&gt;&lt;code&gt;otelcol.processor.tail_sampling&lt;/code&gt;&lt;/a&gt; component for tail-based sampling.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-alloy&#34;&gt;otelcol.receiver.otlp &amp;#34;otlp_receiver&amp;#34; {
    grpc {
        endpoint = &amp;#34;0.0.0.0:4317&amp;#34;
    }

    output {
        traces = [
            otelcol.processor.tail_sampling.policies.input,
        ]
    }
}

otelcol.exporter.otlp &amp;#34;tempo&amp;#34; {
    client {
        endpoint = &amp;#34;tempo:4317&amp;#34;
    }
}

// The Tail Sampling processor will use a set of policies to determine which received
// traces to keep and send to Tempo.
otelcol.processor.tail_sampling &amp;#34;policies&amp;#34; {
    // Total wait time from the start of a trace before making a sampling decision.
    // Note that smaller time periods can potentially cause a decision to be made
    // before the end of a trace has occurred.
    decision_wait = &amp;#34;30s&amp;#34;

    // The following policies follow a logical OR pattern, meaning that if any of the
    // policies match, the trace will be kept. For logical AND, you can use the `and`
    // policy. Every span of a trace is examined by each policy in turn. A match will
    // cause a short-circuit.

    // This policy defines that traces that contain errors should be kept.
    policy {
        // The name of the policy can be used for logging purposes.
        name = &amp;#34;sample-erroring-traces&amp;#34;
        // The type must match the type of policy to be used, in this case examining
        // the status code of every span in the trace.
        type = &amp;#34;status_code&amp;#34;
        // This block determines the error codes that should match in order to keep
        // the trace, in this case the OpenTelemetry &amp;#39;ERROR&amp;#39; code.
        status_code {
            status_codes = [ &amp;#34;ERROR&amp;#34; ]
        }
    }

    // This policy defines that only traces that are longer than 200ms in total
    // should be kept.
    policy {
        // The name of the policy can be used for logging purposes.
        name = &amp;#34;sample-long-traces&amp;#34;
        // The type must match the policy to be used, in this case the total latency
        // of the trace.
        type = &amp;#34;latency&amp;#34;
        // This block determines the total length of the trace in milliseconds.
        latency {
            threshold_ms = 200
        }
    }

    // The output block forwards the kept traces to the OTLP exporter,
    // which sends them to Tempo.
    output {
        traces = [otelcol.exporter.otlp.tempo.input]
    }
}&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h2 id=&#34;examples-for-grafana-agent-static-mode&#34;&gt;Examples for Grafana Agent static mode&lt;/h2&gt;
&lt;p&gt;For additional information, refer to the blog post, &lt;a href=&#34;/blog/2022/05/11/an-introduction-to-trace-sampling-with-grafana-tempo-and-grafana-agent&#34;&gt;An introduction to trace sampling with Grafana Tempo and Grafana Agent&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;status-code-tail-sampling-policy&#34;&gt;Status code tail sampling policy&lt;/h3&gt;
&lt;p&gt;The following policy only samples traces where at least one span contains an OpenTelemetry Error status code.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;traces:
  configs:
    - name: default
      ...
      tail_sampling:
        policies:
          - type: status_code
            status_code:
              status_codes:
                - ERROR&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h3 id=&#34;span-attribute-tail-sampling-policy&#34;&gt;Span attribute tail sampling policy&lt;/h3&gt;
&lt;p&gt;The following policy only samples traces where the value of the span attribute &lt;code&gt;http.target&lt;/code&gt; is &lt;em&gt;neither&lt;/em&gt; &lt;code&gt;/healthcheck&lt;/code&gt; nor prefixed with &lt;code&gt;/metrics/&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;traces:
  configs:
    - name: default
      tail_sampling:
        policies:
          - type: string_attribute
            string_attribute:
              key: http.target
              values:
                - ^\/(?:metrics\/.*|healthcheck)$
              enabled_regex_matching: true
              invert_match: true&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h3 id=&#34;and-compound-tail-sampling-policy&#34;&gt;And compound tail sampling policy&lt;/h3&gt;
&lt;p&gt;The following policy only samples traces where all of the conditions of its sub-policies are met. In this case, it combines the prior two policies: a trace is sampled only if the span attribute &lt;code&gt;http.target&lt;/code&gt; is neither &lt;code&gt;/healthcheck&lt;/code&gt; nor prefixed with &lt;code&gt;/metrics/&lt;/code&gt;, &lt;em&gt;and&lt;/em&gt; at least one span of the trace contains an OpenTelemetry Error status code.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;traces:
  configs:
    - name: default
      tail_sampling:
        policies:
          - type: and
            and_sub_policy:
              - name: and_tag_policy
                type: string_attribute
                string_attribute:
                  key: http.target
                  values:
                    - ^\/(?:metrics\/.*|healthcheck)$
                  enabled_regex_matching: true
                  invert_match: true
              - name: and_error_policy
                type: status_code
                status_code:
                  status_codes:
                    - ERROR&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
]]></content><description>&lt;h1 id="tail-based-sampling">Tail-based sampling&lt;/h1>
&lt;p>Tempo aims to provide an inexpensive solution that makes 100% sampling possible.
However, sometimes constraints make a lower sampling percentage necessary or desirable,
such as runtime or egress traffic related costs.
Probabilistic sampling strategies are easy to implement,
but also run the risk of discarding relevant data that you&amp;rsquo;ll later want.&lt;/p></description></item></channel></rss>