<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Metrics from traces on Grafana Labs</title><link>https://grafana.com/docs/tempo/v2.10.x/metrics-from-traces/</link><description>Recent content in Metrics from traces on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/tempo/v2.10.x/metrics-from-traces/index.xml" rel="self" type="application/rss+xml"/><item><title>TraceQL metrics</title><link>https://grafana.com/docs/tempo/v2.10.x/metrics-from-traces/metrics-queries/</link><pubDate>Thu, 09 Apr 2026 14:59:14 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.10.x/metrics-from-traces/metrics-queries/</guid><content><![CDATA[&lt;h1 id=&#34;traceql-metrics&#34;&gt;TraceQL metrics&lt;/h1&gt;
&lt;!-- Using a custom admonition because no feature flag is required. --&gt;


&lt;div data-shared=&#34;traceql-metrics-admonition.md&#34;&gt;


&lt;div class=&#34;admonition admonition-caution&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Caution&lt;/p&gt;&lt;p&gt;TraceQL metrics is a &lt;a href=&#34;/docs/release-life-cycle/&#34;&gt;public preview feature&lt;/a&gt;. Grafana Labs offers limited support, and breaking changes might occur prior to the feature being made generally available.
TraceQL metrics are enabled by default in Grafana Cloud.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;/div&gt;

        
&lt;p&gt;TraceQL metrics is a feature in Grafana Tempo that creates metrics from traces.&lt;/p&gt;
&lt;p&gt;Metric queries extend trace queries by applying a function to trace query results.
This powerful feature allows for ad hoc aggregation of any existing TraceQL query by any dimension available in your
traces, much in the same way that LogQL metric queries create metrics from logs.&lt;/p&gt;
&lt;p&gt;Traces are a unique observability signal that contain causal relationships between the components in your system.&lt;/p&gt;
&lt;p&gt;TraceQL metrics can help answer questions such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;How many database calls across all systems are downstream of your application?&lt;/li&gt;
&lt;li&gt;What services beneath a given endpoint are failing?&lt;/li&gt;
&lt;li&gt;What services beneath an endpoint are slow?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;TraceQL metrics can help you answer these questions by parsing your traces in aggregate.&lt;/p&gt;
&lt;p&gt;TraceQL metrics are powered by
the 
    &lt;a href=&#34;/docs/tempo/v2.10.x/api_docs/#traceql-metrics&#34;&gt;TraceQL metrics API&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/tempo/metrics-explore-sapmle-v2.7.png&#34;
  alt=&#34;Metrics visualization in Grafana&#34; width=&#34;1271&#34;
     height=&#34;899&#34;/&gt;&lt;/p&gt;
&lt;h2 id=&#34;red-metrics-traceql-and-promql&#34;&gt;RED metrics, TraceQL, and PromQL&lt;/h2&gt;
&lt;p&gt;RED is an acronym for three types of metrics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Rate, the number of requests per second&lt;/li&gt;
&lt;li&gt;Errors, the number of those requests that are failing&lt;/li&gt;
&lt;li&gt;Duration, the amount of time those requests take&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more information about the RED method, refer
to &lt;a href=&#34;/blog/2018/08/02/the-red-method-how-to-instrument-your-services/&#34;&gt;The RED Method: how to instrument your services&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can write TraceQL metrics queries to compute rate, errors, and durations over different groups of spans.&lt;/p&gt;
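&lt;p&gt;For example, the following queries sketch one way to compute each RED metric with TraceQL. The service name &lt;code&gt;api&lt;/code&gt; is a placeholder for one of your own services:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-none&#34;&gt;{ resource.service.name = &amp;#34;api&amp;#34; } | rate()
{ status = error } | rate() by (resource.service.name)
{ resource.service.name = &amp;#34;api&amp;#34; } | quantile_over_time(duration, .99)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The first query gives the request rate for one service, the second gives the error rate broken down by service, and the third gives the 99th percentile span duration.&lt;/p&gt;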
&lt;p&gt;For more information on how to use TraceQL metrics to investigate issues, refer
to 
    &lt;a href=&#34;/docs/tempo/v2.10.x/solutions-with-traces/solve-problems-metrics-queries/&#34;&gt;Solve problems with metrics queries&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;enable-and-use-traceql-metrics&#34;&gt;Enable and use TraceQL metrics&lt;/h2&gt;
&lt;p&gt;To use TraceQL metrics, you need to enable them on your Tempo database.
Refer to 
    &lt;a href=&#34;/docs/tempo/v2.10.x/operations/traceql-metrics/&#34;&gt;Configure TraceQL metrics&lt;/a&gt; for
more information.&lt;/p&gt;
&lt;p&gt;From there, you can either query the TraceQL metrics API directly (for example, with &lt;code&gt;curl&lt;/code&gt;) or use Grafana
(recommended).
To run TraceQL metrics queries in Grafana, you need Grafana Cloud or Grafana 10.4 or later.
No extra configuration is needed.
Use a Tempo data source that points to a Tempo database with TraceQL metrics enabled.&lt;/p&gt;
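&lt;p&gt;As a sketch, assuming Tempo listens on its default HTTP port &lt;code&gt;3200&lt;/code&gt; on &lt;code&gt;localhost&lt;/code&gt;, a direct API query with &lt;code&gt;curl&lt;/code&gt; might look like this; check the TraceQL metrics API reference for the exact endpoint and parameters in your Tempo version:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-none&#34;&gt;curl -G http://localhost:3200/api/metrics/query_range --data-urlencode &amp;#34;q={ status = error } | rate()&amp;#34;&lt;/code&gt;&lt;/pre&gt;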
&lt;p&gt;Refer to 
    &lt;a href=&#34;/docs/tempo/v2.10.x/solutions-with-traces/solve-problems-metrics-queries/&#34;&gt;Solve problems using metrics queries&lt;/a&gt; for some real-world examples.&lt;/p&gt;
&lt;h3 id=&#34;functions&#34;&gt;Functions&lt;/h3&gt;
&lt;p&gt;TraceQL metrics queries currently include the following functions for aggregating over groups of spans: &lt;code&gt;rate&lt;/code&gt;,
&lt;code&gt;count_over_time&lt;/code&gt;, &lt;code&gt;sum_over_time&lt;/code&gt;, &lt;code&gt;max_over_time&lt;/code&gt;, &lt;code&gt;min_over_time&lt;/code&gt;, &lt;code&gt;avg_over_time&lt;/code&gt;, &lt;code&gt;quantile_over_time&lt;/code&gt;,
&lt;code&gt;histogram_over_time&lt;/code&gt;, and &lt;code&gt;compare&lt;/code&gt;.
These functions can be added as an operator at the end of any TraceQL query.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;topk&lt;/code&gt; and &lt;code&gt;bottomk&lt;/code&gt; functions can be applied to the results of TraceQL metrics functions.&lt;/p&gt;
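&lt;p&gt;For example, to chart only the ten busiest services, you could group a rate query by service name and keep the top series. This query is an illustrative sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-none&#34;&gt;{ } | rate() by (resource.service.name) | topk(10)&lt;/code&gt;&lt;/pre&gt;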
&lt;p&gt;For detailed information and example queries for each function, refer to 
    &lt;a href=&#34;/docs/tempo/v2.10.x/metrics-from-traces/metrics-queries/functions/&#34;&gt;TraceQL metrics functions&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;exemplars&#34;&gt;Exemplars&lt;/h3&gt;
&lt;p&gt;Exemplars are a powerful feature of TraceQL metrics.
They allow you to see an exact trace that contributed to a given metric value.
This is particularly useful when you want to understand why a given metric is high or low.&lt;/p&gt;
&lt;p&gt;Exemplars are available in TraceQL metrics for all range queries.
To get exemplars, configure them in the query-frontend with the
&lt;code&gt;query_frontend.metrics.max_exemplars&lt;/code&gt; parameter,
or pass a query hint in your query.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;{ span:name = &amp;#34;GET /:endpoint&amp;#34; } | quantile_over_time(duration, .99) by (span.http.target) with (exemplars=true)&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
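&lt;p&gt;In the query-frontend configuration, the corresponding setting might be sketched like this; the value &lt;code&gt;100&lt;/code&gt; is an illustrative choice, not a recommended default:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;query_frontend:
  metrics:
    max_exemplars: 100&lt;/code&gt;&lt;/pre&gt;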
]]></content><description>&lt;h1 id="traceql-metrics">TraceQL metrics&lt;/h1>
&lt;!-- Using a custom admonition because no feature flag is required. -->
&lt;div data-shared="traceql-metrics-admonition.md">
&lt;!-- Using a custom admonition because no feature flag is required. -->
&lt;div class="admonition admonition-caution">&lt;blockquote>&lt;p class="title text-uppercase">Caution&lt;/p></description></item><item><title>Metrics-generator</title><link>https://grafana.com/docs/tempo/v2.10.x/metrics-from-traces/metrics-generator/</link><pubDate>Thu, 09 Apr 2026 14:59:14 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.10.x/metrics-from-traces/metrics-generator/</guid><content><![CDATA[&lt;h1 id=&#34;metrics-generator&#34;&gt;Metrics-generator&lt;/h1&gt;
&lt;p&gt;Metrics-generator is an optional Tempo component that derives metrics from ingested traces.
If present, the distributor writes received spans to both the ingester and the metrics-generator.
The metrics-generator processes spans and writes metrics to a Prometheus data source using the Prometheus remote write protocol.&lt;/p&gt;
&lt;h2 id=&#34;architecture&#34;&gt;Architecture&lt;/h2&gt;
&lt;p&gt;The metrics-generator uses the data already available in Tempo&amp;rsquo;s ingest path to generate metrics from traces.&lt;/p&gt;
&lt;p&gt;The metrics-generator internally runs a set of &lt;strong&gt;processors&lt;/strong&gt;.
Each processor ingests spans and produces metrics.
Every processor derives different metrics. Currently, the following processors are available:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Service graphs&lt;/li&gt;
&lt;li&gt;Span metrics&lt;/li&gt;
&lt;li&gt;Local blocks&lt;/li&gt;
&lt;/ul&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;tempo-metrics-gen-overview.svg&#34; alt=&#34;Service metrics architecture&#34;&gt;&lt;/p&gt;
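&lt;p&gt;As a sketch, the processors are enabled per tenant in the overrides block of the Tempo configuration. The exact keys depend on your Tempo version, so treat this fragment as illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;overrides:
  defaults:
    metrics_generator:
      processors: [service-graphs, span-metrics, local-blocks]&lt;/code&gt;&lt;/pre&gt;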
&lt;h3 id=&#34;service-graphs&#34;&gt;Service graphs&lt;/h3&gt;
&lt;p&gt;Service graphs are the representations of the relationships between services within a distributed system.&lt;/p&gt;
&lt;p&gt;The service graphs processor builds a map of services by analyzing traces, with the objective of finding &lt;em&gt;edges&lt;/em&gt;.
Edges are spans with a parent-child relationship that represent a jump (for example, a request) between two services.
The number of requests and their durations are recorded as metrics, which are used to represent the graph.&lt;/p&gt;
&lt;p&gt;To learn more about this processor, refer to the 
    &lt;a href=&#34;/docs/tempo/v2.10.x/metrics-from-traces/service_graphs/&#34;&gt;service graph&lt;/a&gt; documentation.&lt;/p&gt;
&lt;h3 id=&#34;span-metrics&#34;&gt;Span metrics&lt;/h3&gt;
&lt;p&gt;The span metrics processor derives RED (Request, Error, and Duration) metrics from spans.&lt;/p&gt;
&lt;p&gt;The span metrics processor computes the total count and the duration of spans for every unique combination of dimensions.
Dimensions can be the service name, the operation, the span kind, the status code, and any tag or attribute present in the span.
The more dimensions you enable, the higher the cardinality of the generated metrics.&lt;/p&gt;
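&lt;p&gt;For example, with the default dimensions the processor emits series along these lines; the label values are placeholders, and exact metric and label names can vary by Tempo version:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-none&#34;&gt;traces_spanmetrics_calls_total{service=&amp;#34;api&amp;#34;, span_name=&amp;#34;GET /users&amp;#34;, span_kind=&amp;#34;SPAN_KIND_SERVER&amp;#34;, status_code=&amp;#34;STATUS_CODE_OK&amp;#34;} 42&lt;/code&gt;&lt;/pre&gt;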
&lt;p&gt;To learn more about this processor, refer to the 
    &lt;a href=&#34;/docs/tempo/v2.10.x/metrics-from-traces/span-metrics/&#34;&gt;span metrics&lt;/a&gt; documentation.&lt;/p&gt;
&lt;h3 id=&#34;local-blocks&#34;&gt;Local blocks&lt;/h3&gt;
&lt;p&gt;The local blocks processor stores spans for a set period of time and
enables more complex APIs to perform calculations on the data. The processor must be
enabled for certain metrics APIs to function.&lt;/p&gt;
&lt;h2 id=&#34;remote-writing-metrics&#34;&gt;Remote writing metrics&lt;/h2&gt;
&lt;p&gt;The metrics-generator runs a Prometheus Agent that periodically sends metrics to a &lt;code&gt;remote_write&lt;/code&gt; endpoint.
The &lt;code&gt;remote_write&lt;/code&gt; endpoint is configurable and can be any &lt;a href=&#34;https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Prometheus-compatible endpoint&lt;/a&gt;.
To learn more about the endpoint configuration, refer to the 
    &lt;a href=&#34;/docs/tempo/v2.10.x/configuration/#metrics-generator&#34;&gt;Metrics-generator&lt;/a&gt; section of the Tempo Configuration documentation.
The writing interval can be controlled with &lt;code&gt;metrics_generator.registry.collection_interval&lt;/code&gt;.&lt;/p&gt;
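&lt;p&gt;As an illustrative sketch of these settings, where the URL, path, and interval are placeholders for your own values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;metrics_generator:
  registry:
    collection_interval: 15s
  storage:
    path: /var/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write&lt;/code&gt;&lt;/pre&gt;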
&lt;p&gt;When multi-tenancy is enabled, the metrics-generator forwards the &lt;code&gt;X-Scope-OrgID&lt;/code&gt; header of the original request to the &lt;code&gt;remote_write&lt;/code&gt; endpoint. This feature can be disabled by setting &lt;code&gt;remote_write_add_org_id_header&lt;/code&gt; to false.&lt;/p&gt;
&lt;h2 id=&#34;native-histograms&#34;&gt;Native histograms&lt;/h2&gt;
&lt;p&gt;
    &lt;a href=&#34;/docs/mimir/v2.10.x/visualize/native-histograms/&#34;&gt;Native histograms&lt;/a&gt; are a data type in Prometheus that can produce, store, and query high-resolution histograms of observations.
They usually offer higher resolution and more straightforward instrumentation than classic histograms.&lt;/p&gt;
&lt;p&gt;The metrics-generator supports the ability to produce native histograms for
high-resolution data. Users must 
    &lt;a href=&#34;/docs/mimir/v2.10.x/configure/configure-native-histograms-ingestion/&#34;&gt;update the receiving endpoint&lt;/a&gt; to ingest native
histograms, and 
    &lt;a href=&#34;/docs/mimir/v2.10.x/visualize/native-histograms/&#34;&gt;update histogram queries&lt;/a&gt; in their dashboards.&lt;/p&gt;
&lt;p&gt;To learn more about the configuration, refer to the 
    &lt;a href=&#34;/docs/tempo/v2.10.x/configuration/#metrics-generator&#34;&gt;Metrics-generator&lt;/a&gt; section of the Tempo Configuration documentation.&lt;/p&gt;
&lt;h2 id=&#34;use-metrics-generator-in-grafana-cloud&#34;&gt;Use metrics-generator in Grafana Cloud&lt;/h2&gt;
&lt;p&gt;If you want to enable metrics-generator for your Grafana Cloud account, refer to the &lt;a href=&#34;/docs/grafana-cloud/send-data/traces/metrics-generator/&#34;&gt;Metrics-generator in Grafana Cloud&lt;/a&gt; documentation.&lt;/p&gt;
&lt;p&gt;Enabling metrics generation and remote writing the resulting metrics to Grafana Cloud Metrics produces extra active series, which could impact your billing.
For more information on billing, refer to &lt;a href=&#34;/docs/grafana-cloud/cost-management-and-billing/understand-your-invoice/&#34;&gt;Understand your invoice&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;multitenancy&#34;&gt;Multitenancy&lt;/h2&gt;
&lt;p&gt;Tempo supports multitenancy in the metrics-generator through the use of environment variables and per-tenant overrides.
Refer to the &lt;a href=&#34;multitenancy/&#34;&gt;Multitenant Support for Metrics-Generator&lt;/a&gt; documentation for more information.&lt;/p&gt;
]]></content><description>&lt;h1 id="metrics-generator">Metrics-generator&lt;/h1>
&lt;p>Metrics-generator is an optional Tempo component that derives metrics from ingested traces.
If present, the distributor writes received spans to both the ingester and the metrics-generator.
The metrics-generator processes spans and writes metrics to a Prometheus data source using the Prometheus remote write protocol.&lt;/p></description></item><item><title>Span metrics</title><link>https://grafana.com/docs/tempo/v2.10.x/metrics-from-traces/span-metrics/</link><pubDate>Thu, 09 Apr 2026 14:59:14 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.10.x/metrics-from-traces/span-metrics/</guid><content><![CDATA[&lt;h1 id=&#34;span-metrics&#34;&gt;Span metrics&lt;/h1&gt;
&lt;p&gt;Span metrics are generated from traces and can be used to create service graphs.
You can create span metrics by enabling the feature in metrics-generator or using Grafana Alloy.&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;
    &lt;a href=&#34;/docs/tempo/v2.10.x/metrics-from-traces/span-metrics/span-metrics-metrics-generator/&#34;&gt;Use the span metrics processor&lt;/a&gt;&lt;br&gt;The span metrics processor generates metrics from ingested tracing data, including request, error, and duration (RED) metrics.&lt;/li&gt;&lt;li&gt;
    &lt;a href=&#34;/docs/tempo/v2.10.x/metrics-from-traces/span-metrics/span-metrics-alloy/&#34;&gt;Use Alloy to generate span metrics from spans&lt;/a&gt;&lt;br&gt;Span metrics allow you to generate metrics from your tracing data automatically.&lt;/li&gt;&lt;/ul&gt;
]]></content><description>&lt;h1 id="span-metrics">Span metrics&lt;/h1>
&lt;p>Span metrics are generated from traces and can be used to create service graphs.
You can create span metrics by enabling the feature in metrics-generator or using Grafana Alloy.&lt;/p></description></item><item><title>Service graphs</title><link>https://grafana.com/docs/tempo/v2.10.x/metrics-from-traces/service_graphs/</link><pubDate>Thu, 09 Apr 2026 14:59:14 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.10.x/metrics-from-traces/service_graphs/</guid><content><![CDATA[&lt;h1 id=&#34;service-graphs&#34;&gt;Service graphs&lt;/h1&gt;
&lt;p&gt;A service graph is a visual representation of the interrelationships between various services.
Service graphs help you to understand the structure of a distributed system,
and the connections and dependencies between its components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Infer the topology of a distributed system.&lt;/strong&gt;
As distributed systems grow, they become more complex.
Service graphs help you to understand the structure of the system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Provide a high-level overview of the health of your system.&lt;/strong&gt;
Service graphs display error rates, latencies, as well as other relevant data.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Provide a historic view of a system&amp;rsquo;s topology.&lt;/strong&gt;
Distributed systems change very frequently,
and service graphs offer a way of seeing how these systems have evolved over time.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Service graphs can be generated from metrics created by the metrics-generator or Grafana Alloy.
Refer to 
    &lt;a href=&#34;/docs/tempo/v2.10.x/metrics-from-traces/service_graphs/enable-service-graphs/&#34;&gt;Enable service graphs&lt;/a&gt; for more information on how to enable service graphs in Tempo.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/grafana/data-sources/tempo/query-editor/tempo-ds-query-service-graph-prom.png&#34;
  alt=&#34;Service graph&#34; width=&#34;1396&#34;
     height=&#34;1075&#34;/&gt;&lt;/p&gt;
&lt;h2 id=&#34;how-they-work&#34;&gt;How they work&lt;/h2&gt;
&lt;p&gt;The metrics-generator and Grafana Alloy both process traces and generate service graphs in the form of Prometheus metrics.&lt;/p&gt;
&lt;p&gt;Service graphs work by inspecting traces and looking for spans with a parent-child relationship that represent a request.
The processor uses the &lt;a href=&#34;https://github.com/open-telemetry/semantic-conventions/blob/main/docs/general/trace.md&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;OpenTelemetry semantic conventions&lt;/a&gt; to detect many kinds of requests.&lt;/p&gt;
&lt;p&gt;The processor supports the following request types:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A direct request between two services, where the outgoing and incoming spans must have &lt;a href=&#34;https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#spankind&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;&lt;code&gt;span.kind&lt;/code&gt;&lt;/a&gt; set to &lt;code&gt;client&lt;/code&gt; and &lt;code&gt;server&lt;/code&gt;, respectively.&lt;/li&gt;
&lt;li&gt;A request across a messaging system, where the outgoing and incoming spans must have &lt;code&gt;span.kind&lt;/code&gt; set to &lt;code&gt;producer&lt;/code&gt; and &lt;code&gt;consumer&lt;/code&gt;, respectively.&lt;/li&gt;
&lt;li&gt;A database request, where the processor looks for spans with &lt;code&gt;span.kind&lt;/code&gt;=&lt;code&gt;client&lt;/code&gt; as well as at least one of the &lt;code&gt;db.namespace&lt;/code&gt;, &lt;code&gt;db.name&lt;/code&gt;, or &lt;code&gt;db.system&lt;/code&gt; attributes. Refer to the virtual nodes section for how the node name is determined for a database request.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The processor keeps every span that can form a request pair in an in-memory store until the corresponding pair span arrives or the maximum waiting time passes.
When either condition occurs, the processor records the request and removes it from the local store.&lt;/p&gt;
&lt;p&gt;Each emitted metric series has &lt;code&gt;client&lt;/code&gt; and &lt;code&gt;server&lt;/code&gt; labels that correspond to the service making the request and the service receiving it.&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;  traces_service_graph_request_total{client=&amp;#34;app&amp;#34;, server=&amp;#34;db&amp;#34;, connection_type=&amp;#34;database&amp;#34;} 20&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h3 id=&#34;virtual-nodes&#34;&gt;Virtual nodes&lt;/h3&gt;
&lt;p&gt;Virtual nodes are nodes that form part of the lifecycle of a trace,
but spans for them aren&amp;rsquo;t collected because they&amp;rsquo;re outside the user&amp;rsquo;s reach or aren&amp;rsquo;t instrumented.
For example, you might not collect spans for an external service for payment processing that&amp;rsquo;s outside user interaction.&lt;/p&gt;
&lt;p&gt;The processor detects virtual nodes in two ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Uninstrumented client (missing client span):&lt;/strong&gt; The root span has &lt;code&gt;span.kind&lt;/code&gt; set to &lt;code&gt;server&lt;/code&gt; or &lt;code&gt;consumer&lt;/code&gt;, with no matching client span. This indicates that the request or message was initiated by an external system that isn&amp;rsquo;t instrumented, like a scheduler, a frontend application, or an engineer using &lt;code&gt;curl&lt;/code&gt;.
&lt;ul&gt;
&lt;li&gt;In the Tempo metrics-generator, the processor checks the configured &lt;code&gt;peer_attributes&lt;/code&gt; on the server span first. If it finds a matching attribute, it uses that value as the client node name. Otherwise, the client node name defaults to &lt;code&gt;user&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;In Grafana Alloy and the OpenTelemetry Collector &lt;code&gt;servicegraph&lt;/code&gt; connector, the connector doesn&amp;rsquo;t evaluate peer attributes for this case. The client node name always defaults to &lt;code&gt;user&lt;/code&gt; and you can&amp;rsquo;t override it. An &lt;a href=&#34;https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/45397&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;upstream feature request&lt;/a&gt; exists to add this capability.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Uninstrumented server (missing server span):&lt;/strong&gt; A &lt;code&gt;client&lt;/code&gt; span doesn&amp;rsquo;t have its matching &lt;code&gt;server&lt;/code&gt; span, but has a peer attribute present. In this case, the client called an external service that doesn&amp;rsquo;t send spans. The processor uses the peer attribute value as the virtual server node name.
&lt;ul&gt;
&lt;li&gt;The default peer attributes are &lt;code&gt;peer.service&lt;/code&gt;, &lt;code&gt;db.name&lt;/code&gt;, and &lt;code&gt;db.system&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The processor searches the attributes in order and uses the first match as the virtual node name.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The processor identifies a database node when the span has at least one &lt;code&gt;db.namespace&lt;/code&gt;, &lt;code&gt;db.name&lt;/code&gt;, or &lt;code&gt;db.system&lt;/code&gt; attribute.&lt;/p&gt;
&lt;p&gt;The processor determines the database node name using the following span attributes in order of precedence: &lt;code&gt;peer.service&lt;/code&gt;, &lt;code&gt;server.address&lt;/code&gt;, &lt;code&gt;network.peer.address:network.peer.port&lt;/code&gt;, &lt;code&gt;db.namespace&lt;/code&gt;, &lt;code&gt;db.name&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id=&#34;metrics&#34;&gt;Metrics&lt;/h3&gt;
&lt;p&gt;The following metrics are exported:&lt;/p&gt;
&lt;!-- vale Grafana.Spelling = NO --&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Metric&lt;/th&gt;
              &lt;th&gt;Type&lt;/th&gt;
              &lt;th&gt;Labels&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;traces_service_graph_request_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Total count of requests between two nodes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;traces_service_graph_request_failed_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Total count of failed requests between two nodes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;traces_service_graph_request_server_seconds&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Histogram&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Time for a request between two nodes as seen from the server&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;traces_service_graph_request_client_seconds&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Histogram&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Time for a request between two nodes as seen from the client&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;traces_service_graph_request_messaging_system_seconds&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Histogram&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;(Off by default) Time between publisher and consumer for services communicating through a messaging system&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;traces_service_graph_unpaired_spans_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Total count of unpaired spans&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;traces_service_graph_dropped_spans_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Total count of dropped spans&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;!-- vale Grafana.Spelling = YES --&gt;
&lt;p&gt;The processor measures duration from both the client and server sides.&lt;/p&gt;
&lt;p&gt;Possible values for &lt;code&gt;connection_type&lt;/code&gt;: unset, &lt;code&gt;virtual_node&lt;/code&gt;, &lt;code&gt;messaging_system&lt;/code&gt;, or &lt;code&gt;database&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You can include additional labels using the &lt;code&gt;dimensions&lt;/code&gt; configuration option or the &lt;code&gt;enable_virtual_node_label&lt;/code&gt; option.&lt;/p&gt;
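&lt;p&gt;Because these are standard Prometheus series, you can query them with PromQL. For example, the following sketch computes the error ratio for each edge of the graph over five minutes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-none&#34;&gt;sum by (client, server) (rate(traces_service_graph_request_failed_total[5m]))
/
sum by (client, server) (rate(traces_service_graph_request_total[5m]))&lt;/code&gt;&lt;/pre&gt;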
&lt;p&gt;Because the service graphs processor has to see both sides of an edge,
it needs to process all spans of a trace to function properly.
If the spans of a trace are spread across multiple instances, the processor can&amp;rsquo;t pair them reliably.&lt;/p&gt;
&lt;h4 id=&#34;activate-enable_virtual_node_label&#34;&gt;Activate &lt;code&gt;enable_virtual_node_label&lt;/code&gt;&lt;/h4&gt;
&lt;p&gt;Activating this feature adds the following label and corresponding values:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Label&lt;/th&gt;
              &lt;th&gt;Possible Values&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;virtual_node&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;unset&lt;/code&gt;, &lt;code&gt;client&lt;/code&gt;, &lt;code&gt;server&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Explicitly indicates the uninstrumented side&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;]]></content><description>&lt;h1 id="service-graphs">Service graphs&lt;/h1>
&lt;p>A service graph is a visual representation of the interrelationships between various services.
Service graphs help you to understand the structure of a distributed system,
and the connections and dependencies between its components:&lt;/p></description></item></channel></rss>