
Tempo architecture

This topic provides an overview of the major components of Tempo. Refer to the example setups or deployment options for help deploying.

Diagram: Tempo architecture

Tempo comprises the following top-level components.

Distributor

The distributor accepts spans in multiple formats, including Jaeger, OpenTelemetry, and Zipkin. It routes spans to ingesters by hashing the trace ID and using a distributed consistent hash ring. The distributor uses the receiver layer from the OpenTelemetry Collector. For best performance, it is recommended to ingest OTel Proto; for this reason, the Grafana Agent uses the OTLP exporter/receiver to send spans to Tempo.
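
The Go sketch below illustrates the consistent-hashing idea only; it is not Tempo's ring implementation, and all names and types in it are made up for the example. A trace ID is hashed to a token, and the ingester owning the next token on the ring receives the spans.

package main

import (
    "fmt"
    "hash/fnv"
    "sort"
)

// ingesterToken pairs a ring token with the ingester that owns it.
type ingesterToken struct {
    token    uint32
    ingester string
}

// hashTraceID reduces a trace ID to a 32-bit token, similar in spirit to
// how a distributor hashes keys onto a consistent hash ring.
func hashTraceID(traceID []byte) uint32 {
    h := fnv.New32a()
    h.Write(traceID)
    return h.Sum32()
}

// lookup returns the ingester owning the first ring token at or after the
// trace's token, wrapping around the ring if necessary.
func lookup(ring []ingesterToken, traceID []byte) string {
    t := hashTraceID(traceID)
    i := sort.Search(len(ring), func(i int) bool { return ring[i].token >= t })
    if i == len(ring) {
        i = 0 // wrap around to the start of the ring
    }
    return ring[i].ingester
}

func main() {
    // A toy ring with three ingesters, each owning one token.
    ring := []ingesterToken{
        {token: 1 << 30, ingester: "ingester-0"},
        {token: 2 << 30, ingester: "ingester-1"},
        {token: 3 << 30, ingester: "ingester-2"},
    }
    fmt.Println(lookup(ring, []byte("0123456789abcdef")))
}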

Ingester

The ingester batches traces into blocks, creates bloom filters and indexes, and then flushes them to the backend. Blocks in the backend are generated in the following layout:

<bucketname> / <tenantID> / <blockID> / <meta.json>
                                      / <index>
                                      / <data>
                                      / <bloom_0>
                                      / <bloom_1>
                                        ...
                                      / <bloom_n>
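
As a rough sketch of how these paths compose, the Go snippet below builds the object-storage keys for a single block. The bucket, tenant, and block ID values are placeholders, and the file names simply mirror the layout above.

package main

import (
    "fmt"
    "path"
)

// blockObjectKeys lists the object-storage keys a block would occupy under
// the layout above: meta.json, index, data, and one or more bloom shards.
func blockObjectKeys(bucket, tenantID, blockID string, bloomShards int) []string {
    prefix := path.Join(bucket, tenantID, blockID)
    keys := []string{
        path.Join(prefix, "meta.json"),
        path.Join(prefix, "index"),
        path.Join(prefix, "data"),
    }
    for i := 0; i < bloomShards; i++ {
        keys = append(keys, path.Join(prefix, fmt.Sprintf("bloom_%d", i)))
    }
    return keys
}

func main() {
    // Placeholder bucket, tenant, and block ID.
    for _, k := range blockObjectKeys("tempo-traces", "single-tenant", "00000000-0000-0000-0000-000000000000", 3) {
        fmt.Println(k)
    }
}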

Query Frontend

The Query Frontend is responsible for sharding the search space for an incoming query.

Traces are exposed via a simple HTTP endpoint: GET /api/traces/<traceID>
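
For example, fetching a trace by ID through the Query Frontend looks like the following Go sketch. It assumes Tempo's HTTP API is reachable on localhost at port 3200 and that a trace with the given (made-up) ID exists.

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Illustrative trace ID and endpoint; adjust the host and port to your setup.
    traceID := "2f3e0cee77ae5dc9c17ade3689eb2e54"
    resp, err := http.Get("http://localhost:3200/api/traces/" + traceID)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.Status)
    fmt.Println(len(body), "bytes of trace data")
}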

Internally, the Query Frontend splits the blockID space into a configurable number of shards and queues these requests. Queriers connect to the Query Frontend via a streaming gRPC connection to process these sharded queries.
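
The sketch below conveys the sharding idea in simplified form: split the blockID (UUID) space into contiguous ranges, one per shard. It is not the actual Query Frontend code, which shards the full ID space; this version partitions only the first byte.

package main

import "fmt"

// shardBounds splits the first byte of the blockID space into shardCount
// contiguous ranges.
func shardBounds(shardCount int) [][2]byte {
    bounds := make([][2]byte, 0, shardCount)
    step := 256 / shardCount
    for i := 0; i < shardCount; i++ {
        lo := byte(i * step)
        hi := byte((i+1)*step - 1)
        if i == shardCount-1 {
            hi = 0xff // last shard absorbs any remainder
        }
        bounds = append(bounds, [2]byte{lo, hi})
    }
    return bounds
}

func main() {
    for i, b := range shardBounds(4) {
        fmt.Printf("shard %d: blockIDs with first byte %02x..%02x\n", i, b[0], b[1])
    }
}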

Querier

The querier is responsible for finding the requested trace ID in either the ingesters or the backend storage. Depending on parameters, it queries the ingesters and pulls bloom filters and indexes from the backend to search blocks in object storage.
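
A simplified Go sketch of the block-search flow follows: the bloom filter cheaply rules out most blocks, and the index is consulted only for the remaining candidates. The types are toy stand-ins, not Tempo's storage structures.

package main

import "fmt"

// block is a toy stand-in for a backend block: a bloom filter plus an
// index from trace ID to the trace's offset in the data object.
type block struct {
    id    string
    bloom map[string]bool  // pretend bloom filter: may return false positives
    index map[string]int64 // traceID -> offset into the data object
}

// findTrace mimics the querier's flow: skip blocks whose bloom filter rules
// the trace out, and only consult the index for the rest.
func findTrace(blocks []block, traceID string) (string, int64, bool) {
    for _, b := range blocks {
        if !b.bloom[traceID] {
            continue // bloom says "definitely not in this block"
        }
        if off, ok := b.index[traceID]; ok {
            return b.id, off, true
        }
        // bloom false positive: the index settles it
    }
    return "", 0, false
}

func main() {
    blocks := []block{
        {id: "block-a", bloom: map[string]bool{"t1": true}, index: map[string]int64{"t1": 4096}},
        {id: "block-b", bloom: map[string]bool{}, index: map[string]int64{}},
    }
    if id, off, ok := findTrace(blocks, "t1"); ok {
        fmt.Printf("trace found in %s at offset %d\n", id, off)
    }
}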

The querier exposes an HTTP endpoint at GET /querier/api/traces/<traceID>, but it is not expected to be used directly.

Queries should be sent to the Query Frontend.

Compactor

The compactors stream blocks to and from the backend storage to reduce the total number of blocks.
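
As a toy illustration of the goal (fewer, larger blocks), the sketch below merges several input blocks into one output block. Real compaction also rewrites the resulting block's index and bloom filters; the code here is not Tempo's implementation.

package main

import "fmt"

// compact merges several input blocks (modeled as slices of trace IDs)
// into a single output block, so the backend holds fewer, larger blocks.
func compact(inputs [][]string) []string {
    var out []string
    for _, block := range inputs {
        out = append(out, block...)
    }
    return out
}

func main() {
    blocks := [][]string{
        {"traceA", "traceB"},
        {"traceC"},
        {"traceD", "traceE"},
    }
    merged := compact(blocks)
    fmt.Printf("%d blocks -> 1 block with %d traces\n", len(blocks), len(merged))
}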

Metrics generator

The metrics generator is an optional component that derives metrics from ingested traces and writes them to a metrics store. Refer to the metrics-generator documentation to learn more.
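
As a rough illustration of the concept only (not the metrics-generator's implementation), the Go sketch below aggregates spans into a per-service call count; the types and the metric name are made up for the example.

package main

import "fmt"

// span is a minimal stand-in for an ingested span; only the field needed
// for this illustration is included.
type span struct {
    service string
}

// deriveSpanCounts aggregates spans into a per-service call count, the
// general shape of a metric derived from trace data.
func deriveSpanCounts(spans []span) map[string]int {
    counts := make(map[string]int)
    for _, s := range spans {
        counts[s.service]++
    }
    return counts
}

func main() {
    spans := []span{{service: "frontend"}, {service: "frontend"}, {service: "cart"}}
    for svc, n := range deriveSpanCounts(spans) {
        // The metric name below is illustrative only.
        fmt.Printf("span_calls_total{service=%q} %d\n", svc, n)
    }
}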