Important: This documentation is about an older version. It is relevant only to the release noted; many of the features and functions have been updated or replaced. Please view the current version.
Apache Parquet schema
Tempo 2.0 uses Apache Parquet as the default column-formatted block format. Refer to the Parquet configuration options for more information.
This document describes the schema used with the Parquet block format.
Fully nested versus span-oriented schema
There are two overall approaches to a columnar schema: fully nested or span-oriented. Span-oriented means a flattened schema where traces are destructured into rows of spans. A fully nested schema means the current trace structures such as Resource/InstrumentationLibrary/Spans/Events are preserved (nested data is natively supported in Parquet). In both cases, individual leaf values such as span name and duration are individual columns.
We chose the nested schema for several reasons:
- The block size is much smaller for the nested schema. This is due to the high data duplication incurred when flattening resource-level attributes such as service.name to each individual span.
- A flat schema is not truly "flat" because each span still contains nested data such as attributes and events.
- Nested schema is much faster to search for resource-level attributes because the resource-level columns are very small (1 row for each batch).
- Translation to and from the OpenTelemetry Protocol Specification (OTLP) is straightforward.
- Easily add computed columns (for example, trace duration) at multiple levels such as per-trace, per-batch, etc.
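The size difference behind the first point can be pictured with a small sketch. The dictionaries below are illustrative row shapes only, not actual Parquet layouts:

```python
# Illustrative comparison of flattened vs nested storage of
# resource-level attributes (hypothetical row shapes).
resource = {"service.name": "frontend", "cluster": "prod"}
spans = [{"name": f"span-{i}"} for i in range(3)]

# Span-oriented (flattened): resource attributes are copied onto every span row.
flat_rows = [{**resource, **span} for span in spans]

# Fully nested: resource attributes are stored once per batch,
# with the spans nested underneath.
nested_row = {"Resource": resource, "Spans": spans}

# The flattened form repeats the resource attributes once per span.
print(sum("service.name" in row for row in flat_rows))  # 3
```

With the nested layout, service.name appears once per batch regardless of span count, which is why the resource-level columns stay so small.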
Static vs dynamic columns
Static versus dynamic columns add another layer to the schema.
A dynamic schema stores each attribute, such as service.name and http.status_code, as its own column, and the columns in each Parquet file can be different.
A static schema does not change with the shape of the data; all attributes are stored in generic key/value containers.
The dynamic schema is the ultimate dream for a columnar format but it is too complex for a first release. However, the benefits of that approach are also too good to pass up, so we propose a hybrid approach. It is primarily a static schema but with some dynamic columns extracted from trace data based on some heuristics of frequently queried attributes. We plan to continue investing in this direction to implement a fully dynamic schema where trace attributes are blown out into independent Parquet columns at runtime.
For more information, refer to the Parquet design document.
Schema details
The adopted Parquet schema is mostly a direct translation of OTLP but with some key differences.
The table below uses these abbreviations:
- rs = resource spans
- ils = InstrumentationLibrarySpans
The block schema, displayed in Parquet message format:
message Trace {
  required binary TraceID;
  required binary TraceIDText (STRING);
  required int64 StartTimeUnixNano (INTEGER(64,false));
  required int64 EndTimeUnixNano (INTEGER(64,false));
  required int64 DurationNanos (INTEGER(64,false));
  required binary RootServiceName (STRING);
  required binary RootSpanName (STRING);
  repeated group rs { // Resource spans
    required group Resource {
      repeated group Attrs {
        required binary Key (STRING);
        optional binary Value (STRING);
        optional int64 ValueInt (INTEGER(64,true));
        optional double ValueDouble;
        optional boolean ValueBool;
        optional binary ValueKVList (STRING);
        optional binary ValueArray (STRING);
      }
      required binary ServiceName (STRING);
      optional binary Cluster (STRING);
      optional binary Namespace (STRING);
      optional binary Pod (STRING);
      optional binary Container (STRING);
      optional binary K8sClusterName (STRING);
      optional binary K8sNamespaceName (STRING);
      optional binary K8sPodName (STRING);
      optional binary K8sContainerName (STRING);
      optional binary Test (STRING);
    }
    repeated group ils { // InstrumentationLibrarySpans
      required group il { // InstrumentationLibrary
        required binary Name (STRING);
        required binary Version (STRING);
      }
      repeated group Spans {
        required binary ID;
        required binary Name (STRING);
        required int64 Kind (INTEGER(64,true));
        required binary ParentSpanID;
        required binary TraceState (STRING);
        required int64 StartUnixNanos (INTEGER(64,false));
        required int64 EndUnixNanos (INTEGER(64,false));
        required int64 StatusCode (INTEGER(64,true));
        required binary StatusMessage (STRING);
        repeated group Attrs {
          required binary Key (STRING);
          optional binary Value (STRING);
          optional int64 ValueInt (INTEGER(64,true));
          optional double ValueDouble;
          optional boolean ValueBool;
          optional binary ValueKVList (STRING);
          optional binary ValueArray (STRING);
        }
        required int32 DroppedAttributesCount (INTEGER(32,true));
        repeated group Events {
          required int64 TimeUnixNano (INTEGER(64,false));
          required binary Name (STRING);
          repeated group Attrs {
            required binary Key (STRING);
            required binary Value;
          }
          required int32 DroppedAttributesCount (INTEGER(32,true));
          optional binary Test (STRING);
        }
        required int32 DroppedEventsCount (INTEGER(32,true));
        required binary Links;
        required int32 DroppedLinksCount (INTEGER(32,true));
        optional binary HttpMethod (STRING);
        optional binary HttpUrl (STRING);
        optional int64 HttpStatusCode (INTEGER(64,true));
      }
    }
  }
}

Trace-level attributes
For speed and ease-of-use, we are projecting several values to columns at the trace-level:
- Trace ID - Stored once at the trace level rather than on each span.
- Root service name, root span name, and StartTimeUnixNano - These are selected properties of the root span in each trace (if there is one). They are used for displaying results in the Grafana UI. These properties are computed at ingest time and stored once for efficiency, so we don't have to find the root span at query time.
- DurationNanos - The total trace duration, computed at ingest time. This powers the min/max duration filtering in the current Tempo search and is more efficient than scanning the span duration columns. However, it may go away with TraceQL, or we could decide to change it to span-level duration filtering too.
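The ingest-time computation of these trace-level columns can be sketched as follows. This is a simplified illustration with hypothetical types, not Tempo's actual (Go) implementation:

```python
# Simplified sketch of computing trace-level columns at ingest time.
from dataclasses import dataclass


@dataclass
class Span:
    """Hypothetical stand-in for an OTLP span."""
    name: str
    service_name: str
    start_unix_nanos: int
    end_unix_nanos: int
    parent_span_id: bytes = b""  # empty for the root span


def trace_summary(spans):
    """Compute StartTimeUnixNano, EndTimeUnixNano, DurationNanos,
    RootServiceName, and RootSpanName for a trace."""
    start = min(s.start_unix_nanos for s in spans)
    end = max(s.end_unix_nanos for s in spans)
    # The root span is the one with no parent (if present).
    root = next((s for s in spans if not s.parent_span_id), None)
    return {
        "StartTimeUnixNano": start,
        "EndTimeUnixNano": end,
        "DurationNanos": end - start,
        "RootServiceName": root.service_name if root else "",
        "RootSpanName": root.name if root else "",
    }


spans = [
    Span("GET /api", "frontend", 100, 400),
    Span("query", "db", 150, 300, parent_span_id=b"\x01"),
]
print(trace_summary(spans)["DurationNanos"])  # 300
```

Doing this once per trace at ingest means queries never need to locate the root span or scan every span's duration.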
Dedicated columns
Projecting attributes to their own columns has benefits for speed and size. Therefore we are taking an opinionated approach and projecting some common attributes to their own columns. All other attributes are stored in the generic key/value maps and are still searchable, but not as quickly. We chose these attributes based on what we commonly use ourselves (scratching our own itch), but we think they will be useful to most workloads.
Resource-level attributes include the following:
- service.name
- cluster and k8s.cluster.name
- namespace and k8s.namespace.name
- pod and k8s.pod.name
- container and k8s.container.name
Span-level attributes include the following:
- http.method
- http.url
- http.status_code (int)
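At write time, this amounts to routing each attribute either to its dedicated column or to the generic key/value list. A minimal sketch in Python (hypothetical helper and mapping-table names; Tempo's actual implementation is in Go):

```python
# Dedicated span-level columns described above (hypothetical mapping table).
DEDICATED_SPAN_COLUMNS = {
    "http.method": "HttpMethod",
    "http.url": "HttpUrl",
    "http.status_code": "HttpStatusCode",
}


def route_span_attrs(attrs):
    """Split attributes into dedicated columns and the generic Attrs list."""
    dedicated, generic = {}, []
    for key, value in attrs.items():
        column = DEDICATED_SPAN_COLUMNS.get(key)
        if column is not None:
            dedicated[column] = value  # fast, well-typed dedicated column
        else:
            generic.append((key, value))  # still searchable, just slower
    return dedicated, generic


dedicated, generic = route_span_attrs({"http.status_code": 500, "region": "us-east-1"})
print(dedicated)  # {'HttpStatusCode': 500}
print(generic)    # [('region', 'us-east-1')]
```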
“Any”-type Attributes
OTLP attributes have variable data types, which is easy to accomplish in formats like protocol buffers but does not translate directly to Parquet: each column must have a concrete type.
There are several possibilities here, but we chose to have an optional value for each concrete type.
Array and KeyValueList types are stored as protocol-buffer-encoded byte arrays.
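The one-of encoding can be sketched as follows. This is a hypothetical Python helper for illustration; Tempo's actual implementation is in Go, and arrays and key/value lists are protobuf-encoded rather than stringified with repr():

```python
def encode_attr(key, value):
    """Map a variant attribute value to the one-of Attrs columns.
    Exactly one Value* field is set per attribute."""
    col = {"Key": key, "Value": None, "ValueInt": None, "ValueDouble": None,
           "ValueBool": None, "ValueArray": None, "ValueKVList": None}
    # bool is checked before int because bool is a subclass of int in Python.
    if isinstance(value, bool):
        col["ValueBool"] = value
    elif isinstance(value, int):
        col["ValueInt"] = value
    elif isinstance(value, float):
        col["ValueDouble"] = value
    elif isinstance(value, str):
        col["Value"] = value
    elif isinstance(value, (list, tuple)):
        col["ValueArray"] = repr(value)   # stand-in for protobuf encoding
    elif isinstance(value, dict):
        col["ValueKVList"] = repr(value)  # stand-in for protobuf encoding
    return col


print(encode_attr("http.status_code", 500)["ValueInt"])  # 500
```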
repeated group Attrs {
  required binary Key (STRING);
  // Only one of these will be set
  optional binary Value (STRING);
  optional boolean ValueBool;
  optional double ValueDouble;
  optional int64 ValueInt (INTEGER(64,true));
  optional binary ValueArray (STRING);
  optional binary ValueKVList (STRING);
}

Event attributes
Event attribute values are stored as protocol-buffer-encoded byte arrays.
repeated group Attrs {
  required binary Key (STRING);
  required binary Value (STRING);
}

Compression and encoding
Parquet has robust support for many compression algorithms and data encodings. We have found excellent combinations of storage size and performance with the following:
- Snappy compression - Enable on all columns.
- Dictionary encoding - Enable on all string columns (including the byte-array ParentSpanID). Most strings are very repetitive, so this works well to optimize storage size. It also greatly speeds up search: we can inspect the dictionary first and eliminate pages with no matches.
- Time and duration unix nanos - Delta encoding.
- Rarely used columns such as DroppedAttributesCount - These columns are usually all zeroes, so run-length encoding (RLE) works well.
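Why these encodings pay off can be illustrated with toy versions. These are simplified sketches of the ideas, not Parquet's actual implementations:

```python
def delta_encode(values):
    """Delta encoding: mostly-increasing timestamps become small deltas."""
    return values[:1] + [b - a for a, b in zip(values, values[1:])]


def rle_encode(values):
    """Run-length encoding: mostly-constant columns collapse to a few runs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs


def page_might_match(dictionary, predicate):
    """Dictionary pruning: if no dictionary entry satisfies the predicate,
    the whole page can be skipped without decoding any values."""
    return any(predicate(v) for v in dictionary)


print(delta_encode([1000000, 1000050, 1000070]))  # [1000000, 50, 20]
print(rle_encode([0, 0, 0, 0, 2]))                # [[0, 4], [2, 1]]
print(page_might_match(["frontend", "db"], lambda s: s == "cart"))  # False
```

Large timestamps shrink to small deltas, an all-zero DroppedAttributesCount column collapses to a single run, and a page whose dictionary lacks the queried service name is skipped entirely.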
Bloom filters
Parquet has native support for bloom filters; however, Tempo does not use them at this time, as Tempo already has sophisticated support for sharding and caching its own bloom filters.