Plan your Tempo deployment
Tempo can be deployed in monolithic or microservices modes.
The deployment mode is determined by the runtime configuration `target`, or by using the `-target` flag on the command line. The default target is `all`, which is the monolithic deployment mode.
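For example, a minimal sketch of both approaches (the file name `tempo.yaml` and the path passed to `-config.file` are illustrative):

```yaml
# tempo.yaml (illustrative file name): set the target in the runtime configuration
target: all
```

```bash
# ...or override it on the command line
tempo -config.file=/etc/tempo/tempo.yaml -target=all
```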
Note
Monolithic mode was previously called single binary mode. Similarly, scalable monolithic mode was previously called scalable single binary mode. While the documentation has been updated to reflect this change, some URL names and deployment tooling (for example, Helm charts) do not yet reflect it.
Monolithic mode
Monolithic mode deployment runs all top-level components in a single process, forming an instance of Tempo. Monolithic mode is the simplest to deploy, but it cannot scale horizontally by increasing the number of components. Refer to Architecture for descriptions of the components.
To enable this mode, use `-target=all`, which is the default.
Find docker-compose deployment examples at:
- https://github.com/grafana/tempo/tree/main/example/docker-compose/local
- https://github.com/grafana/tempo/tree/main/example/docker-compose/s3
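Beyond the linked examples, here is a rough sketch of what a monolithic Tempo service can look like in docker-compose (the image tag, ports, and config path are assumptions; use the examples above for a working setup):

```yaml
# Sketch: single Tempo container running all components (target defaults to "all").
services:
  tempo:
    image: grafana/tempo:latest            # pin a specific version in practice
    command: ["-config.file=/etc/tempo.yaml"]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml
    ports:
      - "3200:3200"                        # Tempo HTTP API (default port, verify for your version)
      - "4317:4317"                        # OTLP gRPC ingest, if enabled in the configuration
```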
Scaling monolithic mode
Monolithic mode can be horizontally scaled out.
This scalable monolithic mode is similar to the monolithic mode in that all components are run within one process.
Horizontal scale out is achieved by instantiating more than one process, with each having `-target` set to `scalable-single-binary`.
This mode offers some flexibility of scaling without the configuration complexity of the full microservices deployment.
Each querier performs a DNS lookup for the `frontend_address` and connects to the addresses found within the DNS record.
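As a sketch, each replica sets the scalable target and points its querier worker at a DNS name that resolves to every replica (the key path and port below are assumptions; check the configuration reference for your version):

```yaml
# Sketch: configuration fragment shared by every replica.
target: scalable-single-binary
querier:
  frontend_worker:
    # DNS name that resolves to all replicas; 9095 is the assumed default gRPC port.
    frontend_address: tempo:9095
```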
Find a docker-compose deployment example at:
Microservices mode
In microservices mode, components are deployed in distinct processes. Scaling is per component, which allows for greater flexibility in scaling and more granular failure domains. This is the preferred method for a production deployment, but it is also the most complex.
The configuration associated with each component’s deployment specifies a `target`. For example, to deploy a `querier`, the configuration would contain `target: querier`. A command-line deployment may specify the `-target=querier` flag. Each of the components referenced in Architecture must be deployed in order to get a working Tempo instance.
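For example, a sketch of starting two components as separate processes (the configuration file path is illustrative, and each process also needs the shared storage and ring configuration):

```bash
# Sketch: one process per component, selected with -target.
tempo -config.file=/etc/tempo.yaml -target=distributor
tempo -config.file=/etc/tempo.yaml -target=querier
# Repeat for the remaining components (ingester, query-frontend, compactor, and so on).
```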
Find a docker-compose deployment example at:
Tools used to deploy Tempo
Tempo can be deployed using a number of tools, including Helm, Tanka, Kubernetes, and Docker.
Note
The Tanka and Helm examples are equivalent. Both are provided for people who prefer different configuration mechanisms.
Helm
Helm charts are available in the grafana/helm-charts repository:
In addition, several Helm chart examples are available in the Tempo repository.
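As a hedged sketch, installing from that repository with the Helm CLI looks roughly like this (chart names can change; at the time of writing `grafana/tempo` targets monolithic mode and `grafana/tempo-distributed` targets microservices mode):

```bash
# Add the Grafana Helm repository and install a Tempo chart.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install tempo grafana/tempo                 # monolithic mode
# helm install tempo grafana/tempo-distributed   # microservices mode
```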
Kubernetes Tempo Operator
The operator is available in the grafana/tempo-operator repository.
The operator reconciles the `TempoStack` resource to deploy and manage a Tempo microservices installation.
Refer to the operator documentation for more details.
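As a rough sketch, a `TempoStack` resource looks something like the following (the apiVersion, field names, and secret contents are assumptions based on the operator at the time of writing; the operator documentation has the authoritative schema):

```yaml
# Hypothetical minimal TempoStack; the operator expands it into a Tempo microservices deployment.
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  storageSize: 1Gi
  storage:
    secret:
      name: object-storage-credentials   # assumed Secret holding object storage settings
      type: s3
```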
Tanka/Jsonnet
The Jsonnet files that you need to deploy Tempo with Tanka are available here:
Here are a few examples that use the official Jsonnet files. They demonstrate the full range of configurations available to Tempo.
Kubernetes manifests
You can find a collection of Kubernetes manifests to deploy Tempo in the operations/jsonnet-compiled folder. These are generated using Tanka/Jsonnet.
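A sketch of applying the compiled manifests with kubectl (the namespace and exact subdirectory are assumptions; inspect the folder for the variant you want):

```bash
# From a checkout of the Tempo repository (paths assumed).
kubectl create namespace tempo
kubectl apply --namespace tempo -f operations/jsonnet-compiled/
```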