Set up a test application for a Tempo cluster

Once you’ve set up a Grafana Tempo cluster, you need to write some traces to it and then query those traces from within Grafana. This procedure uses Tempo in microservices mode. For example, if you set up Tempo using the Kubernetes with Tanka procedure, you can use this procedure to test your setup.

Before you begin

You’ll need:

  • Grafana 10.0.0 or higher
  • The Tempo query frontend URL for microservices deployments, for example: http://tempo-cluster-query-frontend.tempo.svc.cluster.local:3100/
  • OpenTelemetry telemetrygen for generating tracing data

Refer to Deploy Grafana on Kubernetes if you are using Kubernetes. Otherwise, refer to Install Grafana for more information.
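
If you don’t have telemetrygen available yet, one way to install it locally is with go install (a sketch that assumes a recent Go toolchain; refer to the telemetrygen documentation for other installation options):

bash
go install github.com/open-telemetry/opentelemetry-collector-contrib/cmd/telemetrygen@latest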

Configure Grafana Agent Flow to remote-write to Tempo

We’ll use a Grafana Agent Helm chart deployment to send traces to Tempo.

To do this, you need to create a configuration that can be used by the Agent to receive and export traces in OTLP protobuf format.

  1. Create a new values.yaml file which we’ll use as part of the Agent install.

  2. Edit the values.yaml file and add the following configuration to it:

    yaml
    agent:
      extraPorts:
        - name: otlp-grpc
          port: 4317
          targetPort: 4317
          protocol: TCP
      configMap:
        create: true
        content: |-
          // Creates a receiver for OTLP gRPC.
          // You can easily add receivers for other protocols by using the correct component
          // from the reference list at: https://grafana.com/docs/agent/latest/flow/reference/components/
          otelcol.receiver.otlp "otlp_receiver" {
            // Listen on all available bindable addresses on port 4317 (which is the
            // default OTLP gRPC port) for the OTLP protocol.
            grpc {
              endpoint = "0.0.0.0:4317"
            }
    
            // Output straight to the OTLP gRPC exporter. We would usually do some processing
            // first, most likely batch processing, but for this example we pass it straight
            // through.
            output {
              traces = [
                otelcol.exporter.otlp.tempo.input,
              ]
            }
          }
    
          // Define an OTLP gRPC exporter to send all received traces to Tempo.
          // The label 'tempo' uniquely identifies this exporter.
          otelcol.exporter.otlp "tempo" {
              // Define the client for exporting.
              client {
                  // Send to the locally running Tempo instance, on port 4317 (OTLP gRPC).
                  endpoint = "http://tempo-cluster-distributor.tempo.svc.cluster.local:4317"
                  // Disable TLS for OTLP remote write.
                  tls {
                      // The connection is insecure.
                      insecure = true
                      // Do not verify TLS certificates when connecting.
                      insecure_skip_verify = true
                  }
              }
          }

    Ensure that the OTLP exporter endpoint references the namespace where you’ve installed Tempo. In the line:

    river
    endpoint = "http://tempo-cluster-distributor.tempo.svc.cluster.local:4317"

    change tempo to the namespace where Tempo is installed, for example: http://tempo-cluster-distributor.my-tempo-namespace.svc.cluster.local:4317.

  3. Deploy the Agent using Helm:

    bash
    helm install -f values.yaml grafana-agent grafana/grafana-agent

    If you wish to deploy the Agent into a specific namespace, make sure to create the namespace first and specify it to Helm by appending --namespace=<grafana-agent-namespace> to the end of the command.
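
    For example, a complete install into a dedicated grafana-agent namespace might look like the following (a sketch that assumes the grafana Helm chart repository has not been added yet):

    bash
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
    kubectl create namespace grafana-agent
    helm install -f values.yaml grafana-agent grafana/grafana-agent --namespace=grafana-agent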

Create a Grafana Tempo data source

To allow Grafana to read traces from Tempo, you must create a Tempo data source.

  1. Navigate to Connections > Data Sources.

  2. Click on Add data source.

  3. Select Tempo.

  4. Set the URL to http://<TEMPO-QUERY-FRONTEND-SERVICE>:<HTTP-LISTEN-PORT>/, filling in the path to Tempo’s query frontend service, and the configured HTTP API prefix. If you have followed the Deploy Tempo with Helm installation example, the query frontend service’s URL will look something like this: http://tempo-cluster-query-frontend.<namespace>.svc.cluster.local:3100

  5. Click Save & Test.

You should see a message that says Data source is working.
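
Alternatively, you can provision the data source from a file rather than through the UI. A minimal sketch, assuming the query frontend service from the Helm example and Tempo installed in the tempo namespace, saved in Grafana’s datasource provisioning directory:

yaml
apiVersion: 1

datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo-cluster-query-frontend.tempo.svc.cluster.local:3100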

Visualize your data

Once you have created a data source, you can visualize your traces in the Grafana Explore page. For more information, refer to Tempo in Grafana.

Use OpenTelemetry telemetrygen to generate tracing data

Next, you can use OpenTelemetry telemetrygen to generate tracing data to test your Tempo installation.

In the following instructions we assume the endpoints for both the Grafana Agent and the Tempo distributor are those described above, for example:

  • grafana-agent.grafana-agent.svc.cluster.local for Grafana Agent
  • tempo-cluster-distributor.tempo.svc.cluster.local for the Tempo distributor

Replace these appropriately if you have altered the endpoint targets for the following examples.

  1. Install telemetrygen using the installation procedure. NOTE: You don’t need to configure an OpenTelemetry Collector as we are using the Grafana Agent.

  2. Generate traces using telemetrygen:

    bash
    telemetrygen traces --otlp-insecure --rate 20 --duration 5s grafana-agent.grafana-agent.svc.cluster.local:4317

This configuration sends traces to Grafana Agent for 5 seconds, at a rate of 20 traces per second.

Optionally, you can also send traces directly to the Tempo distributor, without using Grafana Agent as a collector, by using the following:

bash
telemetrygen traces --otlp-insecure --rate 20 --duration 5s tempo-cluster-distributor.tempo.svc.cluster.local:4317

If you’re running telemetrygen on your local machine, ensure that you first port-forward to the relevant Agent or Tempo distributor service, for example:

bash
kubectl port-forward services/grafana-agent 4317:4317 --namespace grafana-agent
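
If you want to send traces directly to the distributor instead, port-forward to the distributor service in the same way (assuming Tempo is installed in the tempo namespace) and point telemetrygen at localhost:4317:

bash
kubectl port-forward services/tempo-cluster-distributor 4317:4317 --namespace tempo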

To view the tracing data:

  1. Go to Grafana and select Explore.

  2. Select the Tempo data source from the list of data sources.

  3. Select the Search Query type.

  4. Select Run query.

  5. Confirm that traces are displayed in the traces Explore panel. Each 5-second run of telemetrygen generates 100 traces in total (20 traces per second).
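
If you prefer the TraceQL query type, a query like the following returns the generated traces (assuming telemetrygen’s default service name):

traceql
{ resource.service.name = "telemetrygen" }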

Test your configuration using the Intro to MLTP application

The Intro to MLTP application provides an example five-service application that generates data for Tempo, Mimir, Loki, and Pyroscope. This procedure installs the application on your cluster so you can generate meaningful test data.

  1. Navigate to https://github.com/grafana/intro-to-mltp to get the Kubernetes manifests for the Intro to MLTP application.
  2. Clone the repository using commands similar to the ones below:
    bash
      git clone https://github.com/grafana/intro-to-mltp.git
      cp intro-to-mltp/k8s/mythical/* ~/tmp/intro-to-mltp-k8s
  3. Change to the cloned repository: cd intro-to-mltp/k8s/mythical
  4. In the mythical-beasts-deployment.yaml manifest, alter each TRACING_COLLECTOR_HOST environment variable instance value to point to the Grafana Agent location. For example, based on a Grafana Agent Helm installation named grafana-agent in the grafana-agent namespace:
    yaml
        - env:
         ...
         - name: TRACING_COLLECTOR_HOST
           value: grafana-agent.grafana-agent.svc.cluster.local
  5. Deploy the Intro to MLTP application. It deploys into the default namespace.
    bash
        kubectl apply -f mythical-beasts-service.yaml,mythical-beasts-persistentvolumeclaim.yaml,mythical-beasts-deployment.yaml
  6. Once the application is deployed, go to Grafana and select the Explore menu item.
  7. Select the Tempo data source from the list of data sources.
  8. Select the Search Query type for the data source.
  9. Select Run query.
  10. Traces from the application will be displayed in the traces Explore panel.
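
If no traces from the application appear, you can check that its pods started correctly before querying again (a sketch that assumes the default namespace used above):

bash
kubectl get pods --namespace default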