Deploy GET with Helm

The Helm charts for Grafana Enterprise Traces (GET) and Grafana Tempo allow you to configure, install, and upgrade Grafana Tempo or Grafana Enterprise Traces within a Kubernetes cluster.

The recommended method for installing GET is to use the tempo-distributed Helm chart, which deploys Tempo or GET in microservices mode.

Note

Monitoring with Grafana Cloud is required for all GET installations. Professional services will assist you in configuring monitoring for your GET installation.

Get started with Grafana Enterprise Traces using the Helm chart

The tempo-distributed Helm chart allows you to configure, install, and upgrade Grafana Enterprise Traces (GET) within a Kubernetes cluster. In this procedure, you:

  • Create a custom namespace within your Kubernetes cluster
  • Install Helm and the Grafana helm-charts repository
  • Configure a storage option for traces
  • Install GET using Helm
  • Install the GET license
  • Create an additional storage bucket for the admin resources
  • Disable the gateway used in open source Tempo
  • Enable the enterpriseGateway, which is activated automatically when you enable the Enterprise configuration

To learn more about Helm, read the Helm documentation.

GET is based on Grafana Tempo 2.8. The Helm chart used in this document is based on the tempo-distributed Helm chart for Tempo 2.8, version 1.48.1.

Before you begin

These instructions are common across any flavor of Kubernetes. This procedure assumes you know how to install, configure, and operate a Kubernetes cluster and that you have an understanding of what the kubectl command does.

Warning

This procedure is primarily aimed at local or development setups. Multiple components, including MinIO and ingress-nginx, are deprecated and shouldn’t be used in production environments.

Hardware requirements

Ensure your environment meets the following hardware requirements:

  • A single Kubernetes node with a minimum of 9 cores and 32 GB RAM

Software requirements

Ensure you have the following software installed:

  • Helm 3 or later
  • The kubectl command-line tool

Additional requirements

Verify that you have:

  • Access to the Kubernetes cluster.
  • Enabled persistent storage in the Kubernetes cluster, with a default storage class set up.
  • Access to a local storage option (like MinIO) or a storage bucket like Amazon S3, Azure Blob Storage, or Google Cloud Storage. Refer to the Optional: Other storage options section for more information.
  • A working DNS service in the Kubernetes cluster. Refer to Debugging DNS resolution in the Kubernetes documentation.
  • Optional: Set up an ingress controller in the Kubernetes cluster, for example ingress-nginx.

Note

If you want to access GET from outside of the Kubernetes cluster, you may need an ingress. Ingress-related procedures are optional.

Note that ingress-nginx is being retired and should not be used in production environments.

Create a custom namespace and add the Helm repository

Using a custom namespace avoids conflicts with resources in the default namespace and makes cleanup simpler later on.

  1. Create a unique Kubernetes namespace, for example, enterprise-traces:

    shell
    kubectl create namespace enterprise-traces

    For more details, see the Kubernetes documentation about Creating a namespace.

  2. Set up a Helm repository using the following commands:

    shell
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update

Set Helm chart values

The Helm chart includes a file called values.yaml, which contains default configuration options. In this procedure, you create a local file called custom.yaml in a working directory.

When you use Helm to deploy the chart, you can specify that Helm uses your custom.yaml to augment the default values.yaml file. The custom.yaml file sets the storage and traces options, enables the gateway, and sets the cluster to main. The traces section configures the distributor’s receiver protocols.

After creating the file, you have the option to make changes in that file as needed for your deployment environment.

To customize your Helm chart values:

  1. Create a custom.yaml file in your working directory.
  2. From the examples below, copy and paste Helm chart values into your file.
  3. Save your custom.yaml file.
  4. For simple deployments, use the default storage and minio sections. The Helm chart deploys MinIO. GET uses it to store traces and other information. Further down this page are instructions for customizing your trace storage configuration options.
  5. Set your custom.yaml values to configure the receivers on the Tempo distributor.
  6. Save the changes to your file.

Grafana Enterprise Traces helm chart values

The values in the example below provide configuration values for GET. These values include an additional admin bucket and specify a license. The enterpriseGateway is automatically enabled as part of enabling the chart for installation of GET.

GET requires multitenancy. It’s enabled by setting multitenancyEnabled: true in the values file. For more information, refer to Set up GET tenants.

YAML
# Specify the global domain for the cluster (in this case just local cluster mDNS)
global:
  clusterDomain: "cluster.local"

# Enable the Helm chart for GET installation
# Configure the Helm chart for a Grafana Enterprise Traces installation.
# We set the latest GET version as the image tag (2.8.4).
enterprise:
  enabled: true
  image:
    tag: v2.8.4

# Enable multitenancy for GET (required)
multitenancyEnabled: true

# MinIO storage configuration
# This installs a separate MinIO service/deployment into the same cluster and namespace as the GET install.
# Note: MinIO should not be used for production environments.
minio:
  enabled: true
  mode: standalone
  rootUser: grafana-tempo
  rootPassword: supersecret
  buckets:
    # Bucket for traces storage if enterprise.enabled is true - requires license. This is where all trace span information is stored.
    - name: enterprise-traces
      policy: none
      purge: false
    # Admin client bucket if enterprise.enabled is true - requires license. This is where tenant and administration information is stored.
    - name: enterprise-traces-admin
      policy: none
      purge: false
  # Changed the mc (the MinIO CLI client) config path to '/tmp' from '/etc' as '/etc' is only writable by root and OpenShift will not permit this.
  configPathmc: "/tmp/minio/mc/"
storage:
  # Specifies traces storage location.
  # Uses the MinIO bucket configured for trace storage.
  trace:
    backend: s3
    s3:
      access_key: "grafana-tempo"
      secret_key: "supersecret"
      bucket: "enterprise-traces"
      endpoint: "tempo-minio:9000"
      insecure: true
  # Specifies administration data storage location.
  # Uses the MinIO bucket configured for admin storage.
  admin:
    backend: s3
    s3:
      access_key_id: "grafana-tempo"
      secret_access_key: "supersecret"
      bucket_name: "enterprise-traces-admin"
      endpoint: "tempo-minio:9000"
      insecure: true

# Specifies which trace protocols to accept by the gateway.
# Note: GET's Enterprise gateway will only accept OTLP over gRPC or HTTP.
traces:
  otlp:
    http:
      enabled: true
    grpc:
      enabled: true

# Configure the distributor component to log all received spans.
distributor:
  config:
    log_received_spans:
      enabled: true

# Specify the license. This is the base64 license text you have received from your Grafana Labs representative.
license:
  contents: |
    LICENSEGOESHERE

Enterprise image version

If you require a different version of GET from the default in the Helm chart, update the enterprise configuration section in the custom.yaml values file with the required image version.

This example uses an older image tag of v2.8.0.

You can check for the latest version number by referencing the GET 2.8 release notes.

YAML
enterprise:
  enabled: true
  image:
    tag: v2.8.0

Enterprise license configuration

You need to configure a license by either:

  • adding the license to the custom.yaml file, or
  • referencing a secret that contains the license.

Only use one of these options.

Note

Refer to Obtain a GET license for instructions on obtaining a license.

Using the first option, specify the license text in the license: section of the custom.yaml values file you created.

YAML
license:
  contents: |
    LICENSEGOESHERE
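
If you received the license as a license.jwt file, you can generate this snippet from the file instead of pasting the text by hand. This is a minimal sketch; the license-snippet.yaml file name and the placeholder license content are illustrative:

```shell
# Create a placeholder license file for illustration; use your real license.jwt.
printf 'LICENSEGOESHERE\n' > license.jwt

# Build a values snippet that embeds the license under license.contents,
# indenting each line to match the YAML block scalar.
printf 'license:\n  contents: |\n' > license-snippet.yaml
sed 's/^/    /' license.jwt >> license-snippet.yaml

cat license-snippet.yaml
```

You can then merge the generated snippet into your custom.yaml.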

If you don’t want to specify the license in the custom.yaml file, you can reference a secret that contains the license content.

  1. Create the secret.

    shell
    kubectl -n enterprise-traces create secret generic get-license --from-file=license.jwt
  2. Configure the custom.yaml that you created to reference the secret.

    YAML
    license:
      external: true
      secretName: get-license

Set your storage option

Before you run the Helm chart, you need to configure where to store trace data.

The storage block defined in the values.yaml file configures the storage that Tempo uses for trace storage.

The procedure below configures MinIO as the local storage option managed by the Helm chart. However, you can use another storage provider. Refer to the Optional: Other storage options section.

Note

The MinIO installation included with this Helm chart is for demonstration purposes only. As of October 2025, MinIO no longer provides or supports precompiled Docker images. This chart uses an unsupported version of MinIO.

This configuration sets a maximum storage size of 5GiB. This MinIO installation is only suitable for demonstration purposes; for production, use performant, enterprise-grade object storage.

The Helm chart values provided include the basic MinIO setup values. If you need to customize them, the steps below walk you through which sections to update. If you don’t need to change the values, you can skip this section.

  1. Optional: Update the configuration options in custom.yaml for your configuration if required.

    YAML
    storage:
      trace:
        backend: s3
        s3:
          access_key: "grafana-tempo"
          secret_key: "supersecret"
          bucket: "tempo-traces"
          endpoint: "tempo-minio:9000"
          insecure: true
  2. Specify an additional bucket for admin resources:

    YAML
    storage:
      admin:
        backend: s3
        s3:
          access_key_id: "grafana-tempo"
          secret_access_key: "supersecret"
          bucket_name: "enterprise-traces-admin"
          endpoint: "tempo-minio:9000"
          insecure: true
  3. Optional: If you need to change the defaults for MinIO, locate the MinIO section and change the relevant fields. Ensure that you update any trace or admin storage sections appropriately.

    YAML
    minio:
      enabled: true
      mode: standalone
      rootUser: minio
      rootPassword: minio123

Optional: Other storage options

You can enable persistent storage in the Kubernetes cluster, with a default storage class set up. To change the default, refer to StorageClass in the Kubernetes documentation.

This Helm chart guide defaults to using MinIO as a simple solution to get you started. However, you can use a storage bucket like Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Each storage provider has a different configuration stanza. You need to update your configuration based on your storage provider. Refer to the storage configuration block in the Grafana Tempo documentation for information on storage options.

Update the storage configuration options based upon your requirements:
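
For example, here is a sketch of an Amazon S3 trace storage configuration. The bucket name, region, and endpoint are placeholders, and the field names follow the s3 stanza already used in this guide:

```yaml
storage:
  trace:
    backend: s3
    s3:
      bucket: tempo-traces                  # placeholder bucket name
      endpoint: s3.us-east-1.amazonaws.com  # endpoint for your region
      region: us-east-1
      access_key: ${AWS_ACCESS_KEY_ID}      # or use an IAM role instead of static keys
      secret_key: ${AWS_SECRET_ACCESS_KEY}
```

If you reference environment variables like this, the components must run with -config.expand-env=true, as in the Azure example later in this section.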

Azure with the local_blocks and metrics-generator processors

By default, the metrics-generator doesn't require a backend connection unless you've enabled the local_blocks processor. The local_blocks processor generates metrics from traces, which is required for TraceQL metrics. When this processor is enabled, the metrics-generator produces blocks and flushes them to backend storage.

In this case, add the generator to the environment variable expansion configuration so that STORAGE_ACCOUNT_ACCESS_KEY resolves to the secret value.

You can use this configuration example with Helm charts, like tempo-distributed. Replace any values in all caps with the values for your Helm deployment.

YAML
generator:
  extraArgs:
    - "-config.expand-env=true"
  extraEnv:
    - name: <STORAGE_ACCOUNT_ACCESS_KEY>
      valueFrom:
        secretKeyRef:
          name: <TEMPO-TRACES-STG-KEY>
          key: <TEMPO-TRACES-KEY>

For more information about the local_blocks processor, refer to Enable TraceQL metrics queries.
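
If you use it, the local_blocks processor is switched on through the same overrides mechanism described later in this guide. This is a sketch, assuming the chart's default overrides structure:

```yaml
overrides:
  defaults:
    metrics_generator:
      # Enables the processor that flushes generator blocks to backend storage.
      processors: ["local-blocks"]
```

Tenants that don't need TraceQL metrics can omit the processor through per-tenant overrides.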

Set traces receivers

The Helm chart values in your custom.yaml file are configured to use OTLP. If you are using other receivers, you need to configure them.

You can configure Tempo to receive data from OTLP, Jaeger, Zipkin, Kafka, and OpenCensus. The following example enables OTLP on the distributor. For other options, refer to the distributor documentation.

Enable any other protocols based on your requirements.

YAML
traces:
  otlp:
    grpc:
      enabled: true
    http:
      enabled: true

The Enterprise gateway is enabled by default and only accepts traces over the OTLP gRPC and HTTP protocols.

Optional: Add custom configurations

There are many configuration options available in the tempo-distributed Helm chart. This procedure only covers the minimum configuration required to launch GET in a basic deployment.

You can add values to your custom.yaml file to set custom configuration options that override the defaults present in the Helm chart. The tempo-distributed Helm chart’s README contains a list of available options. The values.yaml file provides the defaults for the Helm chart.

Use the following command to see all of the configurable parameters for the tempo-distributed Helm chart:

shell
helm show values grafana/tempo-distributed

Add the configuration sections to the custom.yaml file. Include this file when you install or upgrade the Helm chart.

Optional: Configure an ingress

An ingress lets you externally access a Kubernetes cluster. Replace <ingress-host> with a suitable hostname that DNS can resolve to the external IP address of the Kubernetes cluster. For more information, refer to Ingress.

Note

If you are using a Linux system and it’s not possible for you to set up local DNS resolution, use the --add-host=<ingress-host>:<kubernetes-cluster-external-address> command-line flag to define the <ingress-host> local address for the Docker commands in the examples that follow.

  1. Open your custom.yaml or create a YAML file of Helm values called custom.yaml.

  2. Add the following configuration to the file:

    YAML
    nginx:
      ingress:
        enabled: true
        ingressClassName: nginx
        hosts:
          - host: <ingress-host>
            paths:
              - path: /
                pathType: Prefix
        tls: {} # empty, disabled.
  3. Save the changes.

Optional: Configure TLS with Helm

GET can be configured to communicate between the components using Transport Layer Security, or TLS.

To configure TLS with the Helm chart, you must have a TLS key-pair and CA certificate stored in a Kubernetes secret.

For instructions, refer to Configure TLS with Helm.

Optional: Use global or per-tenant overrides

The tempo-distributed Helm chart provides a module for users to set global or per-tenant override settings:

  • Global overrides come under the overrides property and pertain to the standard overrides.
  • Per-tenant overrides come under the per_tenant_overrides property and allow specific tenants to alter their configuration using tenant-specific runtime overrides. The Helm chart generates a /runtime/overrides.yaml configuration file for all per-tenant configuration.

These overrides correlate to the standard (global) and tenant-specific (per_tenant_override_config) overrides in the GET configuration. For more information about overrides, refer to the Overrides configuration documentation.

The following example configuration sets some global configuration options, as well as a set of options for a specific tenant:

YAML
overrides:
  defaults:
    ingestion:
      rate_limit_bytes: 5000000     # 5 MB/s
      burst_size_bytes: 5000000     # 5 MB
      max_traces_per_user: 1000
    global:
      max_bytes_per_trace: 10000000 # 10 MB

    metrics_generator:
      processors: ["service-graphs", "span-metrics"]

per_tenant_overrides:
  "1234":
    ingestion:
      rate_limit_bytes: 2000000     # 2 MB/s
      burst_size_bytes: 2000000     # 2 MB
      max_traces_per_user: 400
    global:
      max_bytes_per_trace: 5000000  # 5 MB

This configuration:

  • Enables the Span Metrics and Service Graph metrics-generator processors for all tenants.
  • Sets an ingestion rate and burst size limit of 5 MB/s, a maximum trace size of 10 MB, and a maximum of 1000 live traces in an ingester for all tenants.
  • Overrides the 1234 tenant with a rate and burst size limit of 2 MB/s, a maximum trace size of 5 MB, and a maximum of 400 live traces in an ingester.

Note

Runtime configurations should include all options for a specific tenant.

Install GET using the Helm chart

Use the following command to install GET using the configuration options you’ve specified in the custom.yaml file:

shell
helm -n enterprise-traces install tempo grafana/tempo-distributed --version 1.48.1 -f custom.yaml

The output of the command contains the write and read URLs necessary for the following steps. If you update your values.yaml or custom.yaml, run the same helm install command and replace install with upgrade.

Check the statuses of the GET pods:

shell
kubectl -n enterprise-traces get pods

Wait until all of the pods have a status of Running or Completed, which might take a few minutes.

The output results look similar to this:

shell
❯ kubectl -n enterprise-traces get pods
NAME                                        READY   STATUS      RESTARTS      AGE
tempo-admin-api-7c59c75f6c-wvj75            1/1     Running     0             86m
tempo-compactor-75777b5d8c-5f44z            1/1     Running     0             86m
tempo-distributor-94fd965f4-prkz6           1/1     Running     0             86m
tempo-enterprise-gateway-6d7f78cf97-dhz9b   1/1     Running     0             86m
tempo-ingester-0                            1/1     Running     0             86m
tempo-ingester-1                            1/1     Running     1 (86m ago)   86m
tempo-ingester-2                            1/1     Running     1 (86m ago)   86m
tempo-memcached-0                           1/1     Running     0             86m
tempo-minio-6c4b66cb77-wjfpf                1/1     Running     0             86m
tempo-querier-6cb474546-cwlkz               1/1     Running     0             86m
tempo-query-frontend-6d6566cbf7-pcwg6       1/1     Running     0             86m
tempo-tokengen-job-58jhs                    0/1     Completed   0             86m

Note that the tempo-tokengen-job has emitted a log message containing the initial admin token.

Retrieve the token with this command:

shell
kubectl -n enterprise-traces get pods | awk '/.*-tokengen-job-.*/ {print $1}' | xargs -I {} kubectl -n enterprise-traces logs {} | awk '/Token:\s*/ {print $2}'
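
The pipeline above works because the tokengen job logs the token on a line of the form Token: <value>, and the awk filter prints the second field of that line. A local sketch with a fake token value shows the extraction step in isolation:

```shell
# Simulated tokengen log line; the token value here is fake.
printf 'Token: FAKE-ADMIN-TOKEN-123\n' | awk '/Token:/ {print $2}'
# prints FAKE-ADMIN-TOKEN-123
```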

To get the logs for the tokengen Pod, you can use:

shell
kubectl -n enterprise-traces get pods | awk '/.*-tokengen-job-.*/ {print $1}' | xargs -I {} kubectl -n enterprise-traces logs {}

Test your installation

The next step is to test your installation by sending trace data to Grafana. You can use the Set up a test application for a Tempo cluster document for step-by-step instructions.

If you already have Grafana available, you can add a Tempo data source using the URL appropriate for your environment. For example: http://tempo-query-frontend.enterprise-traces.svc.cluster.local:3100

You may need to install the Enterprise Traces plugin in your Grafana Enterprise instance to allow configuration of tenants, tokens, and access policies. After creating a user and access policy using the plugin, you can configure a data source to point at http://tempo-enterprise-gateway.enterprise-traces.svc.cluster.local:3100.
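
If you provision Grafana data sources from files, the data source above can be expressed as a provisioning entry. This is a sketch; the tenant name and access policy token placeholders are values you create through the Enterprise Traces plugin:

```yaml
apiVersion: 1
datasources:
  - name: Enterprise Traces
    type: tempo
    url: http://tempo-enterprise-gateway.enterprise-traces.svc.cluster.local:3100
    basicAuth: true
    basicAuthUser: <tenant-name>               # tenant created with the plugin
    secureJsonData:
      basicAuthPassword: <access-policy-token> # token from an access policy
```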

Set up metamonitoring

Metamonitoring provides observability for your Tempo deployment by collecting metrics and logs from the Tempo components themselves. This helps you monitor the health and performance of your tracing infrastructure. Setting up metamonitoring for Tempo and GET uses the k8s-monitoring Helm chart. For more information about this Helm chart, refer to the k8s-monitoring README.

Configure metamonitoring

To configure metamonitoring, you need to create a metamonitoring-values.yaml file and use the Kubernetes Monitoring Helm chart.

  1. Create a metamonitoring-values.yaml file for the Kubernetes Monitoring Helm chart configuration. Replace the following values with your monitoring backend details:

    • tempo: A descriptive name for your cluster and namespace
    • <url>: Your Prometheus and Loki endpoint URLs
    • <username>: Your username/instance ID
    • <password>: Your password/API key
    YAML
    cluster:
      name: tempo # Name of the cluster, this will populate the cluster label
    
    integrations:
      tempo:
        instances:
          - name: "tempo" # This is the name for the instance label that will be reported.
            namespaces:
              - tempo # This is the namespace that will be searched for tempo instances, change this accordingly
            metrics:
              enabled: true
              portName: prom-metrics
            logs:
              enabled: true
            labelSelectors:
              app.kubernetes.io/name: tempo
    
      alloy:
        name: "alloy-tempo"
    
    destinations:
      - name: "tempo-metrics"
        type: prometheus
        url: "<url>" # Enter Prometheus URL
        auth:
          type: basic
          username: "<username>" # Enter username
          password: "<password>" # Enter password
    
      - name: "tempo-logs"
        type: loki
        url: "<url>" # Enter Loki URL
        auth:
          type: basic
          username: "<username>" # Enter username
          password: "<password>" # Enter password
    
    alloy-metrics:
      enabled: true # Sends Grafana Alloy metrics so you can verify that monitoring is working.
    
    podLogs:
      enabled: true
      gatherMethod: kubernetesApi
      namespaces: [tempo] # Set to namespace
      collector: alloy-singleton
    
    alloy-singleton:
      enabled: true
    
  2. Install the k8s-monitoring Helm chart:

    Bash
    helm install k8s-monitoring grafana/k8s-monitoring \
      --namespace monitoring \
      --create-namespace \
      -f metamonitoring-values.yaml
  3. Verify the installation:

    shell
    kubectl -n monitoring get pods

    You should see pods for the k8s-monitoring components running.

Verify metamonitoring in Grafana

  1. Navigate to your Grafana instance.
  2. Check that metrics are being collected:
    • Go to Explore > Prometheus.
    • Query for Tempo metrics like tempo_build_info or tempo_distributor_spans_received_total.
  3. Check that logs are being collected:
    • Go to Explore > Loki.
    • Filter logs by your cluster name and look for Tempo component logs.
  4. Set up Tempo monitoring dashboards.

Your Tempo deployment now includes comprehensive metamonitoring, giving you visibility into the health and performance of your tracing infrastructure.

Next steps

After deploying GET with Helm, you can: