
Deploy on Kubernetes with Tanka

Using this deployment guide, you can deploy Grafana Enterprise Traces (GET) to Kubernetes using a Jsonnet library and Grafana Tanka to create a development cluster or sandboxed environment. This procedure uses MinIO to provide object storage regardless of the cloud platform or on-premises storage you otherwise use. In a production environment, use your cloud provider’s object storage service instead, to avoid the operational overhead of running object storage yourself.

This demo configuration does not include metrics-generator.

Note: This configuration is not suitable for a production environment but can provide a useful way to learn about GET.

Before you begin

To deploy GET to Kubernetes with Tanka, you need:

  • A Kubernetes cluster with at least 40 CPUs and 46GB of memory for the default configuration. Smaller ingest or query volumes can run on a far smaller configuration.
  • kubectl
  • A GET license received from your Grafana account manager or services manager.


To set up GET using Kubernetes with Tanka, you need to:

  1. Configure Kubernetes and install Tanka
  2. Set up the Tanka environment
  3. Install libraries
  4. Deploy MinIO object storage
  5. Deploy GET with the Tanka command

Configure Kubernetes and install Tanka

The first step is to configure Kubernetes and install Tanka.

  1. Create a new directory for the installation, and make it your current working directory:

    mkdir get
    cd get
  2. Create a Kubernetes namespace. You can use any namespace that you wish; this example uses enterprise-traces.

    kubectl create namespace enterprise-traces
  3. Create a Kubernetes Secret for your GET license:

    kubectl --namespace=enterprise-traces create secret generic get-license --from-file=/path/to/license.jwt
  4. Install Grafana Tanka; refer to Installing Tanka.

  5. Install jsonnet-bundler; refer to the jsonnet-bundler README.
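Before continuing, you can confirm the tooling is in place. This is an optional sanity check using each tool's standard version flag; exact version output will vary:

```shell
# Verify the CLI tooling installed above is available on the PATH.
kubectl version --client   # Kubernetes CLI
tk --version               # Grafana Tanka
jb --version               # jsonnet-bundler
```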

Set up the Tanka environment

Tanka requires the current context for your Kubernetes environment.

  1. Acquire the current context for your Kubernetes cluster:

    kubectl config current-context
  2. Initialize Tanka. Replace <KUBECFG CONTEXT NAME> with the acquired context name.

    tk init --k8s=false
    tk env add environments/enterprise-traces
    tk env set environments/enterprise-traces \
     --namespace=enterprise-traces \
     --server-from-context=<KUBECFG CONTEXT NAME>
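After `tk env set`, you can confirm that the environment was registered with the expected namespace and API server. Tanka stores these settings in the environment's `spec.json`:

```shell
# List the Tanka environments and the cluster/namespace each one targets.
tk env list

# The same settings are stored in the environment's spec.json.
cat environments/enterprise-traces/spec.json
```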

Install libraries

Install the k.libsonnet, Jsonnet, and Memcached libraries.

  1. Install k.libsonnet for your version of Kubernetes:

    mkdir -p lib
    export K8S_VERSION=1.22
    jb install github.com/jsonnet-libs/k8s-libsonnet/${K8S_VERSION}@main
    cat <<EOF > lib/k.libsonnet
    import 'github.com/jsonnet-libs/k8s-libsonnet/${K8S_VERSION}/main.libsonnet'
    EOF
  2. Install the GET v1.3.0 Jsonnet library and its dependencies.

    jb install
  3. Install the Tempo v1.4.1 Jsonnet library and its dependencies.

    jb install
  4. Install the Memcached library and its dependencies.

    jb install

Deploy MinIO object storage

MinIO is an open source Amazon S3-compatible object storage service that is freely available and easy to run on Kubernetes.

  1. Create a file named minio.yaml and copy the following YAML configuration into it. You may need to remove or modify the storageClassName depending on your Kubernetes platform. GKE, for example, might not support the local-path storage class but may support another option such as standard.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      # This name uniquely identifies the PVC. Will be used in deployment below.
      name: minio-pv-claim
      labels:
        app: minio-storage-claim
    spec:
      # Read more about access modes here:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-path
      resources:
        # This is the request for storage. Should be available in the cluster.
        requests:
          storage: 50Gi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: minio
    spec:
      selector:
        matchLabels:
          app: minio
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            # Label is used as selector in the service.
            app: minio
        spec:
          volumes:
            # Refer to the PVC created earlier
            - name: storage
              persistentVolumeClaim:
                # Name of the PVC created earlier
                claimName: minio-pv-claim
          initContainers:
            - name: create-buckets
              image: busybox:1.28
              command:
                - "sh"
                - "-c"
                - "mkdir -p /storage/enterprise-traces-data && mkdir -p /storage/enterprise-traces-admin"
              volumeMounts:
                - name: storage # must match the volume name, above
                  mountPath: "/storage"
          containers:
            - name: minio
              # Pulls the default Minio image from Docker Hub
              image: minio/minio:latest
              args:
                - server
                - /storage
                - --console-address
                - ":9001"
              env:
                # Minio access key and secret key
                - name: MINIO_ACCESS_KEY
                  value: "minio"
                - name: MINIO_SECRET_KEY
                  value: "minio123"
              ports:
                - containerPort: 9000
                - containerPort: 9001
              volumeMounts:
                - name: storage # must match the volume name, above
                  mountPath: "/storage"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: minio
    spec:
      type: ClusterIP
      ports:
        - port: 9000
          targetPort: 9000
          protocol: TCP
          name: api
        - port: 9001
          targetPort: 9001
          protocol: TCP
          name: console
      selector:
        app: minio
  2. Run the following command to apply the minio.yaml file:

    kubectl apply --namespace enterprise-traces -f minio.yaml
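Before moving on, you can wait for the deployment to become ready. These are standard kubectl commands, nothing GET-specific:

```shell
# Wait for the MinIO deployment to finish rolling out.
kubectl --namespace enterprise-traces rollout status deployment/minio

# Confirm the pod is Running and the PVC is Bound.
kubectl --namespace enterprise-traces get pods,pvc
```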
  3. To check that MinIO is correctly configured, sign in to MinIO and verify that two buckets have been created. Without these buckets, no data will be stored.

    1. Port-forward MinIO to port 9001:

       kubectl port-forward --namespace enterprise-traces service/minio 9001:9001
    2. Navigate to the MinIO console in your browser: http://localhost:9001. The sign-in credentials are username minio and password minio123.

    3. Verify that the Buckets page lists enterprise-traces-admin and enterprise-traces-data.
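If you prefer the command line over the console, you can also check for the buckets directly inside the pod. This is a sketch that assumes the Deployment name minio from the manifest above; the init container created the buckets as directories under /storage:

```shell
# List the bucket directories created by the init container.
kubectl --namespace enterprise-traces exec deploy/minio -- ls /storage
```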

  4. Configure the GET cluster using the MinIO object storage by replacing the contents of the environments/enterprise-traces/main.jsonnet file with the following configuration:

    cat <<EOF > environments/enterprise-traces/main.jsonnet
    local get = import '';

    get {
      _images+:: {
        tempo: 'grafana/enterprise-traces:v1.3.0',
      },

      _config+:: {
        namespace: 'enterprise-traces',
        bucket: 'enterprise-traces-data',
        backend: 's3',
        // Set to true the first time you install GET; this creates the tokengen job.
        // Once the job has run, delete this setting.
        create_tokengen_job: true,
        metrics_generator+: {
          ephemeral_storage_request_size: '0',
          ephemeral_storage_limit_size: '0',
        },
      },

      tempo_config+:: {
        storage+: {
          trace+: {
            s3: {
              bucket: $._config.bucket,
              access_key: 'minio',
              secret_key: 'minio123',
              endpoint: 'minio:9000',
              insecure: true,
            },
          },
        },
        admin_api+: {
          leader_election: {
            enabled: false,
          },
        },
        admin_client+: {
          storage+: {
            type: 's3',
            s3: {
              bucket_name: 'enterprise-traces-admin',
              access_key_id: 'minio',
              secret_access_key: 'minio123',
              endpoint: 'minio:9000',
              insecure: true,
            },
          },
        },
      },

      tempo_ingester_container+:: {
        securityContext+: {
          runAsUser: 0,
        },
      },

      tempo_distributor_container+:: {
        securityContext+: {
          runAsUser: 0,
        },
      },

      // Deploy the tokengen Job on the first run.
      tokengen_job+::: {},
    }
    EOF

Note: Both the ingester and distributor containers require filesystem write access, which is why this example runs them as user 0 (root). Make appropriate changes to your own configuration based on your security policies.
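Before applying, you can render the generated manifests locally to check that the Jsonnet evaluates cleanly. These are standard Tanka commands:

```shell
# Render the Kubernetes manifests without touching the cluster.
tk show environments/enterprise-traces

# Or compare the rendered manifests against the cluster's current state.
tk diff environments/enterprise-traces
```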

Optional: Reduce component system requirements

Smaller ingest and query volumes may allow smaller resource allocations. You can lower the resources allocated to a component through its container configuration; for example, you can change the CPU and memory allocation for the ingesters.

To change the resources requirements, follow these steps:

  1. Open the environments/enterprise-traces/main.jsonnet file.
  2. Add a new configuration block for the appropriate component (in this case, the ingesters):

    tempo_ingester_container+:: {
      resources+: {
        limits+: {
          cpu: '3',
          memory: '5Gi',
        },
        requests+: {
          cpu: '200m',
          memory: '2Gi',
        },
      },
    },
  3. Save the changes to the file.

Note: Lowering these requirements can impact overall performance.

Deploy GET using Tanka

  1. Deploy GET using the Tanka command:
    tk apply environments/enterprise-traces/main.jsonnet
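Once applied, you can watch the components start. This is standard kubectl; the component names come from the GET library:

```shell
# List the GET pods; repeat until they are all Running.
kubectl --namespace enterprise-traces get pods
```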

Note: If the ingesters don’t start after deploying GET with the Tanka command, this may be related to the storage class selected for the write-ahead log (WAL). If this is the case, add an appropriate storage class to the ingester configuration. For example, to use the standard storage class instead of fast, add the following to the _config (not tempo_config) section in the previous step:

  ingester+: {
    pvc_storage_class: 'standard',
  },
  2. Retrieve the GET token by examining the logs for the tokengen job:

     kubectl --namespace=enterprise-traces logs job.batch/tokengen --container tokengen

    You should see a line like:

      Token:  X19hZG1pbl9fLWU2Y2U5MTRkNzYzODljNDA6Mlg2My9gMzlcNy8sMjUrXF9YMDM9TWBD
  3. Save this token. You need it when setting up your tenants and the Grafana Enterprise Traces plugin.
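The token is a base64 encoding of a `<name>:<secret>` pair. Decoding the example token above shows the admin identity it belongs to; this is illustrative only, and your token's secret will differ:

```shell
# Decode the example token to see the <name>:<secret> structure (Linux base64).
printf '%s' 'X19hZG1pbl9fLWU2Y2U5MTRkNzYzODljNDA6Mlg2My9gMzlcNy8sMjUrXF9YMDM9TWBD' | base64 -d
# The part before the ':' is the token name: __admin__-e6ce914d76389c40
```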

Next steps

Refer to Set up the GET plugin for Grafana to integrate your GET cluster with Grafana and a UI to interact with the Admin API.