
Query your exported logs

Your logs are exported to your storage bucket using Loki’s open source chunk and index formats. This means there are two options available for querying and reading your archived logs:

  • Query the archive using the LogCLI tool
  • Query the archive using Loki in read-only mode

Note

Due to the synchronization schedule, the archive does not include log data for the most recent period. Synchronization is set to the present minus N, where N is currently 7 days, so the archive will not include log data from the past week.

Query the archive using LogCLI

Create a new configuration file named `logcli-config.yaml`. LogCLI needs a writable directory to cache files downloaded from the customer bucket.
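For example, you can create the cache directory up front and reference it as the `cache_location` in the configuration below (the path is an arbitrary choice):

```shell
# Create a writable directory for LogCLI's local cache;
# the path below is only an example.
mkdir -p /tmp/logcli-cache
```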

Example for AWS S3

```yaml
storage_config:
  tsdb_shipper:
    shared_store: aws
    cache_location: <writable directory to store cache files>
  boltdb_shipper:
    shared_store: aws
    cache_location: <writable directory to store cache files>
  aws:
    access_key_id: <AWS access key; can be omitted if the environment variable `AWS_ACCESS_KEY_ID` is defined>
    secret_access_key: <AWS secret access key; can be omitted if the environment variable `AWS_SECRET_ACCESS_KEY` is defined>
    bucketnames: <name of the customer bucket where the archive is stored>
    region: <AWS region of the customer bucket>
```
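As the placeholders note, the AWS credentials can instead be supplied through environment variables; for example (the values here are placeholders, not real credentials):

```shell
# Provide AWS credentials via environment variables instead of
# the config file; replace the placeholder values with your own.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="exampleSecretKey123"
```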

Example for Azure Blob Storage

```yaml
storage_config:
  tsdb_shipper:
    shared_store: azure
    cache_location: <writable directory to store cache files>
  boltdb_shipper:
    shared_store: azure
    cache_location: <writable directory to store cache files>
  azure:
    account_name: <storage account name>
    account_key: <storage account secret key>
    container_name: <name of the customer container where the archive is stored>
```

Example for Google Cloud Storage

```yaml
storage_config:
  tsdb_shipper:
    shared_store: gcs
    cache_location: <writable directory to store cache files>
  boltdb_shipper:
    shared_store: gcs
    cache_location: <writable directory to store cache files>
  gcs:
    bucket_name: <name of the customer bucket where the archive is stored>
```

Pass the configuration file in your call to LogCLI:

```shell
logcli query --remote-schema --store-config=./logcli-config.yaml \
  --from="<start date, for example 2022-09-21T09:00:00Z>" \
  --to="<end date, for example 2022-09-21T20:15:00Z>" \
  --org-id=<tenant-id> \
  --output=jsonl \
  '<LogQL query>'
```

When the `--remote-schema` parameter is used, LogCLI reads the `schemaconfig.yaml` file from the customer bucket.
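For reference, the `schemaconfig.yaml` file in the bucket is a standard Loki `schema_config`; it might look roughly like this (the dates, schema version, and object store shown here are illustrative and will differ per tenant):

```yaml
schema_config:
  configs:
    - from: "2023-01-01"   # illustrative start date; your file has the real one
      store: tsdb          # or boltdb-shipper for older periods
      object_store: aws    # aws, azure, or gcs, matching your bucket
      schema: v12
      index:
        prefix: index_
        period: 24h
```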

Query the archive using Loki in read-only mode

To query the logs from the target storage, you must run Loki with the CLI argument -target=querier. The querier is a Loki component that only reads data from storage and does not write anything, so it cannot modify the content of the archived logs.

To run the querier on a local machine, you can use a docker-compose setup that contains Loki and Grafana, or deploy a similar configuration to your Kubernetes cluster, VM, or dedicated server.

Note

This docker-compose setup uses increased timeouts: the `querier.query_timeout` property is set to 10m on the Loki side, and `timeout: 600` on the Grafana data source side. You might need to increase these values if your query processes a large amount of data. It might also be necessary to raise the default limits if you reach them.
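If you do hit Loki's default limits, you can raise them in the Loki configuration file; for example (illustrative values from Loki's `limits_config` block, not recommendations):

```yaml
limits_config:
  max_query_parallelism: 32           # number of subqueries run in parallel
  max_entries_limit_per_query: 10000  # maximum log lines returned per query
  max_query_length: 0                 # 0 disables the limit on the query time range
```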
  1. Create a Loki configuration file called loki-query-archive.yaml using the following base configuration example.

    ```yaml
    auth_enabled: true

    server:
      http_listen_port: 3100
      http_server_read_timeout: 10m
      http_server_write_timeout: 10m

    memberlist:
      join_members:
        - loki:7946

    querier:
      query_timeout: 10m
      query_store_only: true

    common:
      path_prefix: /loki
      replication_factor: 1
      compactor_address: loki:3100
      storage:
        # configure the access to your bucket

    # Copy the `schema_config` from the `schemaconfig.yaml` file that is
    # synced to your bucket and insert it here.
    ```
  2. Copy the content of the `schemaconfig.yaml` file that is synced to your bucket and add it to the end of the base configuration file.
  3. Configure the access to your bucket:

    • For Amazon S3, put it in `common.storage.s3`
    • For Azure Blob Storage, put it in `common.storage.azure`
    • For Google Cloud Storage, put it in `common.storage.gcs`
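    As an illustration, for Amazon S3 the storage section of the Loki configuration might look like this (a sketch; the placeholder values mirror the LogCLI example above):

    ```yaml
    common:
      path_prefix: /loki
      replication_factor: 1
      compactor_address: loki:3100
      storage:
        s3:
          bucketnames: <name of the customer bucket where the archive is stored>
          region: <AWS region of the customer bucket>
          access_key_id: <AWS access key>
          secret_access_key: <AWS secret access key>
    ```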
  4. Create a `docker-compose.query-archive.yaml` file:

    ```yaml
    version: '3.8'

    services:
      loki:
        image: grafana/loki:2.8.2
        command: '-config.file=/etc/loki/config.yaml -target=querier'
        ports:
          - '3100:3100'
          - 7946
          - 9095
        volumes:
          - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml:ro
        networks:
          - grafana-loki

      grafana:
        image: grafana/grafana:9.2.0-beta1
        environment:
          - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
          - GF_AUTH_ANONYMOUS_ENABLED=true
          - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
        depends_on:
          - loki
        entrypoint:
          - sh
          - -euc
          - |
            mkdir -p /etc/grafana/provisioning/datasources
            cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
            apiVersion: 1
            datasources:
              - name: Loki
                type: loki
                access: proxy
                url: http://loki:3100
                jsonData:
                  timeout: 600
                  httpHeaderName1: "X-Scope-OrgID"
                secureJsonData:
                  httpHeaderValue1: "{{TENANT_ID}}"
            EOF
            /run.sh
        ports:
          - '3000:3000'
        networks:
          - grafana-loki

    networks:
      grafana-loki: {}
    ```
    1. Update the `docker-compose.query-archive.yaml` file to replace `{{TENANT_ID}}` with your tenant ID.
    2. Replace `./fixtures/loki-query-archive.yaml` in the Loki volumes with the path to the Loki configuration file you created in Step 1.
  5. Run the following command to start Loki and Grafana:

    ```shell
    docker-compose -f docker-compose.query-archive.yaml up -d
    ```
  6. Launch a browser and navigate to http://localhost:3000 to view Grafana.

  7. Navigate to the Explore page, select the Loki data source, and query the archived log data.
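    For example, a minimal LogQL query over the archive could look like this (the label selector is hypothetical; use labels that exist in your logs, and set Grafana's time range to a period older than the synchronization window):

    ```logql
    {cluster="prod"} |= "error"
    ```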