# Query your exported logs
Your logs are exported to your storage bucket using Loki’s open source chunk and index formats. This means there are two options available for querying and reading your archived logs:
- Query the archive using the LogCLI tool
- Query the archive using Loki in read-only mode
 
> **Note:** Due to the synchronization schedule, the archive does not include log data for the most recent period. Synchronization is set to present minus N, currently N=7d, so the archive will not include log data from the past week.
## Querying the archive using LogCLI
Please download and build the latest LogCLI.
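For example, one way to get a prebuilt binary (a sketch, assuming Linux on amd64 and Loki v3.2.1; check the Loki releases page for the asset matching your platform and version):

```bash
# Download and unpack a prebuilt logcli binary from the Loki releases page.
curl -fLO https://github.com/grafana/loki/releases/download/v3.2.1/logcli-linux-amd64.zip
unzip logcli-linux-amd64.zip
chmod +x logcli-linux-amd64
./logcli-linux-amd64 --version
```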
Create a new config file named `logcli-config.yaml`. A writable directory is needed to cache files downloaded from the customer bucket.
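If you use the placeholder paths from the examples below, you could create the cache directories up front (a sketch; adjust the paths to your environment):

```bash
# Create writable cache directories matching the example configuration paths.
mkdir -p /tmp/loki/index /tmp/loki/index_cache /tmp/data/index /tmp/data/boltdb-cache
```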
Example for AWS S3

```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/data/boltdb-cache'>
  aws:
    s3: s3://<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>@<custom_endpoint>/<bucket_name>
    bucketnames: <name of the customer bucket where the archive is stored>
    region: <aws region of the customer bucket>
compactor:
  working_directory: <writable directory to store cache files, for example: '/tmp/loki'>
query_range:
  cache_index_stats_results: false
```

Example for Azure Blob Storage

```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/data/boltdb-cache'>
  azure:
    account_name: <storage account name>
    account_key: <storage account secret key>
    container_name: <name of the customer container where the archive is stored>
compactor:
  working_directory: <writable directory to store cache files, for example: '/tmp/loki'>
query_range:
  cache_index_stats_results: false
```

Example for Google Cloud Storage

```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/data/boltdb-cache'>
  gcs:
    bucket_name: <name of the customer bucket where the archive is stored>
compactor:
  working_directory: <writable directory to store cache files, for example: '/tmp/loki'>
query_range:
  cache_index_stats_results: false
```

You should use the configuration file in your call to LogCLI:
```bash
logcli query --remote-schema --store-config=./logcli-config.yaml \
  --schema-store="<gcp,s3,azure>" \
  --from="<start date, for example 2022-09-21T09:00:00Z>" \
  --to="<end date, for example 2022-09-21T20:15:00Z>" \
  --org-id=<tenant-id> \
  --output=jsonl \
  '<LogQL query, for example {environment="prod"}>'
```

When using the `--remote-schema` parameter, LogCLI reads the `<tenant>_schemaconfig.yaml` file from the customer bucket.
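To confirm that file is present, you could list the bucket contents first (a sketch, assuming the AWS CLI and an S3 bucket; the equivalent `az storage blob list` or `gsutil ls` applies for Azure or GCS):

```bash
# Check that the tenant's schema config file has been synced to the bucket.
aws s3 ls s3://<bucket_name>/ | grep schemaconfig
```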
## Query the archive using Loki
The archived logs can be queried with Loki in read-only mode either in a monolithic or a simple scalable deployment.
### Monolithic deployment
To query the logs from the target storage, you must run Loki with the CLI argument `-target=querier`. The querier is a Loki component that only reads data from storage and never writes, so it cannot modify the content of the archived logs.
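If you run the Loki binary directly rather than in a container, the invocation might look like this (a sketch; `loki-query-archive.yaml` is the configuration file created in the steps below):

```bash
# Run Loki as a read-only querier against the archive configuration.
loki -config.file=./loki-query-archive.yaml -target=querier
```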
To run the querier on a local machine, you can use a docker-compose setup that contains Loki and Grafana, or you can deploy a similar configuration to your Kubernetes cluster, VM, or dedicated server.
> **Note:** This docker-compose setup uses increased timeouts: the `querier.query_timeout` property is set to `10m` on the Loki side, and `timeout: 600` on the Grafana data source side. You might need to increase these values if your query processes a large amount of data. It might also be necessary to change the default limits if you reach them.
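If you do hit the default limits, they can be raised in `limits_config` (a sketch; the values below are illustrative, not recommendations):

```yaml
limits_config:
  query_timeout: 10m
  max_query_series: 10000             # default is 500
  max_entries_limit_per_query: 50000  # default is 5000
```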
1. Create a Loki configuration file called `loki-query-archive.yaml` using the following base configuration example.

```yaml
auth_enabled: true

server:
  http_listen_port: 3100
  http_server_read_timeout: 10m
  http_server_write_timeout: 10m

memberlist:
  join_members:
    - loki:7946

compactor:
  working_directory: /loki

limits_config:
  query_timeout: 10m

common:
  path_prefix: /loki
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory
  replication_factor: 1
  compactor_address: loki:3100

storage_config:
  tsdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
  boltdb_shipper:
    active_index_directory: /data/index
    cache_location: /data/boltdb-cache
  # Configure the access to your bucket here. S3 example below.
  aws:
    bucketnames: <bucketname>
    s3forcepathstyle: true
    region: us-east-1
    access_key_id: <key>
    secret_access_key: <key>
    endpoint: s3.dualstack.us-east-1.amazonaws.com

# Copy your `schema_config` from the `schemaconfig.yaml` file that is synced to your bucket and insert it here.
```

2. Copy the content of the `schemaconfig.yaml` file that is synced to your bucket and add it to the end of the base configuration file.
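For reference, a `schema_config` section typically has the following shape (illustrative only; always copy the exact content synced to your bucket):

```yaml
schema_config:
  configs:
    - from: "2022-01-01"  # illustrative date; use the values from your synced file
      store: tsdb
      object_store: s3    # depends on your provider: s3, azure, or gcs
      schema: v13
      index:
        prefix: index_
        period: 24h
```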
3. Create a `docker-compose.query-archive.yaml` file:

```yaml
services:
  loki:
    image: grafana/loki:3.2.1
    command: '-config.file=/etc/loki/config.yaml'
    ports:
      - '3100:3100'
      - 7946
      - 9095
    volumes:
      - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml:ro
    networks:
      - grafana-loki

  grafana:
    image: grafana/grafana:11.3.0
    environment:
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    depends_on:
      - loki
    entrypoint:
      - sh
      - -euc
      - |
        mkdir -p /etc/grafana/provisioning/datasources
        cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
        apiVersion: 1
        datasources:
          - name: Loki
            type: loki
            access: proxy
            url: http://loki:3100
            jsonData:
              timeout: 600
              httpHeaderName1: "X-Scope-OrgID"
            secureJsonData:
              httpHeaderValue1: "{{TENANT_ID}}"
        EOF
        /run.sh
    ports:
      - '3000:3000'
    networks:
      - grafana-loki

networks:
  grafana-loki: {}
```

4. Update the `docker-compose.query-archive.yaml` file to replace `{{TENANT_ID}}` with your hosted logs tenant ID.

5. Replace `./fixtures/loki-query-archive.yaml` in the Loki volumes with the path to the Loki configuration file you created in Step 1.
6. Run the following command to start the querier:

```bash
docker-compose -f docker-compose.query-archive.yaml up -d
```

7. Launch a browser and navigate to http://localhost:3000 to view Grafana.
8. Navigate to the Explore page, select the Loki data source, and query the archived log data.
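You can also query the querier's HTTP API directly instead of going through Grafana (a sketch; the label selector is just an example):

```bash
# Query the archived logs over Loki's HTTP API, passing the tenant ID header.
curl -G -s "http://localhost:3100/loki/api/v1/query_range" \
  -H "X-Scope-OrgID: <tenant-id>" \
  --data-urlencode 'query={environment="prod"}' \
  --data-urlencode 'start=2022-09-21T09:00:00Z' \
  --data-urlencode 'end=2022-09-21T20:15:00Z'
```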
### Simple Scalable deployment
In cases where a monolithic deployment is not sufficient for the amount of data being queried (either because of large query time ranges or high log volume), Loki can also be deployed in Simple Scalable read-only mode.
This deployment mode has the advantage that the query frontend is part of the read target, which enables query splitting and query sharding.
To set up a read-only deployment, the query path requires both the read and the backend target.
Start by following steps 1 and 2 from the monolithic deployment above.
Update the settings in the `loki-query-archive.yaml` configuration file:

```yaml
common:
  compactor_address: http://backend:3100

compactor:
  # CLI flag: -compactor.retention-enabled=false
  retention_enabled: false
  # Make sure compaction does not run.
  # CLI flag: -compactor.compaction-interval=8760h
  compaction_interval: 8760h
```

Since the `backend` target contains the compactor, retention and compaction need to be disabled.
Update the `docker-compose.query-archive.yaml` file: replace the `loki` service with two new services named `loki-read` and `loki-backend`, and change the `-target` command line argument to `read` and `backend` respectively.

```yaml
services:
  loki-read:
    image: grafana/loki:3.2.1
    command:
      - -config.file=/etc/loki/config.yaml
      - -target=read
    ports:
      # Container port only: with three replicas, a fixed host port such as
      # '3100:3100' cannot be published without a conflict.
      - 3100
      - 7946
      - 9095
    volumes:
      - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml:ro
    networks:
      - grafana-loki
    deploy:
      mode: replicated
      replicas: 3

  loki-backend:
    image: grafana/loki:3.2.1
    command:
      - -config.file=/etc/loki/config.yaml
      - -target=backend
    ports:
      - '3100:3100'
      - 7946
      - 9095
    volumes:
      - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml:ro
    networks:
      - grafana-loki
```

Finally, make sure that the Loki data source URL in the entrypoint of the `grafana` service points to `http://loki-backend:3100` rather than `http://loki:3100`.

Run the docker-compose command and open Grafana in the browser to query the exported logs.
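As in the monolithic setup, for example:

```bash
docker-compose -f docker-compose.query-archive.yaml up -d
```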



