---
title: "Query your exported logs | Grafana Cloud documentation"
description: "How to query logs that have been exported by the Cloud Logs Exporter and archived."
---

# Query your exported logs

Your logs are exported to your storage bucket using Loki’s open source chunk and index formats. This means there are two options available for querying and reading your archived logs:

- Query the archive using the `LogCLI` tool
- Query the archive using Loki in read-only mode

> Note
> 
> Due to the synchronization schedule, the archive does not include log data for the most recent period. Synchronization runs at present minus N, currently `N=7d`, so the archive does not include log data from the past week.

## Querying the archive using LogCLI

Download and build the latest [LogCLI](/docs/loki/next/query/logcli/).

Create a configuration file named `logcli-config.yaml`. LogCLI needs a writable directory to cache files downloaded from the customer bucket.

*Example for AWS S3*


```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example '/tmp/data/boltdb-cache'>
  aws:
    s3: s3://<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>@<custom_endpoint>/<bucket_name>
    bucketnames: <name of the customer bucket where the archive is stored>
    region: <aws region of the customer bucket>

compactor:
  working_directory: <writable directory to store cache files, for example '/tmp/loki'>

query_range:
  cache_index_stats_results: false
```

*Example for Azure Blob Storage*


```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example '/tmp/data/boltdb-cache'>
  azure:
    account_name: <storage account name>
    account_key: <storage account secret key>
    container_name: <name of the customer container where the archive is stored>

compactor:
  working_directory: <writable directory to store cache files, for example '/tmp/loki'>

query_range:
  cache_index_stats_results: false
```

*Example for Google Cloud Storage*


```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example '/tmp/data/boltdb-cache'>
  gcs:
    bucket_name: <name of the customer bucket where the archive is stored>

compactor:
  working_directory: <writable directory to store cache files, for example '/tmp/loki'>

query_range:
  cache_index_stats_results: false
```

Use the configuration file in your call to LogCLI:


```none
logcli query --remote-schema --store-config=./logcli-config.yaml \
 --schema-store="<gcp,s3,azure>" \
--from="<start date, for example 2022-09-21T09:00:00Z>" \
--to="<end date, for example 2022-09-21T20:15:00Z>" \
--org-id=<tenant-id> \
--output=jsonl \
'<LogQL query, for example {environment="prod"}>'
```

When you use the `--remote-schema` parameter, LogCLI reads the `<tenant>_schemaconfig.yaml` file from the customer bucket.
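
As a concrete illustration, the following sketch fills in the placeholders for an S3-backed archive. The bucket credentials come from `logcli-config.yaml`; the tenant ID and label selector are hypothetical, and the `--from`/`--to` values are computed with GNU `date` so the window ends before the unsynchronized `N=7d` period.

```shell
# Query a 24-hour window that ends 7 days ago: the most recent week
# (N=7d) is not yet synchronized to the archive. Requires GNU date
# and logcli in PATH; tenant ID and selector below are hypothetical.
FROM=$(date -u -d '8 days ago' +%Y-%m-%dT%H:%M:%SZ)
TO=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)

logcli query --remote-schema --store-config=./logcli-config.yaml \
  --schema-store="s3" \
  --from="$FROM" \
  --to="$TO" \
  --org-id=123456 \
  --output=jsonl \
  '{environment="prod"}'
```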

## Query the archive using Loki

The archived logs can be queried with Loki in read-only mode either in a monolithic or a simple scalable deployment.

### Monolithic deployment

To query the logs from the target storage, you must run Loki with the CLI argument `-target=querier`. The querier is a Loki component that only reads data from storage and never writes, so it cannot modify the content of the archived logs.

To run the querier on a local machine, you can use a Docker Compose setup that contains Loki and Grafana, or you can deploy a similar configuration to your Kubernetes cluster, VM, or dedicated server.

> Note
> 
> This Docker Compose setup uses increased timeouts: the `query_timeout` limit is set to `10m` on the Loki side, and `timeout: 600` on the Grafana data source side. You might need to increase these values if your query processes a large amount of data. It might also be necessary to raise the default limits if you reach them.

1. Create a Loki configuration file called `loki-query-archive.yaml` using the following base configuration example.
   
   
   ```yaml
   auth_enabled: true
   
   server:
     http_listen_port: 3100
     http_server_read_timeout: 10m
     http_server_write_timeout: 10m
   
   memberlist:
     join_members:
       - loki:7946
   
   compactor:
     working_directory: /loki
   
   limits_config:
     query_timeout: 10m
   
   common:
     path_prefix: /loki
     ring:
       instance_addr: 127.0.0.1
       kvstore:
         store: inmemory
     replication_factor: 1
     compactor_address: loki:3100
   
   storage_config:
     tsdb_shipper:
       active_index_directory: /loki/index
       cache_location: /loki/index_cache
     boltdb_shipper:
       active_index_directory: /data/index
       cache_location: /data/boltdb-cache
     # configure the access to your bucket here. S3 example below.
     aws:
       bucketnames: <bucketname>
       s3forcepathstyle: true
       region: us-east-1
       access_key_id: <key>
       secret_access_key: <key>
       endpoint: s3.dualstack.us-east-1.amazonaws.com
   # Copy your `schema_config` from `schemaconfig.yaml` file that is synced to your bucket and insert here.
   ```
   
   1. Copy the content of the `schemaconfig.yaml` file that is synced to your bucket and add it to the end of the base configuration file.
2. Create a `docker-compose.query-archive.yaml` file:
   
   
   ```yaml
   services:
     loki:
       image: grafana/loki:3.2.1
       command: '-config.file=/etc/loki/config.yaml'
       ports:
         - '3100:3100'
         - 7946
         - 9095
       volumes:
         - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml:ro
       networks:
         - grafana-loki
   
     grafana:
       image: grafana/grafana:11.3.0
       environment:
         - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
         - GF_AUTH_ANONYMOUS_ENABLED=true
         - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
       depends_on:
         - loki
       entrypoint:
         - sh
         - -euc
         - |
           mkdir -p /etc/grafana/provisioning/datasources
           cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
           apiVersion: 1
           datasources:
             - name: Loki
               type: loki
               access: proxy
               url: http://loki:3100
               jsonData:
                 timeout: 600
                 httpHeaderName1: "X-Scope-OrgID"
               secureJsonData:
                 httpHeaderValue1: "{{TENANT_ID}}"
           EOF
           /run.sh
       ports:
         - '3000:3000'
       networks:
         - grafana-loki
   networks:
     grafana-loki: {}
   ```
   
   1. Update the `docker-compose.query-archive.yaml` file to replace the `{{TENANT_ID}}` with your hosted logs tenant ID.
   2. Replace `./fixtures/loki-query-archive.yaml` in Loki volumes with a path to the Loki configuration file you created in Step 1.
3. Run the following command to start the containers:
   
   
   ```none
   docker-compose -f docker-compose.query-archive.yaml up -d
   ```
4. Launch a browser and navigate to http://localhost:3000 to view Grafana.
5. Navigate to the Explore page, select a Loki data source, and try to query the archived logs data.
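
You can also bypass Grafana and query the querier's HTTP API directly. The sketch below uses Loki's standard `/loki/api/v1/query_range` endpoint against the container started above; the tenant ID, label selector, and time range are placeholders you need to replace with your own values.

```shell
# Query the read-only Loki instance over its HTTP API. The tenant ID
# is passed via the X-Scope-OrgID header, as in the Grafana data
# source above; pick a time range that is covered by the archive.
curl -s -G 'http://localhost:3100/loki/api/v1/query_range' \
  -H 'X-Scope-OrgID: <tenant-id>' \
  --data-urlencode 'query={environment="prod"}' \
  --data-urlencode 'start=2022-09-21T09:00:00Z' \
  --data-urlencode 'end=2022-09-21T20:15:00Z' \
  --data-urlencode 'limit=100'
```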

### Simple Scalable deployment

In cases when a monolithic deployment is not sufficient due to the amount of data being queried (either because of large query time ranges or high log volume), Loki can also be deployed in [Simple Scalable](/docs/loki/latest/get-started/deployment-modes/#simple-scalable) read-only mode. This deployment mode has the advantage that the query frontend is part of the `read` target, which enables query splitting and query sharding.

To set up a read-only deployment, the query path requires both the `read` and the `backend` target.

1. Start by following steps 1 and 2 of the [Monolithic deployment](#monolithic-deployment) section above.
2. Update the settings in the `loki-query-archive.yaml` configuration file:
   
   
   ```yaml
   common:
     compactor_address: http://backend:3100
   compactor:
     # CLI flag: -compactor.retention-enabled=false
     retention_enabled: false
     # make sure compaction does not run
     # CLI flag: -compactor.compaction-interval=8760h
     compaction_interval: 8760h
   ```
   
   Since the `backend` target contains the compactor, retention and compaction must be disabled.
3. Update the `docker-compose.query-archive.yaml` file and replace the `loki` service with two new services named `loki-read` and `loki-backend` and change the `-target` command line argument to `read` and `backend` respectively.
   
   
   ```yaml
   services:
     loki-read:
       image: grafana/loki:3.2.1
       command:
         - -config.file=/etc/loki/config.yaml
         - -target=read
       ports:
         # No fixed host binding here: with three replicas, a fixed
         # mapping such as '3100:3100' would conflict on the host port.
         - 3100
         - 7946
         - 9095
       volumes:
         - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml:ro
       networks:
         - grafana-loki
       deploy:
         mode: replicated
         replicas: 3
   
     loki-backend:
       image: grafana/loki:3.2.1
       command:
         - -config.file=/etc/loki/config.yaml
         - -target=backend
       ports:
         - 3100
         - 7946
         - 9095
       volumes:
         - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml:ro
       networks:
         - grafana-loki
   ```
4. Finally, make sure that the Loki data source URL in the entrypoint of the `grafana` service points to `http://loki-backend:3100` rather than `http://loki:3100`.
5. Run `docker-compose -f docker-compose.query-archive.yaml up -d` and open Grafana in the browser to query the exported logs.
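
Before querying, you can verify that the read path has started. A minimal sketch, assuming a `loki-read` container publishes its port 3100 on localhost (with ephemeral host ports, check the actual mapping with `docker-compose ps` first):

```shell
# Loki exposes a /ready endpoint that reports "ready" once the
# component has finished starting up; repeat until it does.
curl -s http://localhost:3100/ready
```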
