# Query your exported logs
Your logs are exported to your storage bucket using Loki’s open source chunk and index formats. This means there are two options available for querying and reading your archived logs:
- Query the archive using the LogCLI tool
- Query the archive using Loki in read-only mode
> **Note:** Due to the synchronization schedule, the archive does not include log data for the most recent period. Synchronization is set to present minus N, currently `N=7d`, so the archive will not include log data from the past week.
## Querying the archive using LogCLI
Download and build the latest LogCLI.
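A minimal sketch of one way to do this, assuming Go and `make` are installed (prebuilt `logcli` binaries are also attached to Loki releases on GitHub):

```bash
# Build LogCLI from the Loki source tree.
git clone https://github.com/grafana/loki.git
cd loki
make logcli

# The binary is written to cmd/logcli/; put it on your PATH.
sudo cp cmd/logcli/logcli /usr/local/bin/
```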
Create a new config file named `logcli-config.yaml`. A writable directory is needed to cache files downloaded from the customer bucket.
### Example for AWS S3
```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/data/boltdb-cache'>
  aws:
    s3: s3://<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>@<custom_endpoint>/<bucket_name>
    bucketnames: <name of the customer bucket where the archive is stored>
    region: <aws region of the customer bucket>

compactor:
  working_directory: <writable directory to store cache files, for example: '/tmp/loki'>

query_range:
  cache_index_stats_results: false
```
### Example for Azure Blob Storage
```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/data/boltdb-cache'>
  azure:
    account_name: <storage account name>
    account_key: <storage account secret key>
    container_name: <name of the customer container where the archive is stored>

compactor:
  working_directory: <writable directory to store cache files, for example: '/tmp/loki'>

query_range:
  cache_index_stats_results: false
```
### Example for Google Cloud Storage
```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/data/boltdb-cache'>
  gcs:
    bucket_name: <name of the customer bucket where the archive is stored>

compactor:
  working_directory: <writable directory to store cache files, for example: '/tmp/loki'>

query_range:
  cache_index_stats_results: false
```
Use the configuration file in your call to LogCLI:
```bash
logcli query --remote-schema --store-config=./logcli-config.yaml \
  --schema-store="<gcp,s3,azure>" \
  --from="<start date, for example 2022-09-21T09:00:00Z>" \
  --to="<end date, for example 2022-09-21T20:15:00Z>" \
  --org-id=<tenant-id> \
  --output=jsonl \
  '<LogQL query, for example: {environment="prod"}>'
```
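For example, with illustrative values filled in (the tenant ID, time range, and query here are placeholders for your own):

```bash
logcli query --remote-schema --store-config=./logcli-config.yaml \
  --schema-store="s3" \
  --from="2022-09-21T09:00:00Z" \
  --to="2022-09-21T20:15:00Z" \
  --org-id=12345 \
  --output=jsonl \
  '{environment="prod"}'
```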
When using the `--remote-schema` parameter, LogCLI reads the `<tenant>_schemaconfig.yaml` file from the customer bucket.
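That file follows Loki's standard `schema_config` format. A hypothetical illustration (the actual file in your bucket is generated for you, and its dates, stores, and schema versions will differ):

```yaml
schema_config:
  configs:
    - from: 2022-01-01
      store: tsdb
      object_store: s3
      schema: v12
      index:
        prefix: index_
        period: 24h
```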
## Query the archive using Loki
The archived logs can be queried with Loki in read-only mode either in a monolithic or a simple scalable deployment.
### Monolithic deployment
To query the logs from the target storage, run Loki with the CLI argument `-target=querier`. The querier is a Loki component that only reads data from storage and does not write anything, so it cannot modify the content of the archived logs.
To run the querier on a local machine, you can use a docker-compose setup that contains Loki and Grafana, or you can deploy a similar configuration to your Kubernetes cluster, VM, or dedicated server.
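For example, on a VM or dedicated server, the equivalent is to point the Loki binary at the configuration file created in step 1 below (the path is illustrative):

```bash
loki -config.file=./loki-query-archive.yaml -target=querier
```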
> **Note:** This docker-compose setup contains an increased timeout: the `querier.query_timeout` property is set to `10m` on the Loki side, and `timeout: 600` on the Grafana data source side. You might need to increase these values if your query processes a large amount of data. It might also be necessary to raise the default limits if you reach them.
1. Create a Loki configuration file called `loki-query-archive.yaml` using a base configuration like the sketch below.
   - Copy the content of the `schemaconfig.yaml` file that is synced to your bucket and add it to the end of the base configuration file.
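   A minimal sketch of what the base configuration might contain, assuming S3 storage (reuse the `storage_config` values from the LogCLI examples above; the actual base configuration example may differ):

   ```yaml
   auth_enabled: true          # queries must send your tenant ID as X-Scope-OrgID
   server:
     http_listen_port: 3100
   common:
     path_prefix: /loki
     replication_factor: 1
     ring:
       kvstore:
         store: inmemory
   querier:
     query_timeout: 10m        # the increased timeout mentioned in the note above
   storage_config:
     tsdb_shipper:
       active_index_directory: /loki/index
       cache_location: /loki/index_cache
     aws:
       bucketnames: <name of the customer bucket where the archive is stored>
       region: <aws region of the customer bucket>
   # Paste the content of the schemaconfig.yaml from your bucket below this line.
   ```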
2. Create a `docker-compose.query-archive.yaml` file (a sketch follows the sub-steps below).
   - Update the `docker-compose.query-archive.yaml` file to replace the `{{TENANT_ID}}` placeholder with your tenant ID.
   - Replace `./fixtures/loki-query-archive.yaml` in the Loki volumes with the path to the Loki configuration file you created in Step 1.
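   A minimal sketch of what the file might look like; the image tags, ports, and provisioning approach are illustrative, but the `{{TENANT_ID}}` placeholder and the `./fixtures/loki-query-archive.yaml` volume path match the sub-steps above:

   ```yaml
   version: "3"
   services:
     loki:
       image: grafana/loki:latest
       command: ["-config.file=/etc/loki/config.yaml", "-target=querier"]
       volumes:
         - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml
       ports:
         - "3100:3100"

     grafana:
       image: grafana/grafana:latest
       ports:
         - "3000:3000"
       environment:
         - GF_AUTH_ANONYMOUS_ENABLED=true
         - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
       # Provision a Loki data source that sends the tenant ID with every query
       # and uses the increased timeout mentioned in the note above.
       entrypoint:
         - sh
         - -euc
         - |
           mkdir -p /etc/grafana/provisioning/datasources
           cat <<EOF > /etc/grafana/provisioning/datasources/loki.yaml
           apiVersion: 1
           datasources:
             - name: Loki Archive
               type: loki
               access: proxy
               url: http://loki:3100
               jsonData:
                 timeout: 600
                 httpHeaderName1: X-Scope-OrgID
               secureJsonData:
                 httpHeaderValue1: "{{TENANT_ID}}"
           EOF
           /run.sh
   ```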
3. Run the following command to start the querier and Grafana:

   ```bash
   docker-compose -f docker-compose.query-archive.yaml up -d
   ```

4. Launch a browser and navigate to http://localhost:3000 to view Grafana.
5. Navigate to the Explore page, select the Loki data source, and query the archived log data.
### Simple Scalable deployment
In cases where a monolithic deployment is not sufficient due to the amount of data being queried (either because of large query time ranges or high log volume), Loki can also be deployed in Simple Scalable read-only mode. This deployment mode has the advantage that the query frontend is part of the `read` target, which enables query splitting and query sharding.

To set up a read-only deployment, the query path requires both the `read` and the `backend` targets.
1. Start by following steps 1 and 2 from the monolithic deployment above.
2. Update the settings in the `loki-query-archive.yaml` configuration file:

   ```yaml
   common:
     compactor_address: http://backend:3100

   compactor:
     # CLI flag: -compactor.retention-enabled=false
     retention_enabled: false
     # make sure compaction does not run
     # CLI flag: -compactor.compaction-interval=8760h
     compaction_interval: 8760h
   ```
   Since the `backend` target contains the compactor, retention and compaction need to be disabled.

3. Update the `docker-compose.query-archive.yaml` file and replace the `loki` service with two new services named `loki-read` and `loki-backend`, changing the `-target` command line argument to `read` and `backend`, respectively (see the sketch after this list).
respectively.Finally, make make sure that the Loki data source URL in the entrypoint of the
grafana
service points tohttp://loki-backend:3100
rather thanhttp://loki:3100
.Run the docker-compose command and open Grafana in the browser to query the exported logs.
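A minimal sketch of how the two services might replace the single `loki` service in `docker-compose.query-archive.yaml` (the image tag and ports are illustrative; depending on your Loki version, the `read` and `backend` targets may also need memberlist/ring settings in the configuration file so they can find each other):

```yaml
services:
  loki-read:
    image: grafana/loki:latest
    command: ["-config.file=/etc/loki/config.yaml", "-target=read"]
    volumes:
      - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml

  loki-backend:
    image: grafana/loki:latest
    command: ["-config.file=/etc/loki/config.yaml", "-target=backend"]
    volumes:
      - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml
    ports:
      - "3100:3100"   # the Grafana data source URL becomes http://loki-backend:3100
```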