Troubleshoot log queries (READ)
This guide helps you troubleshoot errors that occur when querying logs from Loki. When Loki rejects a query or fails to complete it, the cause is typically a query syntax error, an exceeded limit, a timeout, or a storage access problem.
Before you begin, ensure you have the following:
- Access to Grafana Loki logs and metrics
- Understanding of LogQL query language basics
- Permissions to configure limits and settings if needed
Monitoring query errors
Query errors can be observed using these Prometheus metrics:
- `loki_request_duration_seconds` - Query latency by route and status code
- `loki_logql_querystats_bytes_processed_per_seconds` - Bytes processed during queries
- `loki_frontend_query_range_duration_seconds_bucket` - Frontend query latency
You can set up alerts on 4xx and 5xx status codes to detect query problems early. This is also helpful when tuning your limits configuration.
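For example, here is a minimal Prometheus alerting rule sketch that fires on a sustained rate of 5xx responses from the query endpoints. The route regex, threshold, and durations are assumptions; adjust them to the routes and error budget of your deployment.

```yaml
groups:
  - name: loki-query-errors
    rules:
      - alert: LokiQueryErrors
        # Rate of 5xx responses on the query routes over the last 5 minutes.
        # The route label values are assumptions; check your metrics for the exact names.
        expr: |
          sum by (route) (
            rate(loki_request_duration_seconds_count{route=~"loki_api_v1_query.*", status_code=~"5.."}[5m])
          ) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Loki queries on {{ $labels.route }} are returning 5xx errors."
```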
LogQL parse errors
Parse errors occur when the LogQL query syntax is invalid. Loki returns HTTP status code 400 Bad Request for all parse errors.
Error: Failed to parse the log query
Error message:
failed to parse the log query
Or with position details:
parse error at line <line>, col <col>: <message>
Cause:
The LogQL query contains syntax errors. This could be due to:
- Missing or mismatched brackets, quotes, or braces
- Invalid characters or operators
- Incorrect function syntax
- Invalid duration format
Resolution:
Start with a simple stream selector, for example `{job="app"}`, then add filters and operations incrementally to identify syntax issues.
- Check bracket matching - Ensure all `{`, `}`, `(`, `)`, `[`, `]` are properly closed.
- Verify string quoting - All label values and filter strings must be quoted.
- Use valid duration units - Use `ns`, `us`, `ms`, `s`, `m`, `h`, `d`, `w`, `y`, for example, `5m` not `5minutes`.
- Review operator syntax - Ensure label matchers use proper operators (`=`, `!=`, `=~`, `!~`). Check the LogQL documentation for correct operator usage.
- Use Grafana Assistant - If you are a Cloud Logs user, you can use Grafana Assistant to write or revise your query using natural language, for example, “What errors occurred for application foo in the last hour?”
Properties:
- Enforced by: Query Frontend/Querier
- Retryable: No (query must be fixed)
- HTTP status: 400 Bad Request
- Configurable per tenant: No
Error: At least one equality matcher required
Error message:
parse error : queries require at least one regexp or equality matcher that does not have an empty-compatible value. For instance, app=~".*" does not meet this requirement, but app=~".+" will
Cause:
The query uses only negative matchers (!=, !~) or matchers that match empty strings (=~".*"), which would select all streams. This is prevented to protect against accidentally querying the entire database.
Invalid examples:
```logql
{foo!="bar"}
{app=~".*"}
{foo!~"bar|baz"}
```
Valid examples:
```logql
{foo="bar"}
{app=~".+"}
{app="baz", foo!="bar"}
```
Resolution:
- Add at least one positive matcher that selects specific streams.
- Use `.+` instead of `.*` in regex matchers to require at least one character.
- Add additional label selectors to narrow down the query scope.
- Use Grafana Assistant - If you are a Cloud Logs user, you can use Grafana Assistant to write or revise your query using natural language, for example, “Find logs containing ’foo’ but not ‘bar’ or ‘baz’.”
Properties:
- Enforced by: Query Frontend
- Retryable: No (query must be fixed)
- HTTP status: 400 Bad Request
- Configurable per tenant: No
Error: Only label matchers are supported
Error message:
only label matchers are supported
Cause:
The query was passed to an API that only accepts label matchers (like the series API), but included additional expressions like line filters or parsers.
Resolution:
Use only stream selectors for APIs that don’t support full LogQL:
```logql
# Valid for series API
{app="foo", env="prod"}

# Invalid for series API
{app="foo"} |= "error"
```
Properties:
- Enforced by: API handler
- Retryable: No (query must be fixed)
- HTTP status: 400 Bad Request
- Configurable per tenant: No
Error: Log queries not supported as instant query type
Error message:
log queries are not supported as an instant query type, please change your query to a range query type
Cause:
A log query (one that returns log lines rather than metrics) was submitted to the instant query endpoint (`/loki/api/v1/query`). Log queries must use the range query endpoint.
Resolution:
Convert to a range query - Convert log queries to range queries with a time range. Range queries are the default in Grafana Explore. Use the range query endpoint `/loki/api/v1/query_range` for log queries.

Convert to a metric query if you need to use instant queries:

```logql
# This is a log query (returns logs)
{app="foo"} |= "error"

# This is a metric query (can be instant)
count_over_time({app="foo"} |= "error"[5m])
```

Use Grafana Assistant - If you are a Cloud Logs user, you can use Grafana Assistant to write or revise your query.
Properties:
- Enforced by: Query API
- Retryable: No (use correct endpoint or query type)
- HTTP status: 400 Bad Request
- Configurable per tenant: No
Error: Invalid aggregation without unwrap
Error message:
parse error : invalid aggregation sum_over_time without unwrap
Cause:
Aggregation functions like sum_over_time, avg_over_time, min_over_time, max_over_time require an unwrap expression to extract a numeric value from log lines.
Resolution:
Add an unwrap expression to extract the numeric label:
```logql
# Invalid
sum_over_time({app="foo"} | json [5m])

# Valid - unwrap a numeric label
sum_over_time({app="foo"} | json | unwrap duration [5m])
```
Properties:
- Enforced by: Query Parser
- Retryable: No (query must be fixed)
- HTTP status: 400 Bad Request
- Configurable per tenant: No
Error: Invalid aggregation with unwrap
Error message:
parse error : invalid aggregation count_over_time with unwrap
Cause:
The count_over_time function doesn’t use unwrapped values - it just counts log lines. Using it with unwrap is invalid.
Resolution:
Remove the unwrap expression for count_over_time:
```logql
# Invalid
count_over_time({app="foo"} | json | unwrap duration [5m])

# Valid
count_over_time({app="foo"} | json [5m])
```

Use `sum_over_time` if you want to sum unwrapped values.
Properties:
- Enforced by: Query Parser
- Retryable: No (query must be fixed)
- HTTP status: 400 Bad Request
- Configurable per tenant: No
Query limit errors
These errors occur when queries exceed configured resource limits. They return HTTP status code 400 Bad Request.
Error: Maximum series reached
Error message:
maximum number of series (<limit>) reached for a single query; consider reducing query cardinality by adding more specific stream selectors, reducing the time range, or aggregating results with functions like sum(), count() or topk()
Cause:
The query matches more unique label combinations (series) than the configured limit allows. This protects against queries that would consume excessive memory.
Default configuration:
max_query_series: 500 (default)
Resolution:
Add more specific stream selectors to reduce cardinality:
```logql
# Too broad
{job="ingress-nginx"}

# More specific
{job="ingress-nginx", namespace="production", pod=~"ingress-nginx-.*"}
```

Reduce the time range of the query.

Use line or label filters to narrow down results:

```logql
{job="app"} |= "error"
```

Use aggregation functions to reduce cardinality:

```logql
sum by (status) (rate({job="nginx"} | json [5m]))
```

Increase the limit if resources allow:

```yaml
limits_config:
  max_query_series: 1000 # default is 500
```
Properties:
- Enforced by: Query Frontend
- Retryable: No (query must be modified)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
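Where a property is listed as configurable per tenant, the value is usually set in Loki's runtime overrides file (referenced with `-runtime-config.file`) rather than in the global `limits_config`. A minimal sketch, assuming a placeholder tenant ID of `tenant-a`:

```yaml
# Runtime overrides file: values here apply only to the named tenant
# and take precedence over the global limits_config defaults.
overrides:
  tenant-a:
    max_query_series: 1000
    max_entries_limit_per_query: 10000
```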
Error: Cardinality issues
Error message:
cardinality limit exceeded for {}; 100001 entries, more than limit of 100000
Cause:
The query produces results with too many unique label combinations. This protects against queries that would generate excessive memory usage and slow performance.
Default configuration:
cardinality_limit: 100000
Resolution:
Use more specific label selectors to reduce the number of unique streams.
Apply aggregation functions to reduce cardinality:
```logql
sum by (status) (rate({job="nginx"}[5m]))
```

Use `by()` or `without()` clauses to group results and reduce dimensions:

```logql
sum by (status, method) (rate({job="nginx"} | json [5m]))
```

Another alternative is using `drop` or `keep` to reduce the number of labels and hence the cardinality:

```logql
# Drop high-cardinality labels like request_id or trace_id
{job="nginx"} | json | drop request_id, trace_id, session_id

# Keep only the labels you need
{job="nginx"} | json | keep status, method, path
```

Increase the limit if needed:

```yaml
limits_config:
  cardinality_limit: 200000 # default is 100000
```
Properties:
- Enforced by: Query Engine
- Retryable: No (query must be modified)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Error: Max entries limit per query exceeded
Error message:
max entries limit per query exceeded, limit > max_entries_limit_per_query (<requested> > <limit>)
Cause:
The query requests more log entries than the configured maximum. This applies to log queries (not metric queries).
Default configuration:
max_entries_limit_per_query: 5000
Resolution:
Reduce the limit parameter in your query request.
Add more specific filters to return fewer results:
{app="foo"} |= "error"Reduce the time range of the query.
Increase the limit if needed:
limits_config: max_entries_limit_per_query: 10000 #default is 5000
Properties:
- Enforced by: Querier/Query Frontend
- Retryable: No (query must be modified)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Error: Query would read too many bytes
Error message:
the query would read too many bytes (query: <size>, limit: <limit>); consider adding more specific stream selectors or reduce the time range of the query
Cause:
The estimated data volume for the query exceeds the configured limit. This is determined before query execution using index statistics.
Default configuration:
max_query_bytes_read: 0B (disabled by default)
Resolution:
Add more specific stream selectors to reduce data volume.
Reduce the time range of the query.
Increase the limit if resources allow:
```yaml
limits_config:
  max_query_bytes_read: 10GB
```
Properties:
- Enforced by: Query Frontend
- Retryable: No (query must be modified)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Error: Too many chunks (count)
Error message:
the query hit the max number of chunks limit (limit: 2000000 chunks)
Cause:
The number of chunks that the query would read exceeds the configured limit. This protects against queries that would scan excessive amounts of data and consume too much memory.
Default configuration:
max_chunks_per_query: 2000000
Resolution:
Narrow stream selectors to reduce the number of matching chunks:
# Too broad {job="app"} # More specific {job="app", environment="production", namespace="api"}Reduce the query time range to scan fewer chunks.
Increase the limit if resources allow:
limits_config: max_chunks_per_query: 5000000 #default is 2000000
Properties:
- Enforced by: Query Frontend
- Retryable: No (query must be modified)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Error: Stream matcher limits
Error message:
max streams matchers per query exceeded, matchers-count > limit (1500 > 1000)
Cause:
The query contains too many stream matchers. This limit prevents queries with excessive complexity that could impact query performance.
Default configuration:
max_streams_matchers_per_query: 1000
Resolution:
Simplify your query by using fewer label matchers.
Combine multiple queries instead of using many OR conditions.
Use regex matchers to consolidate multiple values:
```logql
# Good: 3 matchers using regex patterns
{cluster="prod", namespace=~"api|web", pod=~"nginx-.*"}
```

Increase the limit if needed:

```yaml
limits_config:
  max_streams_matchers_per_query: 2000 # default is 1000
```
Properties:
- Enforced by: Querier
- Retryable: No (query must be modified)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Error: Query too large for single querier
Error message:
query too large to execute on a single querier: (query: <size>, limit: <limit>); consider adding more specific stream selectors, reduce the time range of the query, or adjust parallelization settings
Or for un-shardable queries:
un-shardable query too large to execute on a single querier: (query: <size>, limit: <limit>); consider adding more specific stream selectors or reduce the time range of the query
Cause:
Even after query splitting and sharding, individual query shards exceed the per-querier byte limit.
Default configuration:
max_querier_bytes_read: 150GB (per querier)
Resolution:
Add more specific stream selectors.
Reduce the time range, or break large queries into smaller time ranges.
Simplify the query if possible - some queries cannot be sharded.
Increase the limit (requires more querier resources):
```yaml
limits_config:
  max_querier_bytes_read: 200GB # default is 150GB
```

Scale querier resources.
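The error message also suggests adjusting parallelization settings. A sketch of the commonly tuned options in `limits_config` follows; the values shown are illustrative, not recommendations:

```yaml
limits_config:
  # Smaller intervals split a request into more, smaller sub-queries.
  split_queries_by_interval: 30m
  # Upper bound on sub-queries a single request may run in parallel.
  max_query_parallelism: 32
```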
Properties:
- Enforced by: Query Frontend
- Retryable: No (query must be modified)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Error: Interval value exceeds limit
Error message:
[interval] value exceeds limit
Cause:
The range vector interval (in brackets like [5m]) exceeds configured limits.
Resolution:
Reduce the range interval in your query:
```logql
# If [1d] is too large, try smaller intervals
rate({app="foo"}[1h])
```

Check your configuration for `max_query_length` limits. The default is `30d1h`.
Properties:
- Enforced by: Query Engine
- Retryable: No (query must be modified)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Time range errors
These errors relate to the time range specified in queries.
Error: Query time range exceeds limit
Error message:
the query time range exceeds the limit (query length: <duration>, limit: <limit>)
Cause:
The difference between the query’s start and end time exceeds the maximum allowed query length.
Default configuration:
max_query_length: 721h (30 days + 1 hour)
Resolution:
Reduce the query time range:
```bash
# Instead of querying 60 days
logcli query '{app="foo"}' --since=1440h

# Query 30 days or less
logcli query '{app="foo"}' --since=720h
```

Increase the limit if storage retention supports it:

```yaml
limits_config:
  max_query_length: 2160h # 90 days
```
Properties:
- Enforced by: Query Frontend/Querier
- Retryable: No (query must be modified)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Error: Data is no longer available
Error message:
this data is no longer available, it is past now - max_query_lookback (<duration>)
Cause:
The entire query time range falls before `now - max_query_lookback`. This happens when trying to query data older than the configured lookback period.
Default configuration:
max_query_lookback: 0 (The default value of 0 does not set a limit.)
Resolution:
Query more recent data within the lookback window.
Adjust the lookback limit if the data should be queryable:
```yaml
limits_config:
  max_query_lookback: 8760h # 1 year
```

Caution: The lookback limit should not exceed your retention period.
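For example, with compactor-based retention you can keep the two values aligned in `limits_config`; the retention value below is illustrative:

```yaml
limits_config:
  retention_period: 8760h   # data kept for 1 year
  max_query_lookback: 8760h # lookback never exceeds retention
```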
Properties:
- Enforced by: Query Frontend/Querier
- Retryable: No
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Error: Invalid query time range
Error message:
invalid query, through < from (<end> < <start>)
Cause:
The query end time is before the start time, which is invalid.
Resolution:
- Swap start and end times if they were reversed.
- Check timestamp formats to ensure times are correctly specified.
Properties:
- Enforced by: Query Frontend/Querier
- Retryable: No (query must be fixed)
- HTTP status: 400 Bad Request
- Configurable per tenant: No
Required labels errors
These errors occur when queries don’t meet configured label requirements.
Error: Missing required matchers
Error message:
stream selector is missing required matchers [<required_labels>], labels present in the query were [<present_labels>]
Cause:
The tenant is configured to require certain label matchers in all queries, but the query doesn’t include them.
Default configuration:
required_labels: [] (none required by default)
Resolution:
Check with your administrator about which labels are required.
Add the required labels to your query:
```logql
# If 'namespace' is required
{app="foo", namespace="production"}
```
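If you administer Loki, the requirement itself comes from the `required_labels` setting. A minimal sketch requiring a `namespace` matcher on every query (the label name is only an example):

```yaml
limits_config:
  required_labels:
    - namespace
```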
Properties:
- Enforced by: Query Frontend
- Retryable: No (query must include required labels)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Error: Not enough label matchers
Error message:
stream selector has less label matchers than required: (present: [<labels>], number_present: <count>, required_number_label_matchers: <required>)
Cause:
The tenant is configured to require a minimum number of label matchers, but the query has fewer.
Default configuration:
minimum_labels_number: 0 (no minimum by default)
Resolution:
Add more label matchers to meet the minimum requirement:
```logql
# If minimum is 2, add another selector
{app="foo", namespace="production"}
```
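The corresponding server-side setting is `minimum_labels_number`. A minimal sketch requiring at least two matchers; set it in the runtime overrides file instead if only some tenants need it:

```yaml
limits_config:
  minimum_labels_number: 2
```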
Properties:
- Enforced by: Query Frontend
- Retryable: No (query must meet requirements)
- HTTP status: 400 Bad Request
- Configurable per tenant: Yes
Timeout errors
Timeout errors occur when queries take too long to execute.
Error: Request timed out
Error message:
request timed out, decrease the duration of the request or add more label matchers (prefer exact match over regex match) to reduce the amount of data processed
Or:
context deadline exceeded
Cause:
The query exceeded the configured timeout. This can happen due to:
- Large time ranges
- High cardinality queries
- Complex query expressions
- Insufficient cluster resources
- Network issues
Default configuration:
- `query_timeout: 1m`
- `server.http_server_read_timeout: 30s`
- `server.http_server_write_timeout: 30s`
Resolution:
Reduce the time range of the query.
Add more specific filters to reduce data processing:
```logql
# Less specific (slower)
{namespace=~"prod.*"}

# More specific (faster)
{namespace="production"}
```

Prefer exact matchers over regex when possible.

Add line filters early in the pipeline:

```logql
{app="foo"} |= "error" | json | level="error"
```

Increase timeout limits (if resources allow):

```yaml
limits_config:
  query_timeout: 5m
server:
  http_server_read_timeout: 5m
  http_server_write_timeout: 5m
```

Use sampling for exploratory queries:

```logql
{job="app"} | line_format "{{__timestamp__}} {{.msg}}" | sample 0.1
```

Check for network issues between components.
Properties:
- Enforced by: Query Frontend/Querier
- Retryable: Yes (with modifications)
- HTTP status: 504 Gateway Timeout
- Configurable per tenant: Yes (query_timeout)
Error: Request cancelled by client
Error message:
the request was cancelled by the client
Cause:
The client closed the connection before receiving a response. This is typically caused by:
- Client-side timeout
- User navigating away in Grafana
- Network interruption
Resolution:
- Increase client timeout in Grafana or LogCLI.
- Optimize the query to return faster.
- Check network connectivity between client and Loki.
Properties:
- Enforced by: Client
- Retryable: Yes
- HTTP status: 499 Client Closed Request
- Configurable per tenant: No



