This is archived documentation for v2.1.x.
# Grafana Mimir querier
The querier is a stateless component that evaluates PromQL expressions by fetching time series and labels on the read path.
## How it works
To find the correct blocks to look up at query time, the querier requires an almost up-to-date view of the bucket in long-term storage. The querier performs one of the following actions to ensure that the bucket view is updated:
- Periodically download the bucket index (default)
- Periodically scan the bucket
Queriers do not need any content from blocks except their metadata, which includes the minimum and maximum timestamp of samples within the block.
### Bucket index enabled (default)
Queriers lazily download the bucket index when they receive the first query for a given tenant. The querier caches the bucket index in memory and periodically keeps it up-to-date.
The bucket index contains a list of blocks and block deletion marks of a tenant. The querier later uses the list of blocks and block deletion marks to locate the set of blocks that need to be queried for the given query.
When the querier runs with the bucket index enabled, the querier startup time and the volume of API calls to object storage are reduced. We recommend that you keep the bucket index enabled.
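The lazy-download-and-refresh behavior can be sketched as follows. This is an illustrative model, not Mimir's actual Go implementation; the `fetch_index` callback and the refresh interval are assumptions:

```python
import time


class BucketIndexCache:
    """Sketch: queriers lazily load a tenant's bucket index on the first
    query for that tenant, then keep the cached copy up-to-date by
    re-downloading it once it is older than the refresh interval."""

    def __init__(self, fetch_index, refresh_interval=15 * 60):
        self._fetch = fetch_index          # callable: tenant_id -> bucket index
        self._interval = refresh_interval  # seconds between refreshes (assumed value)
        self._cache = {}                   # tenant_id -> (index, loaded_at)

    def get(self, tenant_id, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(tenant_id)
        if entry is None or now - entry[1] >= self._interval:
            # First query for this tenant, or the cached copy is stale:
            # download the bucket index from object storage.
            self._cache[tenant_id] = (self._fetch(tenant_id), now)
        return self._cache[tenant_id][0]
```

The point of the cache is that serving a query never requires iterating the bucket: at most one object (the index) is fetched, and only when stale.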
### Bucket index disabled
When the bucket index is disabled, queriers iterate over the storage bucket to discover blocks for all tenants and download the `meta.json` of each block. During this initial bucket-scanning phase, a querier cannot process incoming queries, and its `/ready` readiness probe endpoint does not return the HTTP status code `200 OK`.
When running, queriers periodically iterate over the storage bucket to discover new tenants and recently uploaded blocks.
## Anatomy of a query request
When a querier receives a query range request, the request contains the following parameters:

- `query`: the PromQL query expression (for example, `rate(node_cpu_seconds_total[5m])`)
- `start`: the start time
- `end`: the end time
- `step`: the query resolution (for example, `30` yields one data point every 30 seconds)
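The relationship between these parameters can be made concrete with a small sketch, assuming integer Unix timestamps in seconds:

```python
def evaluation_steps(start: int, end: int, step: int) -> list[int]:
    """Timestamps at which a range query is evaluated: one data point
    every `step` seconds, from `start` up to and including `end`."""
    if step <= 0:
        raise ValueError("step must be positive")
    return list(range(start, end + 1, step))
```

For example, a two-minute range with `step=30` yields five evaluation points, one every 30 seconds.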
For each query, the querier analyzes the `start` and `end` time range to compute a list of all known blocks containing at least one sample within the time range.
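This block-selection step is an interval-overlap test against the min/max sample timestamps stored in each block's metadata. A sketch (field names are assumptions, and both bounds are treated as inclusive here):

```python
def overlapping_blocks(blocks: list[dict], start: int, end: int) -> list[dict]:
    """Keep only the blocks whose [min_t, max_t] sample range
    intersects the query's [start, end] time range."""
    return [b for b in blocks if b["min_t"] <= end and b["max_t"] >= start]
```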
For each list of blocks per query, the querier computes a list of store-gateway instances holding the blocks. The querier sends a request to each matching store-gateway instance to fetch all samples for the series matching the `query` within the `start` and `end` time range.
The request sent to each store-gateway contains the list of block IDs that are expected to be queried, and the response sent back by the store-gateway to the querier contains the list of block IDs that were queried. This list of block IDs might be a subset of the requested blocks, for example, when a recent blocks-resharding event occurs within the last few seconds.
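Conceptually, the querier finds the store-gateways for a block by hashing the block onto a ring of instances and taking the next replicas. The sketch below is a drastic simplification of Mimir's hash ring (the real ring uses registered tokens, zone-awareness, and instance state, none of which are modeled here):

```python
import hashlib


def store_gateways_for_block(block_id: str, instances: list[str],
                             replication_factor: int = 3) -> list[str]:
    """Pick the store-gateway replicas that own a block by walking a
    deterministically ordered ring of instances, starting at the
    position derived from the block ID's hash."""
    ring = sorted(instances, key=lambda i: hashlib.sha256(i.encode()).hexdigest())
    start = int(hashlib.sha256(block_id.encode()).hexdigest(), 16) % len(ring)
    n = min(replication_factor, len(ring))
    return [ring[(start + k) % len(ring)] for k in range(n)]
```

Because the mapping is deterministic, every querier computes the same owners for a given block without coordination, as long as all queriers share the same view of the ring.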
The querier runs a consistency check on responses received from the store-gateways to ensure all expected blocks have been queried.
If the expected blocks have not been queried, the querier retries fetching samples for the missing blocks from different store-gateways, up to `-store-gateway.sharding-ring.replication-factor` times (defaults to 3) or a maximum of 3 times, whichever is lower.
If the consistency check fails after all retry attempts, the query execution fails. Query failure due to the querier not querying all blocks ensures the correctness of query results.
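The retry-until-consistent logic can be sketched as follows. This is an illustrative model: each element of `attempts` stands for the set of block IDs one store-gateway round actually reported as queried:

```python
def fetch_with_consistency_check(expected_blocks: set, attempts: list) -> str:
    """Track which expected blocks remain uncovered after each round of
    store-gateway responses; fail the query if retries are exhausted
    while some block was never queried."""
    missing = set(expected_blocks)
    for queried in attempts:       # one entry per (re)try round
        missing -= set(queried)
        if not missing:
            return "success"
    raise RuntimeError(f"consistency check failed, blocks not queried: {sorted(missing)}")
```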
If the query time range overlaps with the `-querier.query-ingesters-within` duration, the querier also sends the request to all ingesters.
The request to the ingesters fetches samples that have not yet been uploaded to the long-term storage or are not yet available for querying through the store-gateway.
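The overlap test itself is simple; a sketch, assuming Unix-second timestamps and assuming that a value of zero disables the cutoff so that ingesters are always queried:

```python
def should_query_ingesters(query_end: int, now: int,
                           query_ingesters_within: int) -> bool:
    """Query the ingesters only when the query's time range reaches into
    the recent window that may not yet be queryable via store-gateways."""
    if query_ingesters_within == 0:
        return True  # assumption: zero means no cutoff is applied
    return query_end >= now - query_ingesters_within
```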
After all samples have been fetched from both the store-gateways and the ingesters, the querier runs the PromQL engine to execute the query and sends back the result to the client.
## Connecting to store-gateways
You must configure the queriers with the same `-store-gateway.sharding-ring.*` flags (or their respective YAML configuration parameters) that you use to configure the store-gateways, so that the querier can access the store-gateway hash ring and discover the addresses of the store-gateways.
## Connecting to ingesters
You must configure the querier with the same `-ingester.ring.*` flags (or their respective YAML configuration parameters) that you use to configure the ingesters, so that the querier can access the ingester hash ring and discover the addresses of the ingesters.
## Caching

The querier supports the following cache:

- Metadata cache

Caching is optional, but highly recommended in a production environment.
### Metadata cache

Store-gateways and queriers can use Memcached to cache the following bucket metadata:
- List of tenants
- List of blocks per tenant
- `meta.json` existence and content
- `deletion-mark.json` existence and content
Using the metadata cache reduces the number of API calls to long-term storage and prevents the number of API calls from scaling linearly with the number of querier and store-gateway replicas.
To enable the metadata cache, set `-blocks-storage.bucket-store.metadata-cache.backend`.

Note: Currently, only the `memcached` backend is supported. The Memcached client includes additional configuration available via flags that begin with the prefix `-blocks-storage.bucket-store.metadata-cache.memcached.*`.
Additional flags for configuring the metadata cache begin with the prefix `-blocks-storage.bucket-store.metadata-cache.*`. Setting the TTL to zero or a negative value disables caching of the given item type.
Note: The same Memcached backend cluster should be shared between store-gateways and queriers.
For details about querier configuration, refer to querier.