Grafana Mimir version 2.10 release notes

Grafana Labs is excited to announce version 2.10 of Grafana Mimir.

The highlights that follow include the top features, enhancements, and bugfixes in this release. For the complete list of changes, see the changelog.

Features and enhancements

  • Added support for filtering rules by passing the file, rule_group, and rule_name parameters to the ruler endpoint /api/v1/rules (see the example after this list).
  • Added support for counting only active series through the Cardinality API endpoint /api/v1/cardinality/label_values by passing the new count_method parameter. Set it to active to count only series considered active according to the -ingester.active-series-metrics-idle-timeout flag, rather than all in-memory series, as shown in the sketch after this list.
  • Reduced the overall memory consumption by changing the internal data structure for labels. Expect ingesters to use around 15% less memory with this change, depending on the pattern of labels used, number of tenants, etc.
  • Reduced the memory usage of the Active Series Tracker in the ingester.
  • Added a buffered logging implementation that can be enabled through the -log.buffered CLI flag. This should reduce contention and resource usage under heavy load.
  • Improved the performance of OTLP ingestion and added more detailed information to traces to make troubleshooting easier.
  • Improved the performance of series matching in the store-gateway by always including the __name__ posting group, which reduces the number of object storage API calls.
  • Improved the performance of “label values with matchers” calls when the number of matched series is small. If you’re using Grafana to query Grafana Mimir, make sure your Prometheus data source configuration has the Prometheus type set to Mimir and the Version set correctly to benefit from this improvement.
  • Added support for caching cardinality, label names, and label values query responses in the query-frontend. The cache is used when -query-frontend.cache-results is enabled and -query-frontend.results-cache-ttl-for-cardinality-query or -query-frontend.results-cache-ttl-for-labels-query is set to a value greater than 0.
  • Reduced wasted effort spent computing results that won’t be used: queriers now cancel the requests sent to the ingesters in a zone upon receiving the first error from that zone.
  • Reduced object storage use by enhancing the compactor to remove the bucket index, markers, and debug files when it detects zero remaining blocks in the bucket index. This cleanup process can be enabled by setting the -compactor.no-blocks-file-cleanup-enabled option to true.
  • Added new debug HTTP endpoints /ingester/tenants and /ingester/tsdb/{tenant} to the ingester that provide debug information about tenants and their TSDBs.
  • Added new metrics for tracking native histograms in active series: cortex_ingester_active_native_histogram_series, cortex_ingester_active_native_histogram_series_custom_tracker, cortex_ingester_active_native_histogram_buckets, and cortex_ingester_active_native_histogram_buckets_custom_tracker. The first two are subsets of the existing and unmodified cortex_ingester_active_series and cortex_ingester_active_series_custom_tracker respectively, tracking only native histogram series; the last two are the equivalents for tracking the number of buckets in native histogram series.
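
The ruler filtering and cardinality count_method additions above can be exercised with plain HTTP requests. The following Go sketch is illustrative only: it assumes a Mimir instance reachable at http://localhost:8080 with the ruler and cardinality APIs exposed under the default /prometheus HTTP prefix, a tenant named demo passed through the X-Scope-OrgID header, and hypothetical rule group and rule names; the label_names[] parameter belongs to the existing cardinality API and is not new in this release. Adjust these assumptions to match your deployment.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// get issues a GET request against a Mimir HTTP endpoint for the given
// tenant and prints the response.
func get(endpoint string, params url.Values, tenant string) error {
	req, err := http.NewRequest(http.MethodGet, endpoint+"?"+params.Encode(), nil)
	if err != nil {
		return err
	}
	// In multi-tenant setups, Mimir identifies the tenant via X-Scope-OrgID.
	req.Header.Set("X-Scope-OrgID", tenant)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fmt.Printf("%s -> %s\n%s\n", endpoint, resp.Status, body)
	return nil
}

func main() {
	// Assumed base URL and tenant; adjust to your deployment.
	const base = "http://localhost:8080/prometheus"
	const tenant = "demo"

	// Filter the ruler's rule listing by group and rule name
	// (file, rule_group, and rule_name are all optional filters).
	_ = get(base+"/api/v1/rules", url.Values{
		"rule_group": {"my-group"}, // hypothetical rule group
		"rule_name":  {"MyAlert"},  // hypothetical rule name
	}, tenant)

	// Count only active series when computing label value cardinality.
	_ = get(base+"/api/v1/cardinality/label_values", url.Values{
		"label_names[]": {"job"},
		"count_method":  {"active"},
	}, tenant)
}
```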

Additionally, the following previously experimental features are now considered stable:

  • Support for a ruler storage cache. This cache should reduce the number of “list objects” API calls issued to the object storage when there are 2+ ruler replicas running in a Mimir cluster. The cache can be configured by setting the -ruler-storage.cache.* CLI flags or their respective YAML config options.
  • Query sharding cardinality estimation. This feature allows query sharding to take into account the cardinality of similar, previously executed requests when computing the maximum number of shards to use. You can enable it through the advanced CLI configuration flag -query-frontend.query-sharding-target-series-per-shard; we recommend starting with a value of 2500.
  • Query expression size limit. You can limit the size in bytes of the queries allowed to be processed through the CLI configuration flag -query-frontend.max-query-expression-size-bytes.
  • Peer discovery / tenant sharding for overrides exporters. You can enable it through the CLI configuration flag -overrides-exporter.ring.enabled.
  • Overrides exporter enabled metrics selection. You can select which metrics the overrides exporter should export through the CLI configuration flag -overrides-exporter.enabled-metrics.
  • Per-tenant results cache TTL. The time-to-live duration for cached query results can be configured using the results_cache_ttl and results_cache_ttl_for_out_of_order_time_window parameters.

Experimental features

Grafana Mimir 2.10 includes new features that are considered experimental and disabled by default. Please use them with caution and report any issues you encounter:

  • Support for ingesting exponential histograms in OpenTelemetry format. Exponential histograms with a scale exceeding the native histogram scale limit of 8 are downscaled to allow their ingestion.
  • Store-gateway index-header loading improvements, which include: the ability to persist the sparse index-header to disk instead of reconstructing it on every restart (-blocks-storage.bucket-store.index-header-sparse-persistence-enabled); the ability to persist the list of block IDs that were lazy-loaded while running so that they can be eagerly loaded upon startup, preventing the store-gateway from starting with no loaded blocks (-blocks-storage.bucket-store.index-header.eager-loading-startup-enabled); and an option to limit the number of concurrent index-header loads when lazy-loading (-blocks-storage.bucket-store.index-header-lazy-loading-concurrency).
  • Option to allow queriers to reduce pressure on ingesters by initially querying only the minimum set of ingesters required to reach quorum. (-querier.minimize-ingester-requests).
  • Early TSDB Head compaction in the ingesters to reduce in-memory series when a certain threshold is reached. Useful for dealing with a high series churn rate. (-blocks-storage.tsdb.early-head-compaction-min-in-memory-series).
  • Spread-minimizing token generation algorithm for the ingesters. This new method drastically reduces the difference in series pushed to different ingesters. Please note that a migration process is required to switch from the previous random token generation algorithm; it will be documented once the feature is declared stable.
  • Support for streaming chunks from store-gateways to queriers, which should reduce memory usage in the queriers. It can be enabled through the -querier.prefer-streaming-chunks-from-store-gateways option.
  • Support for circuit-breaking the distributor write requests to the ingesters. This can be enabled through the -ingester.client.circuit-breaker.* configuration options and should help ingesters recover when under high pressure.
  • Support for limiting read requests based on CPU and memory utilization. This should alleviate pressure on the ingesters after receiving heavy queries and reduce the likelihood of disrupting the write path. (-ingester.read-path-cpu-utilization-limit, -ingester.read-path-memory-utilization-limit, -ingester.log-utilization-based-limiter-cpu-samples).

Helm chart improvements

The Grafana Mimir and Grafana Enterprise Metrics Helm chart is now released independently. See the Grafana Mimir Helm chart documentation.

Important changes

In Grafana Mimir 2.10 we have changed the following behaviors:

  • Query requests are initiated only to ingesters in the ACTIVE state in the ring. This is not expected to introduce any degradation in query result correctness or high availability.
  • Per-instance limit errors are no longer logged, to reduce resource usage when ingesters are under pressure. We encourage you to use metrics and alerting to monitor them instead; see the sketch after this list. The following metrics have been added to count the number of requests rejected for hitting per-instance limits:
    • cortex_distributor_instance_rejected_requests_total
    • cortex_ingester_instance_rejected_requests_total
  • The CLI flag -validation.create-grace-period is now enforced in the ingester. If you’ve configured -validation.create-grace-period, make sure the configuration is applied to ingesters too.
  • The CLI flag -validation.create-grace-period is now enforced for exemplars. The cortex_discarded_exemplars_total{reason="exemplar_too_far_in_future",user="..."} series is incremented when exemplars are dropped because their timestamp is greater than “now + grace_period”.
  • The CLI flag -validation.create-grace-period is now enforced in the query-frontend even when the configured value is 0. When the value is 0, the query end time range is truncated to the current real-world time.
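
Because these rejections no longer appear in logs, one way to keep an eye on them is to query the new counters through Mimir’s Prometheus-compatible query API, or to alert on them. The Go sketch below is illustrative only: the base URL (http://localhost:8080/prometheus) and the tenant (demo) are assumptions, and in practice you would typically wire the same PromQL expression into a dashboard or alerting rule rather than polling it from a program.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Assumed base URL and tenant; adjust to your deployment.
	const base = "http://localhost:8080/prometheus"
	const tenant = "demo"

	// Per-second rate of requests rejected by ingesters for hitting
	// per-instance limits, broken down by rejection reason.
	expr := `sum by (reason) (rate(cortex_ingester_instance_rejected_requests_total[5m]))`

	req, err := http.NewRequest(http.MethodGet,
		base+"/api/v1/query?"+url.Values{"query": {expr}}.Encode(), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Scope-OrgID", tenant) // tenant header for multi-tenant setups

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s\n%s\n", resp.Status, body)
}
```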

The following metrics were removed:

  • cortex_ingester_shipper_dir_syncs_total
  • cortex_ingester_shipper_dir_sync_failures_total

The following configuration options are deprecated and will be removed in Grafana Mimir 2.12:

  • The CLI flag -blocks-storage.bucket-store.index-header-lazy-loading-enabled is deprecated; use the new configuration -blocks-storage.bucket-store.index-header.lazy-loading-enabled instead.
  • The CLI flag -blocks-storage.bucket-store.index-header-lazy-loading-idle-timeout is deprecated; use the new configuration -blocks-storage.bucket-store.index-header.lazy-loading-idle-timeout instead.
  • The CLI flag -blocks-storage.bucket-store.index-header-lazy-loading-concurrency is deprecated; use the new configuration -blocks-storage.bucket-store.index-header.lazy-loading-concurrency instead.

The following configuration options that were deprecated in Grafana Mimir 2.8 have been removed:

  • The CLI flag -blocks-storage.tsdb.max-tsdb-opening-concurrency-on-startup.

The following experimental configuration options were renamed or removed:

  • The CLI flag -querier.prefer-streaming-chunks was renamed to -querier.prefer-streaming-chunks-from-ingesters.
  • The CLI flag -blocks-storage.bucket-store.chunks-cache.fine-grained-chunks-caching-enabled was removed.
  • The CLI flag -blocks-storage.bucket-store.fine-grained-chunks-caching-ranges-per-series was removed.

The following experimental options are now stable:

  • The CLI flag -shutdown-delay.
  • The CLI flag -ingester.ring.excluded-zones.

The following configuration option defaults were changed:

  • The default value for the CLI flag -querier.streaming-chunks-per-ingester-buffer-size was changed from 512 to 256.
  • The default connect timeout for gRPC clients was set to 5s (the default inherited from the gRPC client was 20s), with a default max backoff delay of 5s (the inherited default was 120s).

Bug fixes

  • Ruler: fixed graceful shutdown for rule evaluations.
  • Ingester: fixed ingesters getting stuck on restart when their previous ring state is LEAVING and the number of tokens has changed.
  • Querier: fixed the timestamp() function failing with “execution: attempted to read series at index 0 from stream, but the stream has already been exhausted” when the experimental feature to stream chunks from ingesters to queriers is enabled.
  • Memberlist: brought back the memberlist_client_kv_store_count metric that existed in Cortex but was lost during grafana/dskit updates before Mimir 2.0.
  • Store-gateway: fixed an issue where stopping a store-gateway could cause all store-gateways to unload all blocks.
  • Ingester: prevented setting the “last update time” of a TSDB in the future when opening it. Previously, if a sample with a timestamp in the distant future was ingested, an idle TSDB could go undetected for a long time.
  • General: changed the ballast to allocate smaller blocks to avoid the entire ballast being kept in the memory working set.

2.10.5

  • Security: updated the Alpine base image version to 3.18.5 to address CVE-2023-5363. PR 6897
  • Bugfix: Ingester: fixed possible series matcher corruption leading to wrong series being included in query results. PR 6886

2.10.4

  • Bugfix: updated the otelhttp library to v0.44.0 as a mitigation for CVE-2023-45142. PR 6634

2.10.3

  • Bugfix: updated the grpc-go library to 1.57.2-dev, which includes a fix for a bug introduced in 1.57.1. PR 6419

2.10.2

Note: This release contains a known performance regression that is fixed in 2.10.3.

  • Bugfix: updated the grpc-go library to 1.57.1 and golang.org/x/net to 0.17, which include the fix for CVE-2023-44487. PR 6349

2.10.1

  • Security: updated the Go version to 1.21.3. PR 6244 PR 6325
  • Bugfix: Query-frontend: don’t retry read requests rejected by the ingester due to utilization-based read-path limiting. PR 6032
  • Bugfix: Ingester: fixed a panic in WAL replay of certain native histograms. PR 6086