Grafana Mimir version 2.3 release notes
Grafana Labs is excited to announce version 2.3 of Grafana Mimir, the most scalable, most performant open source time series database in the world.
The highlights that follow include the top features, enhancements, and bugfixes in this release. For the complete list of changes, see the changelog.
Note: If you are upgrading from Grafana Mimir 2.2, review the important changes that follow.
Features and enhancements
Ingest metrics in OpenTelemetry format: This release of Grafana Mimir introduces experimental support for ingesting metrics from the OpenTelemetry Collector's `otlphttpexporter`. This adds a second ingestion option for users of the OTel Collector; Mimir was already compatible with the `prometheusremotewriteexporter`. For more information, see Configure OTel Collector.
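As an illustration, a minimal Collector configuration along these lines might look as follows. The Mimir hostname, port, OTLP ingest path, and tenant ID are placeholders, not values from this release note; check them against your Mimir deployment and the Configure OTel Collector documentation:

```yaml
# Hypothetical OpenTelemetry Collector config sketch: forward metrics
# to Mimir over OTLP/HTTP. Endpoint, port, and tenant ID are placeholders.
receivers:
  otlp:
    protocols:
      http:

exporters:
  otlphttp:
    # Mimir's experimental OTLP ingestion endpoint (host/port/path are examples)
    endpoint: http://mimir:8080/otlp
    headers:
      X-Scope-OrgID: demo   # tenant ID, if multi-tenancy is enabled

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]
```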
Tenant federation for metadata queries: Users with tenant federation enabled could already issue instant queries, range queries, and exemplar queries to multiple tenants at once and receive a single aggregated result. With Grafana Mimir 2.3, we've added tenant federation support to the `/api/v1/metadata` endpoint as well.
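As a sketch of how this might be used, assuming tenant federation is enabled and the usual pipe-separated tenant syntax in the `X-Scope-OrgID` header (the hostname, port, and tenant IDs are placeholders):

```shell
# Fetch metric metadata aggregated across two tenants, "team-a" and "team-b".
curl -s \
  -H 'X-Scope-OrgID: team-a|team-b' \
  'http://mimir:8080/prometheus/api/v1/metadata'
```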
Simpler object storage configuration: Users can now configure block, alertmanager, and ruler storage all at once with the `common` YAML config option key (or `-common.storage.*` CLI flags). By centralizing object storage configuration in one place, this enhancement makes configuration faster and less error-prone. Users can still configure storage individually for each of these components if they prefer. For more information, see Common configurations.
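As a sketch of what this might look like, the bucket name, endpoint, and credentials below are placeholders, and the exact key names should be verified against the configuration reference:

```yaml
# Hypothetical sketch: share one S3 backend across blocks, ruler, and
# alertmanager storage via the `common` key. All values are placeholders.
common:
  storage:
    backend: s3
    s3:
      endpoint: s3.us-east-1.amazonaws.com
      access_key_id: example-key
      secret_access_key: example-secret

# Component-specific settings (such as bucket names) can still be
# set individually where needed.
blocks_storage:
  s3:
    bucket_name: mimir-blocks
```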
.deb and .rpm packages for Mimir: Starting with version 2.3, we’re publishing .deb and .rpm files for Grafana Mimir, which will make installing and running it on Debian or RedHat-based linux systems much easier. Thank you to community contributor wilfriedroset for your work to implement this!
Import historic data: Users can now backfill time series data from their existing Prometheus or Cortex installation into Mimir using `mimirtool`, making it possible to migrate to Grafana Mimir without losing your existing metrics data. This support is still considered experimental and does not yet work for data stored in Thanos. To learn more about this feature, see `mimirtool backfill` and Configure TSDB block upload.
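Assuming TSDB blocks have already been exported from an existing Prometheus installation, an invocation might look roughly like this. The address, tenant ID, and block path are placeholders, and the exact flags should be checked against the `mimirtool backfill` reference:

```shell
# Upload locally stored TSDB blocks to Mimir for tenant "demo".
# Flags and paths shown here are illustrative placeholders.
mimirtool backfill \
  --address=http://mimir:8080 \
  --id=demo \
  ./prometheus-blocks/<block-dir>
```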
Increased instant query performance: Grafana Mimir now supports splitting instant queries by time. This allows it to better parallelize execution of instant queries and therefore return results faster. At present, splitting is only supported for a subset of instant queries, which means not all instant queries will see a speedup. This feature is currently experimental and is disabled by default. It can be enabled with the `split_instant_queries_by_interval` YAML config option in the `limits` section (or the corresponding CLI flag).
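For example, a sketch of enabling the feature in the limits configuration; the interval value here is illustrative, not a recommendation:

```yaml
# Hypothetical sketch: enable experimental instant-query splitting.
# The interval value is an example, not a recommended setting.
limits:
  split_instant_queries_by_interval: 24h
```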
Helm chart improvements
The Mimir Helm chart is the best way to install Mimir on Kubernetes. As part of the Mimir 2.3 release, we’re also releasing version 3.1 of the Mimir Helm chart.
Notable enhancements follow. For the full list of changes, see the Helm chart changelog.
- We’ve upgraded the MinIO subchart dependency from a deprecated chart to the supported one. This creates a breaking change in how the administrator password is set. However, as the built-in MinIO is not a recommended object store for production use cases, this change did not warrant a new major version of the Mimir Helm chart.
- Query sharding is now enabled by default, which should improve performance on high-cardinality metrics queries.
- To compensate for the increased number of queries generated by query sharding, the query scheduler component is now enabled by default.
- The backfill API endpoints for importing historic time series data are now exposed on the Nginx gateway.
- Nginx now sets the value of the `X-Scope-OrgID` header to the value of Mimir's `no_auth_tenant` parameter by default. The previous release set this value to `anonymous` by default, which complicated the process of migrating to Mimir.
- Memberlist now uses DNS service-discovery by default, which decreases startup time for large Mimir clusters.
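Because of the MinIO subchart swap described above, the administrator credentials for the built-in MinIO are now set differently. A hedged sketch of a Helm values override follows; the key names assume the supported upstream MinIO chart and should be verified against its documentation, and the values are placeholders:

```yaml
# values.yaml sketch for the Mimir Helm chart's built-in MinIO
# (not recommended for production). Key names assume the supported
# upstream MinIO chart; credentials here are placeholders.
minio:
  rootUser: minio-admin
  rootPassword: change-me-please
```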
Important upgrade considerations
In Grafana Mimir 2.3 we have removed the following previously deprecated configuration options:
- The `extend_writes` parameter in the distributor YAML configuration and the `-distributor.extend-writes` CLI flag have been removed.
- The `active_series_custom_trackers` parameter has been removed from the YAML configuration. It had already been moved to the runtime configuration. See #1188 for details.
- The `blocks-storage.tsdb.isolation-enabled` parameter in the YAML configuration and the `-blocks-storage.tsdb.isolation-enabled` CLI flag have been removed.
With Grafana Mimir 2.3 we have also updated the default value for the CLI flag `-distributor.ha-tracker.max-clusters` to 100 to provide Denial-of-Service protection. Previously it was unlimited by default, which could allow a tenant with HA dedupe enabled to overload the HA tracker with `__cluster__` label values and cause the HA dedupe database to fail.
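If a tenant legitimately needs more clusters than the new default allows, the limit can be raised per tenant. In the sketch below, the YAML key name `ha_max_clusters` is an assumption about how the CLI flag maps into the `limits` block and should be verified against the configuration reference; the value is a placeholder:

```yaml
# Hypothetical sketch: raise the HA tracker cluster limit for a tenant
# with many HA Prometheus clusters. Key name and value are assumptions.
limits:
  ha_max_clusters: 500
```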
Also, as noted above, the administrator password for Helm chart deployments using the built-in MinIO is now set differently.
Bug fixes
- PR 2447: Fixed incorrect mapping of HTTP status codes in the query-frontend when the request queue is full. A retryable `429 "Too Many Outstanding Requests"` error from a querier was incorrectly returned to clients as an unretryable `500`.
- PR 2505: The Memberlist key-value (KV) store now tries to “fast-join” the cluster to avoid serving an empty KV store. This fix addresses the confusing “empty ring” error response and the error log message “ring doesn’t exist in KV store yet” emitted by services when there are other members present in the ring when a service starts. Those using other key-value store options (e.g., consul, etcd) are not impacted by this bug.
- PR 2289: The “List Prometheus rules” API endpoint of the Mimir Ruler component is no longer blocked while rules are being synced. This means users can now list rules while syncing larger rule sets.