<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Manage storage on Grafana Labs</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/</link><description>Recent content in Manage storage on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/loki/v3.7.x/operations/storage/index.xml" rel="self" type="application/rss+xml"/><item><title>Single Store TSDB (tsdb)</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/tsdb/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/operations/storage/tsdb/</guid><content><![CDATA[&lt;h1 id=&#34;single-store-tsdb-tsdb&#34;&gt;Single Store TSDB (tsdb)&lt;/h1&gt;
&lt;p&gt;Starting with Loki v2.8, TSDB is the recommended Loki index. It is heavily inspired by the Prometheus TSDB &lt;a href=&#34;https://github.com/prometheus/prometheus/tree/main/tsdb&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;sub-project&lt;/a&gt;. For a deeper explanation you can read Loki maintainer Owen&amp;rsquo;s &lt;a href=&#34;https://www.pikach.us/posts/tsdb/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;blog post&lt;/a&gt;. The short version is that this new index is more efficient, faster, and more scalable. It also resides in object storage, like the &lt;a href=&#34;../boltdb-shipper/&#34;&gt;boltdb-shipper&lt;/a&gt; index which preceded it.&lt;/p&gt;
&lt;h2 id=&#34;example-configuration&#34;&gt;Example Configuration&lt;/h2&gt;
&lt;p&gt;To get started using TSDB, add the following configurations to your &lt;code&gt;config.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;schema_config:
  configs:
    # Old boltdb-shipper schema. Included in example for reference but does not need changing.
    - from: &amp;#34;2023-01-03&amp;#34; # &amp;lt;---- A date in the past
      index:
        period: 24h
        prefix: index_
      object_store: gcs
      schema: v12
      store: boltdb-shipper
    # New TSDB schema below
    - from: &amp;#34;2023-01-05&amp;#34; # &amp;lt;---- A date in the future
      index:
        period: 24h
        prefix: index_
      object_store: gcs
      schema: v13
      store: tsdb

storage_config:
  # Old boltdb-shipper configuration. Included in example for reference but does not need changing.
  boltdb_shipper:
    active_index_directory: /data/index
    build_per_tenant_index: true
    cache_location: /data/boltdb-cache
    index_gateway_client:
      # only applicable if using microservices where index-gateways are independently deployed.
      # This example is using kubernetes-style naming.
      server_address: dns:///index-gateway.&amp;lt;namespace&amp;gt;.svc.cluster.local:9095
  # New tsdb-shipper configuration
  tsdb_shipper:
    active_index_directory: /data/tsdb-index
    cache_location: /data/tsdb-cache
    index_gateway_client:
      # only applicable if using microservices where index-gateways are independently deployed.
      # This example is using kubernetes-style naming.
      server_address: dns:///index-gateway.&amp;lt;namespace&amp;gt;.svc.cluster.local:9095

query_scheduler:
  # The TSDB index dispatches many more, but individually smaller, requests.
  # We increase the pending request queue sizes to compensate.
  max_outstanding_requests_per_tenant: 32768

querier:
  # Each `querier` component process runs a number of parallel workers to process queries simultaneously.
  # You may want to adjust this up or down depending on your resource usage
  # (more available CPU and memory can tolerate higher values and vice versa),
  # but we find the most success running at around `16` with TSDB.
  max_concurrent: 16&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h2 id=&#34;operations&#34;&gt;Operations&lt;/h2&gt;
&lt;h3 id=&#34;limits&#34;&gt;Limits&lt;/h3&gt;
&lt;p&gt;We&amp;rsquo;ve added a per-tenant limit called &lt;code&gt;tsdb_max_query_parallelism&lt;/code&gt; in the &lt;code&gt;limits_config&lt;/code&gt;. It functions the same as the prior &lt;code&gt;max_query_parallelism&lt;/code&gt; configuration but applies to TSDB queries instead. Since the TSDB index creates many more, smaller queries than the index types before it, we&amp;rsquo;ve added a separate configuration so the two can coexist, which is helpful when transitioning between index types. The default parallelism is &lt;code&gt;128&lt;/code&gt;, which should work well for most cases, but you can raise it globally in the &lt;code&gt;limits_config&lt;/code&gt; or per tenant in the &lt;code&gt;overrides&lt;/code&gt; file as needed.&lt;/p&gt;
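&lt;p&gt;As an illustrative sketch (the value &lt;code&gt;512&lt;/code&gt; is an arbitrary example, not a recommendation), raising the limit globally could look like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;limits_config:
  # Defaults to 128; raise it globally here, or per tenant in the overrides file.
  tsdb_max_query_parallelism: 512&lt;/code&gt;&lt;/pre&gt;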
&lt;h3 id=&#34;dynamic-query-sharding&#34;&gt;Dynamic Query Sharding&lt;/h3&gt;
&lt;p&gt;Previously, queries were statically sharded based on the index row shards configured 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#period_config&#34;&gt;here&lt;/a&gt;.
TSDB performs dynamic query sharding based on how much data a query is going to process.
We additionally store the size (KB) and number of lines for each chunk in the TSDB index, which the &lt;a href=&#34;../../../get-started/components/#query-frontend&#34;&gt;Query Frontend&lt;/a&gt; uses for planning the query.
Based on our experience operating many Loki clusters, we have configured TSDB to aim for processing 300-600 MB of data per query shard.
This means that with TSDB we run more, smaller queries.&lt;/p&gt;
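&lt;p&gt;If your Loki version exposes the shard-size target as a per-tenant limit, it can be tuned; a sketch (verify the option name, &lt;code&gt;tsdb_max_bytes_per_shard&lt;/code&gt;, against your version&amp;rsquo;s limits reference):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;limits_config:
  # Target amount of data each query shard processes; 600MB is the documented default.
  tsdb_max_bytes_per_shard: 600MB&lt;/code&gt;&lt;/pre&gt;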
&lt;h3 id=&#34;index-caching-not-required&#34;&gt;Index Caching not required&lt;/h3&gt;
&lt;p&gt;TSDB is a compact and optimized format. Loki does not currently use an index cache for TSDB. If you are already using Loki with other index types, it is recommended to keep the index cache until all of your existing data falls out of 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/retention/&#34;&gt;retention&lt;/a&gt; or out of your configured &lt;code&gt;max_query_lookback&lt;/code&gt; under 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#limits_config&#34;&gt;limits_config&lt;/a&gt;. After that, we suggest running without an index cache, since TSDB does not use one.&lt;/p&gt;
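&lt;p&gt;For example, capping lookback so queries never reach data older than 30 days (an illustrative value) would look like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;limits_config:
  # Queries will not look further back than this; index entries (and any
  # cache kept for them) are no longer needed once data ages past it.
  max_query_lookback: 720h&lt;/code&gt;&lt;/pre&gt;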
]]></content><description>&lt;h1 id="single-store-tsdb-tsdb">Single Store TSDB (tsdb)&lt;/h1>
&lt;p>Starting with Loki v2.8, TSDB is the recommended Loki index. It is heavily inspired by the Prometheus&amp;rsquo;s TSDB &lt;a href="https://github.com/prometheus/prometheus/tree/main/tsdb" target="_blank" rel="noopener noreferrer">sub-project&lt;/a>. For a deeper explanation you can read Loki maintainer Owen&amp;rsquo;s &lt;a href="https://www.pikach.us/posts/tsdb/" target="_blank" rel="noopener noreferrer">blog post&lt;/a>. The short version is that this new index is more efficient, faster, and more scalable. It also resides in object storage like the &lt;a href="../boltdb-shipper/">boltdb-shipper&lt;/a> index which preceded it.&lt;/p></description></item><item><title>Single Store BoltDB (boltdb-shipper)</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/boltdb-shipper/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/operations/storage/boltdb-shipper/</guid><content><![CDATA[&lt;h1 id=&#34;single-store-boltdb-boltdb-shipper&#34;&gt;Single Store BoltDB (boltdb-shipper)&lt;/h1&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Single store BoltDB Shipper is a legacy storage option recommended for Loki 2.0 through 2.7.x and is not recommended for new deployments. The 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/tsdb/&#34;&gt;TSDB&lt;/a&gt; index is recommended for Loki 2.8 and newer.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;BoltDB Shipper lets you run Grafana Loki without any dependency on NoSQL stores for storing the index.
It stores the index locally in BoltDB files and ships those files to a shared object store, that is, the same object store used for storing chunks.
It also keeps syncing BoltDB files from the shared object store to a configured local directory, to pick up index entries created by other services in the same Loki cluster.
This lets Loki run with one less dependency and also saves storage costs, since object stores are likely to be much cheaper than a hosted NoSQL store or a self-hosted Cassandra instance.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;BoltDB Shipper works best with 24h periodic index files. Setting the index period to 24h is a requirement for any active or upcoming use of boltdb-shipper.
If boltdb-shipper has already created index files with a 7-day period, and you want to retain the previous data, add a new schema config entry using boltdb-shipper with a future date and the index file period set to 24h.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;
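&lt;p&gt;A sketch of such a migration (dates and store details are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;schema_config:
  configs:
    # Existing entry with 7-day index files; left unchanged so old data stays readable.
    - from: 2020-01-01
      store: boltdb-shipper
      object_store: gcs
      schema: v11
      index:
        prefix: loki_index_
        period: 168h
    # New entry with a future date, switching the index period to 24h.
    - from: 2020-06-01
      store: boltdb-shipper
      object_store: gcs
      schema: v11
      index:
        prefix: loki_index_
        period: 24h&lt;/code&gt;&lt;/pre&gt;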

&lt;h2 id=&#34;example-configuration&#34;&gt;Example Configuration&lt;/h2&gt;
&lt;p&gt;Example configuration with GCS:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;schema_config:
  configs:
    - from: 2018-04-15
      store: boltdb-shipper
      object_store: gcs
      schema: v11
      index:
        prefix: loki_index_
        period: 24h

storage_config:
  gcs:
    bucket_name: GCS_BUCKET_NAME

  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/boltdb-cache&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This configuration runs Loki with BoltDB Shipper storing BoltDB files locally at &lt;code&gt;/loki/index&lt;/code&gt; and chunks in the configured &lt;code&gt;GCS_BUCKET_NAME&lt;/code&gt;.
It periodically ships BoltDB files to the same configured bucket,
and it downloads BoltDB files uploaded by other ingesters from the shared bucket to the local &lt;code&gt;/loki/boltdb-cache&lt;/code&gt; folder.&lt;/p&gt;
&lt;h2 id=&#34;operational-details&#34;&gt;Operational Details&lt;/h2&gt;
&lt;p&gt;Loki can be configured to run as a single vertically scaled instance, as a cluster of horizontally scaled single-binary instances (each running all Loki services), or in microservices mode running just one of the services in each instance.
When it comes to reads and writes, Ingesters write the index and chunks to the store, and Queriers read the index and chunks from the store to serve requests.&lt;/p&gt;
&lt;p&gt;Before we get into more detail, it is important to understand how Loki manages the index in stores. Loki shards the index by a configured period, which defaults to seven days; with table-based stores like Bigtable/Cassandra/DynamoDB, there is a separate table per week containing the index for that week.
In the case of BoltDB Shipper, a table is a collection of many smaller BoltDB files, each storing just 15 minutes worth of index. Tables created per day are identified by a configured &lt;code&gt;prefix_&lt;/code&gt; &#43; &lt;code&gt;&amp;lt;period-number-since-epoch&amp;gt;&lt;/code&gt;.
For boltdb-shipper, &lt;code&gt;&amp;lt;period-number-since-epoch&amp;gt;&lt;/code&gt; is the day number since the Unix epoch.
For example, if the prefix is set to &lt;code&gt;loki_index_&lt;/code&gt; and a write request comes in on 20 April 2020, it is stored in a table named &lt;code&gt;loki_index_18372&lt;/code&gt;, because that date falls in day number &lt;code&gt;18372&lt;/code&gt; since the epoch (the Unix timestamp divided by 86400 seconds per day, rounded down).
Since index sharding with BoltDB creates multiple files, BoltDB Shipper creates a folder per day, adds the files for that day to that folder, and names the files after the ingesters that created them.&lt;/p&gt;
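&lt;p&gt;As a quick illustration of the day-number arithmetic (using GNU &lt;code&gt;date&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ date -u -d 2020-04-20 +%s   # seconds since the Unix epoch at midnight UTC
1587340800
$ echo $(( 1587340800 / 86400 ))   # integer division by seconds per day
18372                              # =&amp;gt; table name loki_index_18372&lt;/code&gt;&lt;/pre&gt;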
&lt;p&gt;To reduce file size, which helps with faster transfer speeds and lower storage costs, the files are compressed with gzip before being stored.&lt;/p&gt;
&lt;p&gt;To show what BoltDB files in a shared object store look like, consider two ingesters named &lt;code&gt;ingester-0&lt;/code&gt; and &lt;code&gt;ingester-1&lt;/code&gt; running in a Loki cluster, both of which have shipped files for days &lt;code&gt;18371&lt;/code&gt; and &lt;code&gt;18372&lt;/code&gt; with the prefix &lt;code&gt;loki_index_&lt;/code&gt;. The files would look like this:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;└── index
    ├── loki_index_18371
    │   ├── ingester-0-1587254400.gz
    │   └── ingester-1-1587255300.gz
    |   ...
    └── loki_index_18372
        ├── ingester-0-1587254400.gz
        └── ingester-1-1587254400.gz
        ...&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Loki also adds a timestamp to the file names to randomize them, which avoids overwriting files when ingesters with the same name run without persistent storage. Timestamps are not shown here for simplicity.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Let us look in more depth at how Ingesters and Queriers work when running with BoltDB Shipper.&lt;/p&gt;
&lt;h3 id=&#34;ingesters&#34;&gt;Ingesters&lt;/h3&gt;
&lt;p&gt;Ingesters write the index to BoltDB files in &lt;code&gt;active_index_directory&lt;/code&gt;,
and the BoltDB Shipper looks for new and updated files in that directory at 1-minute intervals to upload them to the shared object store.
When running Loki in microservices mode, there could be multiple ingesters serving write requests.
Each ingester generates BoltDB files locally.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;To avoid any loss of index when an ingester crashes, we recommend running ingesters as a StatefulSet (when using Kubernetes) with a persistent storage for storing index files.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;When chunks are flushed, they are available for reads in the object store instantly. The index is not available instantly, since the BoltDB Shipper uploads it every 15 minutes.
Ingesters expose an RPC that lets queriers query the ingester&amp;rsquo;s local index for recently flushed chunks whose index might not yet be available to queriers.
For all queries that require chunks to be read from the store, queriers also query ingesters over RPC for the IDs of recently flushed chunks.
This avoids missing any logs from query responses.&lt;/p&gt;
&lt;h3 id=&#34;queriers&#34;&gt;Queriers&lt;/h3&gt;
&lt;p&gt;To avoid running Queriers as a StatefulSet with persistent storage, we recommend running an Index Gateway. An Index Gateway will download and synchronize the index, and it will serve it over gRPC to Queriers and Rulers.&lt;/p&gt;
&lt;p&gt;Queriers lazily load BoltDB files from the shared object store to the configured &lt;code&gt;cache_location&lt;/code&gt;.
When a querier receives a read request, the query range from the request is resolved to period numbers, and all the files for those period numbers are downloaded to &lt;code&gt;cache_location&lt;/code&gt;, if not already present.
Once the files for a period have been downloaded, the querier keeps looking for updates in the shared object store and downloads them every 5 minutes by default.
The frequency for checking updates can be configured with the &lt;code&gt;resync_interval&lt;/code&gt; config.&lt;/p&gt;
&lt;p&gt;To avoid keeping downloaded index files forever, they have a TTL that defaults to 24 hours: if the index files for a period are not used for 24 hours, they are removed from the cache location.
The TTL can be configured with the &lt;code&gt;cache_ttl&lt;/code&gt; config.&lt;/p&gt;
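&lt;p&gt;Both knobs live under the shipper configuration; a sketch with the default values spelled out:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;storage_config:
  boltdb_shipper:
    cache_location: /loki/boltdb-cache
    # How often to check the shared object store for updated index files.
    resync_interval: 5m
    # Downloaded index files unused for this long are removed from cache_location.
    cache_ttl: 24h&lt;/code&gt;&lt;/pre&gt;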
&lt;p&gt;Within Kubernetes, if you are not using an Index Gateway, we recommend running Queriers as a StatefulSet with persistent storage for downloading and querying index files. This provides better read performance and avoids using the node disk.&lt;/p&gt;
&lt;h3 id=&#34;index-gateway&#34;&gt;Index Gateway&lt;/h3&gt;
&lt;p&gt;An Index Gateway downloads and synchronizes the BoltDB index from the Object Storage in order to serve index queries to the Queriers and Rulers over gRPC.
This avoids running Queriers and Rulers with a disk for persistence. Disks can become costly in a big cluster.&lt;/p&gt;
&lt;p&gt;To run an Index Gateway, configure 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#storage_config&#34;&gt;StorageConfig&lt;/a&gt; and set the &lt;code&gt;-target&lt;/code&gt; CLI flag to &lt;code&gt;index-gateway&lt;/code&gt;.
To connect Queriers and Rulers to the Index Gateway, set the address (with gRPC port) of the Index Gateway with the &lt;code&gt;-boltdb.shipper.index-gateway-client.server-address&lt;/code&gt; CLI flag or its equivalent YAML value under 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#storage_config&#34;&gt;StorageConfig&lt;/a&gt;.&lt;/p&gt;
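&lt;p&gt;A sketch of the wiring (the service address is a placeholder following the kubernetes-style naming used earlier):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;# Index Gateway process: loki -config.file=loki.yaml -target=index-gateway
# Queriers and Rulers point their index gateway client at it:
storage_config:
  boltdb_shipper:
    index_gateway_client:
      server_address: dns:///index-gateway.&amp;lt;namespace&amp;gt;.svc.cluster.local:9095&lt;/code&gt;&lt;/pre&gt;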
&lt;p&gt;When using the Index Gateway within Kubernetes, we recommend using a StatefulSet with persistent storage for downloading and querying index files. This can obtain better read performance, avoids &lt;a href=&#34;https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;noisy neighbor problems&lt;/a&gt; by not using the node disk, and avoids the time consuming index downloading step on startup after rescheduling to a new node.&lt;/p&gt;
&lt;h3 id=&#34;write-deduplication-disabled&#34;&gt;Write Deduplication disabled&lt;/h3&gt;
&lt;p&gt;Loki deduplicates writes of chunks and the index using the Chunks and WriteDedupe caches respectively, configured with 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#chunk_store_config&#34;&gt;ChunkStoreConfig&lt;/a&gt;.
The problem with write deduplication when using &lt;code&gt;boltdb-shipper&lt;/code&gt; is that ingesters only upload boltdb files periodically to make them available to the other services, which means there is a brief window during which some services have not yet received the updated index.
If the ingester that first wrote the chunks and index goes down during that window, and the other ingesters in the replication set skipped writing those chunks and index due to deduplication, those logs would be missing from query responses, since only the ingester that held the index went down.
This can happen even during rollouts, which are quite common.&lt;/p&gt;
&lt;p&gt;To avoid this, Loki disables index deduplication when the replication factor is greater than 1 and &lt;code&gt;boltdb-shipper&lt;/code&gt; is an active or upcoming index type.
When using &lt;code&gt;boltdb-shipper&lt;/code&gt;, avoid configuring the WriteDedupe cache, since it is used purely for index deduplication and would therefore go unused.&lt;/p&gt;
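&lt;p&gt;In other words, with &lt;code&gt;boltdb-shipper&lt;/code&gt; you would configure only the chunk cache; a sketch (the embedded cache is one possible backend):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;chunk_store_config:
  chunk_cache_config:
    embedded_cache:
      enabled: true
  # write_dedupe_cache_config is deliberately omitted: it is only used for
  # index deduplication, which is disabled with boltdb-shipper.&lt;/code&gt;&lt;/pre&gt;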
&lt;h3 id=&#34;compactor&#34;&gt;Compactor&lt;/h3&gt;
&lt;p&gt;The Compactor is a BoltDB Shipper-specific service that reduces the index size by deduplicating the index and merging all the files into a single file per table.
We recommend running a Compactor, since a single Ingester creates 96 files per day, which contain many duplicate index entries, and querying multiple files per table adds to the overall query latency.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Only one compactor instance should run at a time; running more than one could create problems and may lead to data loss.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h4 id=&#34;delete-permissions&#34;&gt;Delete Permissions&lt;/h4&gt;
&lt;p&gt;The compactor is an optional but suggested component that combines and deduplicates the boltdb-shipper index files. When compacting index files, the compactor writes a new file and deletes unoptimized files. Ensure that the compactor has appropriate permissions for deleting files, for example, the &lt;code&gt;s3:DeleteObject&lt;/code&gt; permission for AWS S3.&lt;/p&gt;
&lt;p&gt;Example compactor configuration with GCS:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;compactor:
  working_directory: /loki/compactor

storage_config:
  gcs:
    bucket_name: GCS_BUCKET_NAME&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
]]></content><description>&lt;h1 id="single-store-boltdb-boltdb-shipper">Single Store BoltDB (boltdb-shipper)&lt;/h1>
&lt;div class="admonition admonition-note">&lt;blockquote>&lt;p class="title text-uppercase">Note&lt;/p>&lt;p>Single store BoltDB Shipper is a legacy storage option recommended for Loki 2.0 through 2.7.x and is not recommended for new deployments. The
&lt;a href="/docs/loki/v3.7.x/operations/storage/tsdb/">TSDB&lt;/a> is the recommended index for Loki 2.8 and newer.&lt;/p></description></item><item><title>Filesystem object store</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/filesystem/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/operations/storage/filesystem/</guid><content><![CDATA[&lt;h1 id=&#34;filesystem-object-store&#34;&gt;Filesystem object store&lt;/h1&gt;
&lt;p&gt;The filesystem object store is the easiest way to get started with Grafana Loki, but there are pros and cons to this approach.&lt;/p&gt;
&lt;p&gt;Very simply, it stores all the objects (chunks) in the specified directory:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;storage_config:
  filesystem:
    directory: /tmp/loki/&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;A folder is created for every tenant, and all the chunks for that tenant are stored in that directory.&lt;/p&gt;
&lt;p&gt;If Loki is run in single-tenant mode, all the chunks are put in a folder named &lt;code&gt;fake&lt;/code&gt;, which is the synthesized tenant name used for single-tenant mode.&lt;/p&gt;
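&lt;p&gt;For example, a single-tenant deployment&amp;rsquo;s directory might look like this (the chunk object names are illustrative placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-none&#34;&gt;/tmp/loki
└── fake            # synthesized tenant name in single-tenant mode
    ├── &amp;lt;chunk object&amp;gt;
    ├── &amp;lt;chunk object&amp;gt;
    └── ...&lt;/code&gt;&lt;/pre&gt;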
&lt;p&gt;See 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/multi-tenancy/&#34;&gt;multi-tenancy&lt;/a&gt; for more information.&lt;/p&gt;
&lt;h2 id=&#34;pros&#34;&gt;Pros&lt;/h2&gt;
&lt;p&gt;Very simple: no additional software is required to use Loki when paired with the BoltDB index store.&lt;/p&gt;
&lt;p&gt;Great for low volume applications, proof of concepts, and just playing around with Loki.&lt;/p&gt;
&lt;h2 id=&#34;cons&#34;&gt;Cons&lt;/h2&gt;
&lt;p&gt;The filesystem is not supported by Grafana Labs for production environments (for those customers who have purchased a support contract).&lt;/p&gt;
&lt;h3 id=&#34;scaling&#34;&gt;Scaling&lt;/h3&gt;
&lt;p&gt;At some point there is a limit to how many chunks can be stored in a single directory. For example, see &lt;a href=&#34;https://github.com/grafana/loki/issues/1502&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;issue #1502&lt;/a&gt;, which explains how a Loki user ran into a strange error with about &lt;strong&gt;5.5 million chunk files&lt;/strong&gt; in their file store (and also a workaround for the problem).&lt;/p&gt;
&lt;p&gt;However, if you keep the number of streams low (remember that Loki writes a chunk per stream), settings such as &lt;code&gt;chunk_target_size&lt;/code&gt; (around 1 MB), &lt;code&gt;max_chunk_age&lt;/code&gt; (increased beyond 1h), and &lt;code&gt;chunk_idle_period&lt;/code&gt; (increased to match &lt;code&gt;max_chunk_age&lt;/code&gt;) can be tweaked to reduce the number of chunks flushed, although this trades for higher memory consumption.&lt;/p&gt;
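&lt;p&gt;These knobs live under the ingester configuration; a sketch with illustrative values (tune for your workload):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;ingester:
  chunk_target_size: 1048576   # aim for ~1MB chunks
  max_chunk_age: 2h            # keep chunks open longer before flushing
  chunk_idle_period: 2h        # match max_chunk_age so idle streams flush together&lt;/code&gt;&lt;/pre&gt;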
&lt;p&gt;It&amp;rsquo;s still very possible to store terabytes of log data with the filesystem store, but be aware that there are limits to how many files a filesystem will happily store in a single directory.&lt;/p&gt;
&lt;h3 id=&#34;durability&#34;&gt;Durability&lt;/h3&gt;
&lt;p&gt;The durability of the objects is at the mercy of the filesystem itself, whereas object stores like S3/GCS do a lot behind the scenes to offer extremely high durability for your data.&lt;/p&gt;
&lt;h3 id=&#34;high-availability&#34;&gt;High Availability&lt;/h3&gt;
&lt;p&gt;Running Loki clustered is not possible with the filesystem store unless the filesystem is shared in some fashion (NFS, for example). However, using shared filesystems is likely to be a bad experience with Loki, just as it is for almost every other application.&lt;/p&gt;
]]></content><description>&lt;h1 id="filesystem-object-store">Filesystem object store&lt;/h1>
&lt;p>The filesystem object store is the easiest way to get started with Grafana Loki, but there are pros and cons to this approach.&lt;/p>
&lt;p>Very simply it stores all the objects (chunks) in the specified directory:&lt;/p></description></item><item><title>Storage schema</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/schema/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/operations/storage/schema/</guid><content><![CDATA[&lt;h1 id=&#34;storage-schema&#34;&gt;Storage schema&lt;/h1&gt;
&lt;p&gt;To support iterations over the storage layer contents, Loki has a configurable storage schema. The schema is defined to apply over periods of time. A &lt;code&gt;from&lt;/code&gt; value marks the starting point of that schema. The schema is active until another entry defines a new schema with a new &lt;code&gt;from&lt;/code&gt; date.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;./schema.png&#34;
  alt=&#34;schema_example&#34;/&gt;&lt;/p&gt;
&lt;p&gt;Loki uses the defined schemas to determine which format to use when storing and querying the data.&lt;/p&gt;
&lt;p&gt;Use of a schema allows Loki to iterate over the storage layer without requiring migration of existing data.&lt;/p&gt;
&lt;h2 id=&#34;new-loki-installs&#34;&gt;New Loki installs&lt;/h2&gt;
&lt;p&gt;For a new Loki install with no previous data, here is an example schema configuration with recommended values:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;schema_config:
  configs:
    - from: 2024-04-01
      object_store: s3
      store: tsdb
      schema: v13
      index:
        prefix: index_
        period: 24h&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Property&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;from&lt;/td&gt;
&lt;td&gt;For a new install, this must be a date in the past; use a recent date. The format is YYYY-MM-DD.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;object_store&lt;/td&gt;
              &lt;td&gt;s3, azure, gcs, alibabacloud, bos, cos, swift, filesystem, or a named_store (see 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#storage_config&#34;&gt;StorageConfig&lt;/a&gt;).&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;store&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;tsdb&lt;/code&gt; is the current and only recommended value for store.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;schema&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;v13&lt;/code&gt; is the most recent schema and recommended value.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
                &lt;td&gt;prefix&lt;/td&gt;
                &lt;td&gt;Any value without spaces is acceptable.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
                &lt;td&gt;period&lt;/td&gt;
                &lt;td&gt;Must be &lt;code&gt;24h&lt;/code&gt;.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;

&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;For a new install, the &lt;code&gt;from&lt;/code&gt; date must be in the past so the schema is immediately active when Loki starts. If you set it to a future date, Loki will have no valid schema for the current time and will not be able to store incoming data.&lt;/p&gt;
&lt;p&gt;This is different from adding a new schema entry to an existing install, where the &lt;code&gt;from&lt;/code&gt; date must be in the future. See &lt;a href=&#34;#changing-the-schema&#34;&gt;Changing the schema&lt;/a&gt; below.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;changing-the-schema&#34;&gt;Changing the schema&lt;/h2&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;The guidance in this section applies when you are adding a new schema entry to an existing Loki install that already has data. Setting the &lt;code&gt;from&lt;/code&gt; date to a future date gives Loki time to transition to the new schema and ensures that existing data continues to be read using the old schema. If the &lt;code&gt;from&lt;/code&gt; date is not in the future, data written just before the cutover may become unreadable because Loki would try to query it using the wrong schema.&lt;/p&gt;
&lt;p&gt;For a brand new install with no previous data, the &lt;code&gt;from&lt;/code&gt; date should be in the past instead. See &lt;a href=&#34;#example-configuration&#34;&gt;Example Configuration&lt;/a&gt; above.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Consider the following items when changing the schema. If schema changes are not done properly, you can create a scenario that prevents data from being read.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Always set the &lt;code&gt;from&lt;/code&gt; date in the new schema to a date in the future.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;from&lt;/code&gt; date is interpreted by Loki to start at 00:00:00 UTC. Therefore, Loki must have a date in the future to be able to transition to the new schema when that date and time arrives.&lt;/p&gt;
&lt;p&gt;Be aware of your relation to UTC when using the current date. Make sure that UTC 00:00:00 has not already passed for your current date.&lt;/p&gt;
&lt;p&gt;As an example, assume the current date is 2022-04-10 and you want to update to the v13 schema, so you restart Loki with 2022-04-11 as the &lt;code&gt;from&lt;/code&gt; date for the new schema. If you forget to take into account that your timezone is UTC-5:00 and it is currently 20:00 in your local timezone, the actual time is 2022-04-11T01:00:00 UTC. When Loki starts, it will see the new schema and begin to write and store objects following that new schema. If you then try to query data that was written between 00:00:00 and 01:00:00 UTC, Loki will use the new schema and the data will be unreadable, because it was created with the previous schema.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You cannot undo or roll back a schema change.&lt;/p&gt;
&lt;p&gt;Any data written while a schema is active can only be read by that schema. If you wish to return to the previous schema, you can add another new entry with the previous schema settings.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;schema-configuration-example&#34;&gt;Schema configuration example&lt;/h2&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;schema_config:
  configs:
    - from: &amp;#34;2020-07-31&amp;#34;
      index:
        period: 24h
        prefix: loki_ops_index_
      object_store: gcs
      schema: v12
      store: tsdb
    - from: &amp;#34;2022-01-20&amp;#34;
      index:
        period: 24h
        prefix: loki_ops_index_
      object_store: gcs
      schema: v13
      store: tsdb&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
]]></content><description>&lt;h1 id="storage-schema">Storage schema&lt;/h1>
&lt;p>To support iterations over the storage layer contents, Loki has a configurable storage schema. The schema is defined to apply over periods of time. A &lt;code>from&lt;/code> value marks the starting point of that schema. The schema is active until another entry defines a new schema with a new &lt;code>from&lt;/code> date.&lt;/p></description></item><item><title>Write Ahead Log</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/wal/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/operations/storage/wal/</guid><content><![CDATA[&lt;h1 id=&#34;write-ahead-log&#34;&gt;Write Ahead Log&lt;/h1&gt;
&lt;p&gt;Ingesters temporarily store data in memory. In the event of a crash, there could be data loss. The Write Ahead Log (WAL) helps fill this gap in reliability.&lt;/p&gt;
&lt;p&gt;The WAL in Grafana Loki records incoming data and stores it on the local file system in order to guarantee persistence of acknowledged data in the event of a process crash. Upon restart, Loki will &amp;ldquo;replay&amp;rdquo; all of the data in the log before registering itself as ready for subsequent writes. This allows Loki to maintain the performance and cost benefits of buffering data in memory &lt;em&gt;and&lt;/em&gt; the durability benefit of not losing data once a write has been acknowledged.&lt;/p&gt;
&lt;p&gt;This section will use Kubernetes as a reference deployment paradigm in the examples.&lt;/p&gt;
&lt;h2 id=&#34;disclaimer-and-wal-nuances&#34;&gt;Disclaimer and WAL nuances&lt;/h2&gt;
&lt;p&gt;The Write Ahead Log in Loki takes a few particular tradeoffs compared to other WALs you may be familiar with. The WAL aims to add additional durability guarantees, but &lt;em&gt;not at the expense of availability&lt;/em&gt;. Particularly, there are two scenarios where the WAL sacrifices these guarantees.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Corruption/Deletion of the WAL prior to replaying it&lt;/p&gt;
&lt;p&gt;In the event the WAL is corrupted or partially deleted, Loki will not be able to recover all of its data. In this case, Loki will attempt to recover any data it can, and the corruption will not prevent Loki from starting.&lt;/p&gt;
&lt;p&gt;You can use the Prometheus metric &lt;code&gt;loki_ingester_wal_corruptions_total&lt;/code&gt; to track and alert when this happens.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;No space left on disk&lt;/p&gt;
&lt;p&gt;In the event the underlying WAL disk is full, Loki will not fail incoming writes, but neither will it log them to the WAL. In this case, the persistence guarantees across process restarts will not hold.&lt;/p&gt;
&lt;p&gt;You can use the Prometheus metric &lt;code&gt;loki_ingester_wal_disk_full_failures_total&lt;/code&gt; to track and alert when this happens.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;backpressure&#34;&gt;Backpressure&lt;/h3&gt;
&lt;p&gt;The WAL also includes a backpressure mechanism to allow a large WAL to be replayed within a smaller memory bound. This is helpful after bad scenarios (such as an outage) when a WAL has grown past the point at which it can be replayed in memory. In this case, the ingester will track the amount of data being replayed and, once it has passed the &lt;code&gt;ingester.wal-replay-memory-ceiling&lt;/code&gt; threshold, will flush to storage. When this happens, Loki&amp;rsquo;s attempt to deduplicate chunks via content addressable storage will likely suffer. We deemed this efficiency loss an acceptable tradeoff considering how it simplifies operation, and it should not occur during regular operations (rollouts, rescheduling) where the WAL can be replayed without triggering this threshold.&lt;/p&gt;
&lt;h3 id=&#34;metrics&#34;&gt;Metrics&lt;/h3&gt;
&lt;p&gt;The following metrics are available for monitoring the WAL:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;loki_ingester_wal_corruptions_total&lt;/code&gt;: Total number of WAL corruptions encountered&lt;/li&gt;
&lt;li&gt;&lt;code&gt;loki_ingester_wal_disk_full_failures_total&lt;/code&gt;: Total number of disk full failures&lt;/li&gt;
&lt;li&gt;&lt;code&gt;loki_ingester_wal_records_logged&lt;/code&gt;: Counter for WAL records logged&lt;/li&gt;
&lt;li&gt;&lt;code&gt;loki_ingester_wal_logged_bytes_total&lt;/code&gt;: Total bytes written to WAL&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;changes-to-deployment&#34;&gt;Changes to deployment&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Since ingesters need to keep the same persistent volume across restarts/rollouts, all ingesters should be run as a &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;StatefulSet&lt;/a&gt; with fixed volumes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The following flags need to be set:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;--ingester.wal-enabled&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; which enables writing to WAL during ingestion.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--ingester.wal-dir&lt;/code&gt; to the directory where the WAL data should be stored and/or recovered from. Note that this should be on the mounted volume.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--ingester.checkpoint-duration&lt;/code&gt; to the interval at which checkpoints should be created.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--ingester.wal-replay-memory-ceiling&lt;/code&gt; (default 4GB) may be set higher/lower depending on your resource settings. It handles memory pressure during WAL replays, allowing a WAL many times larger than available memory to be replayed. This is provided to minimize reconciliation time after very bad situations, i.e. an outage, and will likely not impact regular operations/rollouts &lt;em&gt;at all&lt;/em&gt;. We suggest setting this to a high percentage (~75%) of available memory.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
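&lt;p&gt;Equivalently, the flags above can be set under the &lt;code&gt;ingester&lt;/code&gt; section of the YAML configuration. The following is an illustrative sketch; the directory path and interval are example values, not recommendations:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;ingester:
  wal:
    # Enable writing to the WAL during ingestion (--ingester.wal-enabled).
    enabled: true
    # Directory for WAL data; must be on the mounted volume (--ingester.wal-dir).
    dir: /loki/wal
    # Interval at which checkpoints are created (--ingester.checkpoint-duration).
    checkpoint_duration: 5m
    # Memory bound used during WAL replay (--ingester.wal-replay-memory-ceiling).
    replay_memory_ceiling: 4GB&lt;/code&gt;&lt;/pre&gt;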
&lt;h2 id=&#34;changes-in-lifecycle-when-wal-is-enabled&#34;&gt;Changes in lifecycle when WAL is enabled&lt;/h2&gt;
&lt;p&gt;Flushing of data to the chunk store during rollouts or scale down is disabled. This is because during a rollout of a StatefulSet there are no ingesters that are simultaneously leaving and joining; rather, the same ingester is shut down and brought back again with an updated config. Hence flushing is skipped and the data is recovered from the WAL. If you need to ensure that data is always flushed to the chunk store when your pod shuts down, you can set the &lt;code&gt;--ingester.flush-on-shutdown&lt;/code&gt; flag to &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;disk-space-requirements&#34;&gt;Disk space requirements&lt;/h2&gt;
&lt;p&gt;Based on real-world testing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Numbers are from an ingester with 5000 series ingesting ~5 MB/s.&lt;/li&gt;
&lt;li&gt;The checkpoint period was 5 minutes.&lt;/li&gt;
&lt;li&gt;Disk utilization on a WAL-only disk was steady at ~10-15 GB.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You should not target 100% disk utilization.&lt;/p&gt;
&lt;h2 id=&#34;migrating-from-stateless-deployments&#34;&gt;Migrating from stateless deployments&lt;/h2&gt;
&lt;p&gt;The ingester &lt;em&gt;Deployment without WAL&lt;/em&gt; and &lt;em&gt;StatefulSet with WAL&lt;/em&gt; should be scaled down and up respectively in sync without transfer of data between them to ensure that any ingestion after migration is reliable immediately.&lt;/p&gt;
&lt;p&gt;Let&amp;rsquo;s take an example of 4 ingesters. The migration would look something like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Bring up one stateful ingester &lt;code&gt;ingester-0&lt;/code&gt; and wait until it&amp;rsquo;s ready (accepting read and write requests).&lt;/li&gt;
&lt;li&gt;Scale down the old ingester deployment to 3 and wait until the leaving ingester flushes all the data to chunk store.&lt;/li&gt;
&lt;li&gt;Once that ingester has disappeared from &lt;code&gt;kc get pods ...&lt;/code&gt;, add another stateful ingester and wait until it&amp;rsquo;s ready. Now you have &lt;code&gt;ingester-0&lt;/code&gt; and &lt;code&gt;ingester-1&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Repeat step 2 to remove another ingester from the old deployment.&lt;/li&gt;
&lt;li&gt;Repeat step 3 to add another stateful ingester. Now you have &lt;code&gt;ingester-0 ingester-1 ingester-2&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Repeat step 4 and 5, and now you will finally have &lt;code&gt;ingester-0 ingester-1 ingester-2 ingester-3&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
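&lt;p&gt;Assuming the old Deployment and the new StatefulSet are both named &lt;code&gt;ingester&lt;/code&gt; in a &lt;code&gt;loki&lt;/code&gt; namespace (hypothetical names for illustration), the first two steps might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# Step 1: bring up one stateful ingester and wait for it to become ready.
kubectl -n loki scale statefulset/ingester --replicas=1
kubectl -n loki rollout status statefulset/ingester

# Step 2: scale the old deployment down by one; the leaving ingester
# flushes its data to the chunk store before its pod disappears.
kubectl -n loki scale deployment/ingester --replicas=3
kubectl -n loki get pods -w&lt;/code&gt;&lt;/pre&gt;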
&lt;h2 id=&#34;how-to-scale-updown&#34;&gt;How to scale up/down&lt;/h2&gt;
&lt;h3 id=&#34;scale-up&#34;&gt;Scale up&lt;/h3&gt;
&lt;p&gt;Scaling up is the same as what you would do without the WAL or StatefulSets. Nothing changes here.&lt;/p&gt;
&lt;h3 id=&#34;scale-down&#34;&gt;Scale down&lt;/h3&gt;
&lt;p&gt;When scaling down, we must ensure existing data on the leaving ingesters is flushed to storage instead of just the WAL. This is because we won&amp;rsquo;t be replaying the WAL on an ingester that will no longer exist, and we need to make sure the data is not orphaned.&lt;/p&gt;
&lt;p&gt;Suppose you have 4 ingesters &lt;code&gt;ingester-0 ingester-1 ingester-2 ingester-3&lt;/code&gt; and you want to scale down to 2 ingesters. According to StatefulSet rules, the ingesters that will be shut down are &lt;code&gt;ingester-3&lt;/code&gt; and then &lt;code&gt;ingester-2&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Hence before actually scaling down in Kubernetes, port forward those ingesters and hit the 
    &lt;a href=&#34;/docs/loki/v3.7.x/reference/loki-http-api/#flush-in-memory-chunks-and-shut-down&#34;&gt;&lt;code&gt;/ingester/shutdown?flush=true&lt;/code&gt;&lt;/a&gt; endpoint. This will flush the chunks and remove itself from the ring, after which it will register as unready and may be deleted.&lt;/p&gt;
&lt;p&gt;After hitting the endpoint for &lt;code&gt;ingester-2 ingester-3&lt;/code&gt;, scale down the ingesters to 2.&lt;/p&gt;
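&lt;p&gt;The port-forward-and-flush step might look like the following. The pod names, the &lt;code&gt;loki&lt;/code&gt; namespace, and the default HTTP port 3100 are assumptions for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# Forward the ingester&amp;#39;s HTTP port locally, then flush and shut it down.
kubectl -n loki port-forward ingester-3 3100:3100 &amp;amp;
curl -X POST &amp;#39;http://127.0.0.1:3100/ingester/shutdown?flush=true&amp;#39;
# Repeat for ingester-2, then scale the StatefulSet down to 2 replicas.&lt;/code&gt;&lt;/pre&gt;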
&lt;p&gt;Also you can set the &lt;code&gt;--ingester.flush-on-shutdown&lt;/code&gt; flag to &lt;code&gt;true&lt;/code&gt;. This enables chunks to be flushed to long-term storage when the ingester is shut down.&lt;/p&gt;
&lt;h2 id=&#34;additional-notes&#34;&gt;Additional notes&lt;/h2&gt;
&lt;h3 id=&#34;kubernetes-hacking&#34;&gt;Kubernetes hacking&lt;/h3&gt;
&lt;p&gt;StatefulSets are significantly more cumbersome to work with, upgrade, and so on. Much of this stems from immutable fields on the specification. For example, if one wants to start using the WAL with single store Loki and wants separate volume mounts for the WAL and the boltdb-shipper, you may see immutability errors when attempting to update the Kubernetes StatefulSet.&lt;/p&gt;
&lt;p&gt;In this case, try &lt;code&gt;kubectl -n &amp;lt;namespace&amp;gt; delete sts ingester --cascade=false&lt;/code&gt;.
This will leave the Pods alive but delete the StatefulSet.
Then you may recreate the (updated) StatefulSet and one-by-one start deleting the &lt;code&gt;ingester-0&lt;/code&gt; through &lt;code&gt;ingester-n&lt;/code&gt; Pods &lt;em&gt;in that order&lt;/em&gt;, allowing the StatefulSet to spin up new pods to replace them.&lt;/p&gt;
&lt;h4 id=&#34;scaling-down-using-flush_shutdown-endpoint-and-lifecycle-hook&#34;&gt;Scaling Down Using &lt;code&gt;/flush_shutdown&lt;/code&gt; Endpoint and Lifecycle Hook&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;StatefulSets for Ordered Scaling Down&lt;/strong&gt;: The Loki ingesters should be scaled down one by one, which is efficiently handled by Kubernetes StatefulSets. This ensures an ordered and reliable scaling process, as described in the &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Deployment and Scaling Guarantees&lt;/a&gt; documentation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Using PreStop Lifecycle Hook&lt;/strong&gt;: During the Pod scaling down process, the PreStop &lt;a href=&#34;https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;lifecycle hook&lt;/a&gt; triggers the &lt;code&gt;/flush_shutdown&lt;/code&gt; endpoint on the ingester. This action flushes the chunks and removes the ingester from the ring, allowing it to register as unready and become eligible for deletion.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Using terminationGracePeriodSeconds&lt;/strong&gt;: Provides time for the ingester to flush its data before being deleted. If flushing data takes more than 30 minutes, you may need to increase this value.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cleaning Persistent Volumes&lt;/strong&gt;: Persistent volumes are automatically cleaned up by leveraging the &lt;a href=&#34;https://kubernetes.io/blog/2021/12/16/kubernetes-1-23-statefulset-pvc-auto-deletion/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;enableStatefulSetAutoDeletePVC&lt;/a&gt; feature in Kubernetes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;By following the above steps, you can ensure a smooth scaling down process for the Loki ingesters while maintaining data integrity and minimizing potential disruptions.&lt;/p&gt;
&lt;h3 id=&#34;non-kubernetes-or-baremetal-deployments&#34;&gt;Non-Kubernetes or baremetal deployments&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;When the ingester restarts for any reason (upgrade, crash, etc), it should be able to attach to the same volume in order to recover back the WAL and tokens.&lt;/li&gt;
&lt;li&gt;Two ingesters should not work with the same volume or directory for the WAL.&lt;/li&gt;
&lt;li&gt;A rollout should bring down an ingester completely and then start the new ingester, not the other way around.&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="write-ahead-log">Write Ahead Log&lt;/h1>
&lt;p>Ingesters temporarily store data in memory. In the event of a crash, there could be data loss. The Write Ahead Log (WAL) helps fill this gap in reliability.&lt;/p></description></item><item><title>Log retention</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/retention/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/operations/storage/retention/</guid><content><![CDATA[&lt;h1 id=&#34;log-retention&#34;&gt;Log retention&lt;/h1&gt;
&lt;p&gt;Retention in Grafana Loki is achieved through the &lt;a href=&#34;#compactor&#34;&gt;Compactor&lt;/a&gt;.
By default the &lt;code&gt;compactor.retention-enabled&lt;/code&gt; flag is not set, so the logs sent to Loki live forever.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;If you have a lifecycle policy configured on the object store, please ensure that it is longer than the retention period.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The Compactor also supports granular retention policies that apply retention at the per-tenant or per-stream level.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;The Compactor does not support retention on 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/storage/#index-storage&#34;&gt;legacy index types&lt;/a&gt;. Please use the 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/table-manager/&#34;&gt;Table Manager&lt;/a&gt; when using legacy index types.
Both the Table manager and legacy index types are deprecated and may be removed in future major versions of Loki.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;compactor&#34;&gt;Compactor&lt;/h2&gt;
&lt;p&gt;The Compactor is responsible for compaction of index files and applying log retention.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Run the Compactor as a singleton (a single instance).&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The Compactor loops to apply compaction and retention at every &lt;code&gt;compactor.compaction-interval&lt;/code&gt;, or as soon as possible if running behind.
Both compaction and retention are idempotent: performing either action multiple times has no further effect on logs after the first time it is performed. If the Compactor restarts, it continues from where it left off.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Changes to your retention period are not retroactive, that is, they are not applied to logs that have already been ingested.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The Compactor&amp;rsquo;s algorithm to apply retention is as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For each day or table (one table per day with 24h index period):
&lt;ul&gt;
&lt;li&gt;Compact multiple index files in the table into per-tenant index files. Result of compaction is a single index file per tenant per day.&lt;/li&gt;
&lt;li&gt;Traverse the per-tenant index. Use the tenant configuration to identify the chunks that need to be removed.&lt;/li&gt;
&lt;li&gt;Remove the references to the matching chunks from the index and add the chunk references to a marker file on disk.&lt;/li&gt;
&lt;li&gt;Upload the new modified index files.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Chunks are not deleted while applying the retention algorithm on the index. They are deleted asynchronously by a sweeper process
and this delay can be configured by setting &lt;code&gt;-compactor.retention-delete-delay&lt;/code&gt;. Marker files are used to keep track of the chunks pending for deletion.&lt;/p&gt;
&lt;p&gt;Chunks cannot be deleted immediately for the following reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The Index Gateway downloads a copy of the index files to serve queries and refreshes them at a regular interval.
Having a delay allows the index gateways to pull the modified index files, which no longer contain any references to the chunks marked for deletion.
Without the delay, stale index files on the gateways could refer to already-deleted chunks, leading to query failures.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It provides a short window of time in which to cancel chunk deletion in the case of a configuration mistake.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Marker files should be stored on a persistent disk to ensure that the chunks pending for deletion are processed even if the Compactor process restarts.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Grafana Labs recommends running the Compactor as a stateful deployment (a StatefulSet when using Kubernetes) with persistent storage for storing marker files.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;
&lt;h3 id=&#34;retention-configuration&#34;&gt;Retention Configuration&lt;/h3&gt;
&lt;p&gt;This Compactor configuration example activates retention.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;compactor:
  working_directory: /data/retention
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
  delete_request_store: gcs
schema_config:
  configs:
    - from: &amp;#34;2020-07-31&amp;#34;
      index:
        period: 24h
        prefix: index_
      object_store: gcs
      schema: v13
      store: tsdb
storage_config:
  tsdb_shipper:
    active_index_directory: /data/index
    cache_location: /data/index_cache
  gcs:
    bucket_name: loki&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Retention is only available if the index period is 24h. Single store TSDB and single store BoltDB require a 24h index period.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;retention_enabled&lt;/code&gt; should be set to true. Without this, the Compactor will only compact tables.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;delete_request_store&lt;/code&gt; should be set to configure the store for delete requests. This is required when retention is enabled.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;working_directory&lt;/code&gt; is the directory where marked chunks and temporary tables will be saved.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;compaction_interval&lt;/code&gt; dictates how often compaction and/or retention is applied. If the Compactor falls behind, compaction and/or retention occur as soon as possible.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;retention_delete_delay&lt;/code&gt; is the delay after which the Compactor will delete marked chunks.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;retention_delete_worker_count&lt;/code&gt; specifies the maximum quantity of goroutine workers instantiated to delete chunks.&lt;/p&gt;
&lt;h4 id=&#34;configuring-the-retention-period&#34;&gt;Configuring the retention period&lt;/h4&gt;
&lt;p&gt;Retention period is configured within the 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#limits_config&#34;&gt;&lt;code&gt;limits_config&lt;/code&gt;&lt;/a&gt; configuration section.&lt;/p&gt;
&lt;p&gt;There are two ways of setting retention policies:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;retention_period&lt;/code&gt; which is applied globally for all log streams.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;retention_stream&lt;/code&gt; which is only applied to log streams matching the selector.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;The minimum retention period is 24h.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;This example configures global retention that applies to all tenants (unless overridden by configuring per-tenant overrides):&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;...
limits_config:
  retention_period: 744h
  retention_stream:
  - selector: &amp;#39;{namespace=&amp;#34;dev&amp;#34;}&amp;#39;
    priority: 1
    period: 24h
  per_tenant_override_config: /etc/overrides.yaml
...&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;You can only use label matchers in the &lt;code&gt;selector&lt;/code&gt; field of a &lt;code&gt;retention_stream&lt;/code&gt; definition. Arbitrary LogQL expressions are not supported.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Per tenant retention can be defined by configuring 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#runtime-configuration-file&#34;&gt;runtime overrides&lt;/a&gt;. For example:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;overrides:
    &amp;#34;29&amp;#34;:
        retention_period: 168h
        retention_stream:
        - selector: &amp;#39;{namespace=&amp;#34;prod&amp;#34;}&amp;#39;
          priority: 2
          period: 336h
        - selector: &amp;#39;{container=&amp;#34;loki&amp;#34;}&amp;#39;
          priority: 1
          period: 72h
    &amp;#34;30&amp;#34;:
        retention_stream:
        - selector: &amp;#39;{container=&amp;#34;nginx&amp;#34;, level=&amp;#34;debug&amp;#34;}&amp;#39;
          priority: 1
          period: 24h&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Retention period for a given stream is decided based on the first match in this list:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;If multiple per-tenant &lt;code&gt;retention_stream&lt;/code&gt; selectors match the stream, retention period with the highest priority is picked.&lt;/li&gt;
&lt;li&gt;If multiple global &lt;code&gt;retention_stream&lt;/code&gt; selectors match the stream, retention period with the highest priority is picked. This value is not considered if per-tenant &lt;code&gt;retention_stream&lt;/code&gt; is set.&lt;/li&gt;
&lt;li&gt;If a per-tenant &lt;code&gt;retention_period&lt;/code&gt; is specified, it will be applied.&lt;/li&gt;
&lt;li&gt;The global &lt;code&gt;retention_period&lt;/code&gt; will be applied if none of the above match.&lt;/li&gt;
&lt;li&gt;If no global &lt;code&gt;retention_period&lt;/code&gt; is specified, the default value of &lt;code&gt;0s&lt;/code&gt; is used, which means logs are kept indefinitely.&lt;/li&gt;
&lt;/ol&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;The larger the priority value, the higher the priority.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Stream matching uses the same syntax as Prometheus label matching:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;=&lt;/code&gt;: Select labels that are exactly equal to the provided string.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;!=&lt;/code&gt;: Select labels that are not equal to the provided string.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;=~&lt;/code&gt;: Select labels that regex-match the provided string.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;!~&lt;/code&gt;: Select labels that do not regex-match the provided string.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The example configurations defined above will result in the following retention periods:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For tenant &lt;code&gt;29&lt;/code&gt;:
&lt;ul&gt;
&lt;li&gt;Streams that have the namespace label &lt;code&gt;prod&lt;/code&gt; will have a retention period of &lt;code&gt;336h&lt;/code&gt; (2 weeks), even if the container label is &lt;code&gt;loki&lt;/code&gt;, since the priority of the &lt;code&gt;prod&lt;/code&gt; rule is higher.&lt;/li&gt;
&lt;li&gt;Streams that have the container label &lt;code&gt;loki&lt;/code&gt; but are not in the namespace &lt;code&gt;prod&lt;/code&gt; will have a &lt;code&gt;72h&lt;/code&gt; retention period.&lt;/li&gt;
&lt;li&gt;For the rest of the streams in this tenant, the per-tenant &lt;code&gt;retention_period&lt;/code&gt; override of &lt;code&gt;168h&lt;/code&gt; is applied.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;For tenant &lt;code&gt;30&lt;/code&gt;:
&lt;ul&gt;
&lt;li&gt;Streams that have the container label &lt;code&gt;nginx&lt;/code&gt; and the level label &lt;code&gt;debug&lt;/code&gt; will have a retention period of &lt;code&gt;24h&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For the rest of the streams in this tenant, the global retention period of &lt;code&gt;744h&lt;/code&gt; applies, since there is no per-tenant override specified.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;All tenants except &lt;code&gt;29&lt;/code&gt; and &lt;code&gt;30&lt;/code&gt;:
&lt;ul&gt;
&lt;li&gt;Streams that have the namespace label &lt;code&gt;dev&lt;/code&gt; will have a retention period of &lt;code&gt;24h&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;All other streams will have the global retention period of &lt;code&gt;744h&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
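&lt;p&gt;Selectors can also combine equality and regex matchers. As an illustrative sketch (the tenant ID, label names, and periods below are hypothetical, not part of the example above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;overrides:
  &amp;#34;42&amp;#34;:
    retention_stream:
    # Keep debug logs from any dev-* namespace for only one day.
    - selector: &amp;#39;{namespace=~&amp;#34;dev-.*&amp;#34;, level=&amp;#34;debug&amp;#34;}&amp;#39;
      priority: 1
      period: 24h&lt;/code&gt;&lt;/pre&gt;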


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;If you are a Grafana Cloud customer, you can use the &lt;a href=&#34;/docs/grafana-cloud/send-data/logs/config-self-serve/#configure-retention&#34;&gt;config self-serve API&lt;/a&gt; to configure your tenant retention.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;table-manager-deprecated&#34;&gt;Table Manager (deprecated)&lt;/h2&gt;
&lt;p&gt;Retention through the &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/table-manager/&#34;&gt;Table Manager&lt;/a&gt; relies on the object store TTL feature, and works for both the &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/boltdb-shipper/&#34;&gt;boltdb-shipper&lt;/a&gt; store and chunk/index stores.&lt;/p&gt;
&lt;p&gt;To enable retention support, the Table Manager must be configured with deletions enabled and a retention period set. Refer to the
&lt;a href=&#34;/docs/loki/v3.7.x/configure/#table_manager&#34;&gt;&lt;code&gt;table_manager&lt;/code&gt;&lt;/a&gt;
section of the Loki configuration reference for all available options.
Alternatively, the &lt;code&gt;table-manager.retention-period&lt;/code&gt; and
&lt;code&gt;table-manager.retention-deletes-enabled&lt;/code&gt; command line flags can be used. The
retention period must be a duration string that
can be parsed by the Prometheus common model &lt;a href=&#34;https://pkg.go.dev/github.com/prometheus/common/model#ParseDuration&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;ParseDuration&lt;/a&gt;. Examples: &lt;code&gt;7d&lt;/code&gt;, &lt;code&gt;1w&lt;/code&gt;, &lt;code&gt;168h&lt;/code&gt;.&lt;/p&gt;
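&lt;p&gt;As a minimal sketch, the corresponding YAML could look like the following (the period shown is illustrative; see the &lt;code&gt;table_manager&lt;/code&gt; reference for the full set of options):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;table_manager:
  # Allow the Table Manager to delete tables past the retention period.
  retention_deletes_enabled: true
  # Must be a multiple of the index/chunks table period.
  retention_period: 168h&lt;/code&gt;&lt;/pre&gt;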


&lt;div class=&#34;admonition admonition-warning&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Warning&lt;/p&gt;&lt;p&gt;The retention period must be a multiple of the index and chunks table
&lt;code&gt;period&lt;/code&gt;, configured in the 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#period_config&#34;&gt;&lt;code&gt;period_config&lt;/code&gt;&lt;/a&gt; block.
See the 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/table-manager/#retention&#34;&gt;Table Manager&lt;/a&gt; documentation for
more information.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;



&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;To avoid querying data beyond the retention period, the &lt;code&gt;max_query_lookback&lt;/code&gt; setting in 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#limits_config&#34;&gt;&lt;code&gt;limits_config&lt;/code&gt;&lt;/a&gt; must be set to a value less than or equal to &lt;code&gt;table_manager.retention_period&lt;/code&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;When using S3 or GCS, the bucket storing the chunks needs to have the expiry
policy set correctly. For more details check
&lt;a href=&#34;https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;S3&amp;rsquo;s documentation&lt;/a&gt;
or
&lt;a href=&#34;https://cloud.google.com/storage/docs/managing-lifecycles&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;GCS&amp;rsquo;s documentation&lt;/a&gt;.&lt;/p&gt;
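&lt;p&gt;As an illustrative sketch, a GCS lifecycle rule that deletes objects after 28 days could look like the following (refer to the GCS documentation linked above for the authoritative format):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-json&#34;&gt;{
  &amp;#34;rule&amp;#34;: [
    {
      &amp;#34;action&amp;#34;: { &amp;#34;type&amp;#34;: &amp;#34;Delete&amp;#34; },
      &amp;#34;condition&amp;#34;: { &amp;#34;age&amp;#34;: 28 }
    }
  ]
}&lt;/code&gt;&lt;/pre&gt;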
&lt;p&gt;If you must delete ingested logs, you can delete old chunks in your object store. Note,
however, that this only deletes the log content and keeps the label index
intact; you will still be able to see related labels but will be unable to
retrieve the deleted log content.&lt;/p&gt;
&lt;p&gt;For further details on the Table Manager internals, refer to the

    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/table-manager/&#34;&gt;Table Manager&lt;/a&gt; documentation.&lt;/p&gt;
&lt;p&gt;Alternatively, if the BoltDB Shipper is configured for the index store, you can enable 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/logs-deletion/&#34;&gt;Log entry deletion&lt;/a&gt; to delete log entries from a specific stream.&lt;/p&gt;
&lt;h2 id=&#34;example-configuration&#34;&gt;Example Configuration&lt;/h2&gt;
&lt;p&gt;Example configuration using GCS with a 28-day retention:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;schema_config:
  configs:
    - from: 2018-04-15
      store: tsdb
      object_store: gcs
      schema: v13
      index:
        prefix: loki_index_
        period: 24h

storage_config:
  tsdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
  gcs:
    bucket_name: GCS_BUCKET_NAME

limits_config:
  max_query_lookback: 672h # 28 days
  retention_period: 672h   # 28 days

compactor:
  working_directory: /data/retention
  delete_request_store: gcs
  retention_enabled: true&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
]]></content><description>&lt;h1 id="log-retention">Log retention&lt;/h1>
&lt;p>Retention in Grafana Loki is achieved through the &lt;a href="#compactor">Compactor&lt;/a>.
By default the &lt;code>compactor.retention-enabled&lt;/code> flag is not set, so the logs sent to Loki live forever.&lt;/p></description></item><item><title>Log entry deletion</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/logs-deletion/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/operations/storage/logs-deletion/</guid><content><![CDATA[&lt;h1 id=&#34;log-entry-deletion&#34;&gt;Log entry deletion&lt;/h1&gt;
&lt;p&gt;Grafana Loki supports the deletion of log entries from a specified stream.
Entries that fall within a specified time window and match an optional line filter are deleted.&lt;/p&gt;
&lt;p&gt;Log entry deletion is supported &lt;em&gt;only&lt;/em&gt; when TSDB or BoltDB shipper is configured as the index store.&lt;/p&gt;
&lt;p&gt;The compactor component exposes REST 
    &lt;a href=&#34;/docs/loki/v3.7.x/reference/loki-http-api/#compactor&#34;&gt;endpoints&lt;/a&gt; that process delete requests.
A request to an endpoint specifies the streams and the time window to delete.
The deletion of the log entries takes place after a configurable cancellation time period expires.&lt;/p&gt;
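&lt;p&gt;As a hedged sketch, submitting a delete request could look like the following (the host, port, tenant ID, stream selector, and time range are placeholders; refer to the Compactor endpoints linked above for the exact parameters):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# Delete entries matching a stream selector within a time window (Unix epoch seconds).
curl -X POST -H &amp;#34;X-Scope-OrgID: &amp;lt;tenant&amp;gt;&amp;#34; \
  &amp;#34;http://&amp;lt;compactor&amp;gt;:3100/loki/api/v1/delete?query={namespace=%22dev%22}&amp;amp;start=1700000000&amp;amp;end=1700086400&amp;#34;&lt;/code&gt;&lt;/pre&gt;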
&lt;p&gt;Log entry deletion relies on configuration of the custom logs retention workflow as defined for the &lt;a href=&#34;../retention/#compactor&#34;&gt;compactor&lt;/a&gt;. The compactor looks at unprocessed requests which are past their cancellation period to decide whether a chunk is to be deleted or not.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Enable log entry deletion by setting &lt;code&gt;retention_enabled&lt;/code&gt; to true in the compactor&amp;rsquo;s configuration and setting &lt;code&gt;deletion_mode&lt;/code&gt; to &lt;code&gt;filter-only&lt;/code&gt; or &lt;code&gt;filter-and-delete&lt;/code&gt; in the runtime config.
&lt;code&gt;delete_request_store&lt;/code&gt; must also be configured when retention is enabled; it determines the storage bucket that stores the delete requests.&lt;/p&gt;
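&lt;p&gt;A minimal sketch of such a configuration (the storage type and paths are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;compactor:
  working_directory: /data/retention
  retention_enabled: true
  # Storage used to persist delete requests.
  delete_request_store: gcs

limits_config:
  # Runtime configuration; can also be overridden per tenant.
  deletion_mode: filter-and-delete&lt;/code&gt;&lt;/pre&gt;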


&lt;div class=&#34;admonition admonition-warning&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Warning&lt;/p&gt;&lt;p&gt;Be very careful when enabling retention. It is strongly recommended that you also enable versioning on your objects in object storage to allow you to recover from accidental misconfiguration of a retention setting. If you want to enable deletion but do not want to enforce retention, configure the &lt;code&gt;retention_period&lt;/code&gt; setting with a value of &lt;code&gt;0s&lt;/code&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Because it is a runtime configuration, &lt;code&gt;deletion_mode&lt;/code&gt; can be set per-tenant, if desired.&lt;/p&gt;
&lt;p&gt;With &lt;code&gt;filter-only&lt;/code&gt;, log lines matching the query in the delete request are filtered out when querying Loki. They are not removed from storage.
With &lt;code&gt;filter-and-delete&lt;/code&gt;, log lines matching the query in the delete request are filtered out when querying Loki, and they are also removed from storage.&lt;/p&gt;
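&lt;p&gt;For example, a per-tenant override in the runtime configuration file could look like this (the tenant ID is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;overrides:
  &amp;#34;29&amp;#34;:
    deletion_mode: filter-and-delete&lt;/code&gt;&lt;/pre&gt;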
&lt;p&gt;A delete request may be canceled within a configurable cancellation period. Set the &lt;code&gt;delete_request_cancel_period&lt;/code&gt; in the compactor&amp;rsquo;s YAML configuration or on the command line when invoking Loki. Its default value is 24h.&lt;/p&gt;
&lt;p&gt;As long as the &lt;code&gt;compactor.retention_enabled&lt;/code&gt; setting is &lt;code&gt;true&lt;/code&gt;, the API endpoints are available. Access to the deletion API can then be enabled per tenant via the &lt;code&gt;deletion_mode&lt;/code&gt; tenant override.&lt;/p&gt;
]]></content><description>&lt;h1 id="log-entry-deletion">Log entry deletion&lt;/h1>
&lt;p>Grafana Loki supports the deletion of log entries from a specified stream.
Log entries that fall within a specified time window and match an optional line filter are those that will be deleted.&lt;/p></description></item><item><title>Horizontal scaling of Compactor</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/compactor-horizontal-scaling/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/operations/storage/compactor-horizontal-scaling/</guid><content><![CDATA[&lt;h1 id=&#34;introduction&#34;&gt;Introduction&lt;/h1&gt;


&lt;div class=&#34;admonition admonition-caution&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Caution&lt;/p&gt;&lt;p&gt;Compactor horizontal scaling is an experimental feature. Use it with caution in your production environments.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The introduction of object-storage-based indexes brought a major improvement in Grafana Loki&amp;rsquo;s operational simplicity and cost-effectiveness.
This change also led to the addition of a singleton Compactor service, initially responsible only for index compaction.
However, as new features like &lt;a href=&#34;../retention&#34;&gt;Custom Retention&lt;/a&gt; and &lt;a href=&#34;../logs-deletion&#34;&gt;Deletion of Logs with line filters&lt;/a&gt; were introduced, the Compactor&amp;rsquo;s responsibilities grew.
With increasing scale and more demanding features, especially log deletion with line filters, the singleton Compactor began to show its scaling limits.&lt;/p&gt;
&lt;p&gt;You can now run the Loki Compactor in a horizontally scalable mode.
Since log deletion with line filters is the Compactor&amp;rsquo;s most operationally intensive work, this architecture is initially leveraged to distribute and speed up that workload.&lt;/p&gt;
&lt;h1 id=&#34;how-it-works&#34;&gt;How it works&lt;/h1&gt;
&lt;p&gt;The Compactor has two new modes for operating in a horizontally scalable deployment:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Main:
&lt;ul&gt;
&lt;li&gt;Runs all Compactor functions and distributes chunk processing for log line deletion with filters to the workers.&lt;/li&gt;
&lt;li&gt;Should be deployed as a singleton with access to a disk, like the current singleton deployment.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Worker:
&lt;ul&gt;
&lt;li&gt;Connects to the Main Compactor over gRPC to get and execute the jobs.&lt;/li&gt;
&lt;li&gt;Multiple replicas can be deployed to achieve higher job processing throughput.&lt;/li&gt;
&lt;li&gt;Only needs access to Object Storage for reading/writing chunks.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;implementation-details&#34;&gt;Implementation details&lt;/h2&gt;
&lt;p&gt;Although the horizontally scalable Compactor currently only supports distributing the work of log line deletion with filters,
support for distributing other kinds of work to the Workers may be added in the future.
The sections below use the current functionality to walk through some of the implementation details.&lt;/p&gt;
&lt;h3 id=&#34;working-of-main-mode&#34;&gt;Working of Main mode&lt;/h3&gt;
&lt;p&gt;Distributing chunk processing work from the Main Compactor involves the following core components:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Deletion Manifest Builder&lt;/strong&gt;: The manifest builder works on a batch of delete requests, discovering all the chunks they cover based on their label filters and time ranges.
From the discovered chunks, it creates structured manifests and stores them in Object Storage. A deletion manifest comprises:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Segments&lt;/em&gt;: Groups of up to 100K chunks per segment. Segments also partition chunks by Loki tenant/table.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Manifest&lt;/em&gt;: Complete metadata about all segments and requests to process.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Job Builder&lt;/strong&gt;: The job builder converts manifests into discrete jobs and manages their lifecycle:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Job Creation&lt;/em&gt;: Breaks segments into jobs of up to 1K chunks each. Each job includes the line filters to apply to its chunks to remove the log lines requested for deletion.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Progress Tracking&lt;/em&gt;: Monitors job completion and stops processing a manifest when any of its jobs fail.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Job Queueing&lt;/em&gt;: Sends jobs to the Job Queue for processing.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Storage Updates&lt;/em&gt;: Collects storage updates suggested by Workers and stores them in Object Storage for each segment.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Post-processing Cleanup&lt;/em&gt;: Marks requests as processed and removes all the files from the storage.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Job Queue&lt;/strong&gt;: The job queue manages job distribution and Worker-Job Builder communication.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Job Distribution&lt;/em&gt;: Sends jobs to available workers via gRPC streaming.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Retry Logic&lt;/em&gt;: Automatically retries failed or timed-out jobs up to the allowed number of attempts.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Job Response&lt;/em&gt;: Sends the job processing response to Job Builder. Also notifies about failed jobs after running out of retries.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;working-of-worker-mode&#34;&gt;Working of Worker mode&lt;/h3&gt;
&lt;p&gt;Workers connect to the Main Compactor via gRPC to fetch and execute jobs, returning results on the same gRPC stream.
Each worker uses &lt;code&gt;compactor_grpc_address&lt;/code&gt; under the 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#common&#34;&gt;common config&lt;/a&gt; to connect to the Main Compactor.&lt;/p&gt;
&lt;p&gt;When handling Deletion jobs, a worker downloads the listed chunks, applies the specified filters to rebuild them without the filtered lines, and then returns a comprehensive storage update as the job execution response.
The storage update details which chunks to delete from Object Storage and which newly created chunks to index.&lt;/p&gt;
&lt;h3 id=&#34;sequence-diagram&#34;&gt;Sequence diagram&lt;/h3&gt;
&lt;p&gt;The sequence diagram depicts the Deletion Manifest Builder, Job Builder, and Job Queue as entities separate from the Main Compactor to show how the components are interlinked.
In reality, all three components run within the Main Compactor.&lt;/p&gt;
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p &#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link&#34;
        href=&#34;../compactor-HS-seq-diagram.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload &#34;
          data-src=&#34;../compactor-HS-seq-diagram.png&#34; alt=&#34;Compactor Horizontal Scaling Sequence Diagram&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;../compactor-HS-seq-diagram.png&#34;
            alt=&#34;Compactor Horizontal Scaling Sequence Diagram&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;/a&gt;&lt;/figure&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;horizontal_scaling_mode&lt;/code&gt; configuration option in the compactor controls whether the Compactor runs as part of a horizontally scalable deployment.
It supports the following modes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;disabled&lt;/code&gt;&lt;/strong&gt; (default): Horizontal scaling is disabled; the compactor runs all of its functions locally.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;main&lt;/code&gt;&lt;/strong&gt;: Runs all functions of the compactor. Distributes work to workers where possible.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;worker&lt;/code&gt;&lt;/strong&gt;: Runs the compactor in worker mode, only working on jobs built by the main compactor.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;config-for-main-mode&#34;&gt;Config for Main mode&lt;/h3&gt;
&lt;p&gt;To run the Compactor in Main mode, set &lt;code&gt;horizontal_scaling_mode&lt;/code&gt; to &lt;code&gt;main&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;compactor:
  # CLI flag: -compactor.horizontal-scaling-mode=&amp;#34;main&amp;#34;
  horizontal_scaling_mode: &amp;#34;main&amp;#34;&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Additionally, the following config options are available for the Main Compactor to configure aspects of job building:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;compactor:
  jobs_config:
    deletion:
      # Object storage path prefix for storing deletion manifests.
      # CLI flag: -compactor.jobs.deletion.deletion-manifest-store-prefix
      [deletion_manifest_store_prefix: &amp;lt;string&amp;gt; | default = &amp;#34;__deletion_manifest__/&amp;#34;]
      
      # Maximum time to wait for a job before considering it failed and retrying.
      # CLI flag: -compactor.jobs.deletion.timeout
      [timeout: &amp;lt;duration&amp;gt; | default = 15m]
      
      # Maximum number of times to retry a failed or timed out job.
      # CLI flag: -compactor.jobs.deletion.max-retries
      [max_retries: &amp;lt;int&amp;gt; | default = 3]&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h3 id=&#34;config-for-worker-mode&#34;&gt;Config for Worker mode&lt;/h3&gt;
&lt;p&gt;To run the Compactor in Worker mode, set &lt;code&gt;horizontal_scaling_mode&lt;/code&gt; to &lt;code&gt;worker&lt;/code&gt; and set the Main Compactor&amp;rsquo;s gRPC address:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;common:
  compactor_grpc_address: &amp;lt;HOST&amp;gt;:&amp;lt;GRPC_PORT&amp;gt;
compactor:
  # CLI flag: -compactor.horizontal-scaling-mode=&amp;#34;worker&amp;#34;
  horizontal_scaling_mode: &amp;#34;worker&amp;#34;&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Additionally, the following config options are available for the Worker to configure aspects of job execution:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;compactor:
  jobs_config:
    deletion:
      # Maximum number of chunks to process concurrently in each worker.
      # CLI flag: -compactor.jobs.deletion.chunk-processing-concurrency
      [chunk_processing_concurrency: &amp;lt;int&amp;gt; | default = 3]

worker_config:
  # Number of sub-workers to run for concurrent processing of jobs. Setting it to 0
  # will run a subworker per available CPU core.
  # CLI flag: -compactor.worker.num-sub-workers
  [num_sub_workers: &amp;lt;int&amp;gt; | default = 0]&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
]]></content><description>&lt;h1 id="introduction">Introduction&lt;/h1>
&lt;div class="admonition admonition-caution">&lt;blockquote>&lt;p class="title text-uppercase">Caution&lt;/p>&lt;p>Compactor horizontal scaling is an experimental feature. Use it with caution in your production environments.&lt;/p>&lt;/blockquote>&lt;/div>
&lt;p>Grafana Loki saw a major improvement in its operational complexity and cost-effectiveness with the introduction of object-storage-based indexes.
This change also led to the addition of a singleton Compactor service, initially responsible only for index compaction.
However, as new features like &lt;a href="../retention">Custom Retention&lt;/a> and &lt;a href="../logs-deletion">Deletion of Logs with line filters&lt;/a> were introduced, the Compactor&amp;rsquo;s responsibilities grew.
With increasing scale and more demanding features, especially log deletion with line filters, the singleton Compactor began to show its scaling limits.&lt;/p></description></item><item><title>Legacy storage</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/legacy-storage/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/operations/storage/legacy-storage/</guid><content><![CDATA[&lt;h1 id=&#34;legacy-storage&#34;&gt;Legacy storage&lt;/h1&gt;


&lt;div class=&#34;admonition admonition-warning&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Warning&lt;/p&gt;&lt;p&gt;The concepts described on this page are considered legacy and pre-date the single store storage introduced in Loki 2.0.
Using legacy storage for new installations is highly discouraged; this documentation is provided for informational
purposes in case of an upgrade to a single store.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;chunk store&lt;/strong&gt; is Loki&amp;rsquo;s long-term data store, designed to support
interactive querying and sustained writing without the need for background
maintenance tasks. It consists of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An index for the chunks. This index can be backed by:
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://aws.amazon.com/dynamodb&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Amazon DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cloud.google.com/bigtable&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Google Bigtable&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cassandra.apache.org&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Apache Cassandra&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;A key-value (KV) store for the chunk data itself, which can be:
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://aws.amazon.com/dynamodb&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Amazon DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cloud.google.com/bigtable&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Google Bigtable&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cassandra.apache.org&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Apache Cassandra&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://aws.amazon.com/s3&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Amazon S3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cloud.google.com/storage/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Google Cloud Storage&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Unlike the other core components of Loki, the chunk store is not a separate
service, job, or process, but rather a library embedded in the two services
that need to access Loki data: the &lt;a href=&#34;../../../get-started/components/#ingester&#34;&gt;ingester&lt;/a&gt; and &lt;a href=&#34;../../../get-started/components/#querier&#34;&gt;querier&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The chunk store relies on a unified interface to the
&amp;ldquo;&lt;a href=&#34;https://en.wikipedia.org/wiki/NoSQL&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;NoSQL&lt;/a&gt;&amp;rdquo; stores (DynamoDB, Bigtable, and
Cassandra) that can be used to back the chunk store index. This interface
assumes that the index is a collection of entries keyed by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;hash key&lt;/strong&gt;. This is required for &lt;em&gt;all&lt;/em&gt; reads and writes.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;range key&lt;/strong&gt;. This is required for writes and can be omitted for reads,
which can be queried by prefix or range.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The interface works somewhat differently across the supported databases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DynamoDB supports range and hash keys natively. Index entries are thus
modelled directly as DynamoDB entries, with the hash key as the distribution
key and the range as the DynamoDB range key.&lt;/li&gt;
&lt;li&gt;For Bigtable and Cassandra, index entries are modelled as individual column
values. The hash key becomes the row key and the range key becomes the column
key.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A set of schemas is used to map the matchers and label sets used on reads and
writes to the chunk store into appropriate operations on the index. Schemas have
been added as Loki has evolved, mainly to better load balance
writes and improve query performance.&lt;/p&gt;
]]></content><description>&lt;h1 id="legacy-storage">Legacy storage&lt;/h1>
&lt;div class="admonition admonition-warning">&lt;blockquote>&lt;p class="title text-uppercase">Warning&lt;/p>&lt;p>The concepts described on this page are considered legacy and pre-date the single store storage introduced in Loki 2.0.
The usage of legacy storage for new installations is highly discouraged and documentation is meant for informational
purposes in case of upgrade to a single store.&lt;/p></description></item><item><title>Table manager</title><link>https://grafana.com/docs/loki/v3.7.x/operations/storage/table-manager/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/operations/storage/table-manager/</guid><content><![CDATA[&lt;h1 id=&#34;table-manager&#34;&gt;Table manager&lt;/h1&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;The Table Manager is only needed if you are using a multi-store
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/storage/&#34;&gt;backend&lt;/a&gt;. If you are using either TSDB (recommended) or BoltDB (deprecated), you do not need the Table Manager.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Grafana Loki supports storing indexes and chunks in table-based data stores. When
such a store is used, multiple tables are created over time: each table, also
called a periodic table, contains the data for a specific time range.&lt;/p&gt;
&lt;p&gt;This design brings two main benefits:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Schema config changes&lt;/strong&gt;: each table is bound to a schema config and
version, so changes can be introduced over time and multiple schema
configs can coexist&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Retention&lt;/strong&gt;: retention is implemented by deleting an entire table, which
allows for fast delete operations&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The &lt;strong&gt;Table Manager&lt;/strong&gt; is a Loki component that creates each
periodic table before its time period begins, and deletes it once its data
time range exceeds the retention period.&lt;/p&gt;
&lt;p&gt;The Table Manager supports the following backends:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Index store&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;../boltdb-shipper/&#34;&gt;Single Store (boltdb-shipper)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://aws.amazon.com/dynamodb&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Amazon DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cloud.google.com/bigtable&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Google Bigtable&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cassandra.apache.org&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Apache Cassandra&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/boltdb/bolt&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;BoltDB&lt;/a&gt; (primarily used for local environments)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Chunk store&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Filesystem (primarily used for local environments)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Loki also supports the following backends for both index and chunk storage, but they are deprecated and will be removed in a future release:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://aws.amazon.com/dynamodb&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Amazon DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cloud.google.com/bigtable&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Google Bigtable&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cassandra.apache.org&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Apache Cassandra&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The object stores supported by Loki for chunk storage, such as Amazon S3 and
Google Cloud Storage, are not managed by the Table Manager; a custom bucket
policy should be set to delete old data.&lt;/p&gt;
&lt;p&gt;For detailed information on configuring the Table Manager, refer to the

    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#table_manager&#34;&gt;&lt;code&gt;table_manager&lt;/code&gt;&lt;/a&gt;
section in the Loki configuration document.&lt;/p&gt;
&lt;h2 id=&#34;tables-and-schema-config&#34;&gt;Tables and schema config&lt;/h2&gt;
&lt;p&gt;A periodic table stores the index or chunk data for a specific period
of time. The duration of the time range covered by a single table, and
its storage type, are configured in the

    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#schema_config&#34;&gt;&lt;code&gt;schema_config&lt;/code&gt;&lt;/a&gt; configuration
block.&lt;/p&gt;
&lt;p&gt;The 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#schema_config&#34;&gt;&lt;code&gt;schema_config&lt;/code&gt;&lt;/a&gt; can contain
one or more &lt;code&gt;configs&lt;/code&gt;. Each config defines the storage used between the day
set in &lt;code&gt;from&lt;/code&gt; (in the format &lt;code&gt;yyyy-mm-dd&lt;/code&gt;) and the start of the next config, or &amp;ldquo;now&amp;rdquo;
in the case of the last schema config entry.&lt;/p&gt;
&lt;p&gt;This allows multiple non-overlapping schema configs to exist over time, making
it possible to perform schema version upgrades or change storage settings
(including the storage type).&lt;/p&gt;
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p &#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link&#34;
        href=&#34;./table-manager-periodic-tables.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload &#34;
          data-src=&#34;./table-manager-periodic-tables.png&#34;alt=&#34;periodic tables&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;./table-manager-periodic-tables.png&#34;
            alt=&#34;periodic tables&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;The write path hits the table into which the log entry timestamp falls (usually
the last table, except for short periods around the end of one table and the
beginning of the next), while the read path hits every table containing data
for the query time range.&lt;/p&gt;
&lt;h3 id=&#34;schema-config-example&#34;&gt;Schema config example&lt;/h3&gt;
&lt;p&gt;For example, the following &lt;code&gt;schema_config&lt;/code&gt; defines two configurations: the first
using schema &lt;code&gt;v10&lt;/code&gt; and the current one using &lt;code&gt;v11&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The first config stores data between &lt;code&gt;2019-01-01&lt;/code&gt; and &lt;code&gt;2019-04-14&lt;/code&gt; (inclusive);
then a new config was added, to upgrade the schema version to &lt;code&gt;v11&lt;/code&gt;,
storing data with the &lt;code&gt;v11&lt;/code&gt; schema from &lt;code&gt;2019-04-15&lt;/code&gt; onward.&lt;/p&gt;
&lt;p&gt;For each config, multiple tables are created, each storing data for one
&lt;code&gt;period&lt;/code&gt; of time (168 hours = 7 days).&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;schema_config:
  configs:
    - from:   2019-01-01
      store:  dynamo
      schema: v10
      index:
        prefix: loki_
        period: 168h
    - from:   2019-04-15
      store:  dynamo
      schema: v11
      index:
        prefix: loki_
        period: 168h&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
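&lt;p&gt;Each resulting table is named with the configured index &lt;code&gt;prefix&lt;/code&gt; followed by the
period number, that is, the table start time as a Unix timestamp divided by the table
period. As an illustrative sketch (the period number shown is just an example value):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-none&#34;&gt;period_number = floor(unix_timestamp / 604800)   # 168h = 604800 seconds
table_name    = &#34;loki_&#34; + period_number          # for example, loki_2571&lt;/code&gt;&lt;/pre&gt;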
&lt;h3 id=&#34;table-creation&#34;&gt;Table creation&lt;/h3&gt;
&lt;p&gt;The Table Manager creates new tables slightly ahead of their start period, to
make sure that each new table is ready before the current table&amp;rsquo;s end period
is reached.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;creation_grace_period&lt;/code&gt; property in the

    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#table_manager&#34;&gt;&lt;code&gt;table_manager&lt;/code&gt;&lt;/a&gt;
configuration block defines how long before its start period a table is created.&lt;/p&gt;
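&lt;p&gt;For example, to create each new table 10 minutes before its start period (a minimal
sketch; pick a grace period suited to your deployment):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;table_manager:
  creation_grace_period: 10m&lt;/code&gt;&lt;/pre&gt;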
&lt;h2 id=&#34;retention&#34;&gt;Retention&lt;/h2&gt;
&lt;p&gt;Retention, managed by the Table Manager, is disabled by default due to its
destructive nature. You can enable data retention by explicitly enabling it
in the configuration and setting a &lt;code&gt;retention_period&lt;/code&gt; greater than zero:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;table_manager:
  retention_deletes_enabled: true
  retention_period: 336h&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The Table Manager implements retention by deleting entire tables whose
data has exceeded the &lt;code&gt;retention_period&lt;/code&gt;. This design allows fast delete
operations, at the cost of a retention granularity controlled by the
table&amp;rsquo;s &lt;code&gt;period&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Given that each table contains data for one &lt;code&gt;period&lt;/code&gt; of time and that the entire
table is deleted at once, the Table Manager keeps the most recent tables alive according to this formula:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;number_of_tables_to_keep = floor(retention_period / table_period) &amp;#43; 1&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
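&lt;p&gt;For example, with the &lt;code&gt;retention_period: 336h&lt;/code&gt; configured above and a table
&lt;code&gt;period&lt;/code&gt; of &lt;code&gt;168h&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-none&#34;&gt;number_of_tables_to_keep = floor(336h / 168h) + 1 = 2 + 1 = 3&lt;/code&gt;&lt;/pre&gt;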
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p &#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link&#34;
        href=&#34;./table-manager-retention.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload &#34;
          data-src=&#34;./table-manager-retention.png&#34;alt=&#34;retention&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;./table-manager-retention.png&#34;
            alt=&#34;retention&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;/a&gt;&lt;/figure&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Due to the internal implementation, the table
&lt;code&gt;period&lt;/code&gt; and &lt;code&gt;retention_period&lt;/code&gt; &lt;strong&gt;must&lt;/strong&gt; be multiples of &lt;code&gt;24h&lt;/code&gt; to get
the expected behavior.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;For detailed information on configuring the retention, refer to the
&lt;a href=&#34;../retention/&#34;&gt;Loki Storage Retention&lt;/a&gt;
documentation.&lt;/p&gt;
&lt;h2 id=&#34;active--inactive-tables&#34;&gt;Active / inactive tables&lt;/h2&gt;
&lt;p&gt;A table can be active or inactive.&lt;/p&gt;
&lt;p&gt;A table is considered &lt;strong&gt;active&lt;/strong&gt; if the current time is within the range:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Table start period - 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#table_manager&#34;&gt;&lt;code&gt;creation_grace_period&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Table end period &#43; max chunk age (hardcoded to &lt;code&gt;12h&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
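&lt;p&gt;In other words, a table whose time range starts at time &lt;code&gt;T&lt;/code&gt; and spans one
table &lt;code&gt;period&lt;/code&gt; is active while the current time falls inside the window:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-none&#34;&gt;T - creation_grace_period  &amp;lt;=  now  &amp;lt;=  T + period + 12h&lt;/code&gt;&lt;/pre&gt;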
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p &#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link&#34;
        href=&#34;./table-manager-active-vs-inactive-tables.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload &#34;
          data-src=&#34;./table-manager-active-vs-inactive-tables.png&#34;alt=&#34;active_vs_inactive_tables&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;./table-manager-active-vs-inactive-tables.png&#34;
            alt=&#34;active_vs_inactive_tables&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;Currently, the difference between an active and inactive table &lt;strong&gt;only applies
to the DynamoDB storage&lt;/strong&gt; settings: capacity mode (on-demand or provisioned),
read/write capacity units and autoscaling.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;DynamoDB&lt;/th&gt;
              &lt;th&gt;Active table&lt;/th&gt;
              &lt;th&gt;Inactive table&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;Capacity mode&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;enable_ondemand_throughput_mode&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;enable_inactive_throughput_on_demand_mode&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Read capacity unit&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;provisioned_read_throughput&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;inactive_read_throughput&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Write capacity unit&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;provisioned_write_throughput&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;inactive_write_throughput&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Autoscaling&lt;/td&gt;
              &lt;td&gt;Enabled (if configured)&lt;/td&gt;
              &lt;td&gt;Always disabled&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h2 id=&#34;dynamodb-provisioning&#34;&gt;DynamoDB Provisioning&lt;/h2&gt;
&lt;p&gt;When configuring DynamoDB with the Table Manager, the default &lt;a href=&#34;https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;provisioned
capacity&lt;/a&gt;
units are set to 300 for reads and 3000 for writes. The
defaults can be overridden:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;table_manager:
  index_tables_provisioning:
    provisioned_write_throughput: 10
    provisioned_read_throughput: 10
  chunk_tables_provisioning:
    provisioned_write_throughput: 10
    provisioned_read_throughput: 10&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If the Table Manager is not automatically managing DynamoDB, old data cannot easily
be erased and the index will grow indefinitely. Manual configurations should
ensure that the primary index key is set to &lt;code&gt;h&lt;/code&gt; (string) and the sort key is set
to &lt;code&gt;r&lt;/code&gt; (binary). The &amp;ldquo;period&amp;rdquo; attribute in the configuration YAML should be set
to &lt;code&gt;0&lt;/code&gt;.&lt;/p&gt;
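&lt;p&gt;As a sketch, a matching table could be created manually with the AWS CLI (the table
name &lt;code&gt;loki_index&lt;/code&gt; and the billing mode are placeholders to adapt to your setup):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;aws dynamodb create-table \
  --table-name loki_index \
  --attribute-definitions AttributeName=h,AttributeType=S AttributeName=r,AttributeType=B \
  --key-schema AttributeName=h,KeyType=HASH AttributeName=r,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST&lt;/code&gt;&lt;/pre&gt;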
&lt;h2 id=&#34;table-manager-deployment-mode&#34;&gt;Table Manager deployment mode&lt;/h2&gt;
&lt;p&gt;The Table Manager can be executed in two ways:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Implicitly executed when Loki runs in monolithic mode (single process)&lt;/li&gt;
&lt;li&gt;Explicitly executed when Loki runs in microservices mode&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;monolithic-mode&#34;&gt;Monolithic mode&lt;/h3&gt;
&lt;p&gt;When Loki runs in &lt;a href=&#34;../../../get-started/deployment-modes/&#34;&gt;monolithic mode&lt;/a&gt;,
the Table Manager is also started as a component of the entire stack.&lt;/p&gt;
&lt;h3 id=&#34;microservices-mode&#34;&gt;Microservices mode&lt;/h3&gt;
&lt;p&gt;When Loki runs in &lt;a href=&#34;../../../get-started/deployment-modes/&#34;&gt;microservices mode&lt;/a&gt;,
the Table Manager should be started as a separate service named &lt;code&gt;table-manager&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You can check out a production-grade deployment example at
&lt;a href=&#34;https://github.com/grafana/loki/blob/main/production/ksonnet/loki/table-manager.libsonnet&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;&lt;code&gt;table-manager.libsonnet&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
]]></content><description>&lt;h1 id="table-manager">Table manager&lt;/h1>
&lt;div class="admonition admonition-note">&lt;blockquote>&lt;p class="title text-uppercase">Note&lt;/p>&lt;p>Table manager is only needed if you are using a multi-store
&lt;a href="/docs/loki/v3.7.x/configure/storage/">backend&lt;/a>. If you are using either TSDB (recommended), or BoltDB (deprecated) you do not need the Table Manager.&lt;/p></description></item></channel></rss>