<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Get started with Grafana Loki on Grafana Labs</title><link>https://grafana.com/docs/loki/v3.7.x/get-started/</link><description>Recent content in Get started with Grafana Loki on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/loki/v3.7.x/get-started/index.xml" rel="self" type="application/rss+xml"/><item><title>Loki overview</title><link>https://grafana.com/docs/loki/v3.7.x/get-started/overview/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/get-started/overview/</guid><content><![CDATA[&lt;h1 id=&#34;loki-overview&#34;&gt;Loki overview&lt;/h1&gt;
&lt;p&gt;Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by &lt;a href=&#34;https://prometheus.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Prometheus&lt;/a&gt;. Loki differs from Prometheus by focusing on logs instead of metrics, and by collecting logs via push rather than pull.&lt;/p&gt;
&lt;p&gt;Loki is designed to be very cost-effective and highly scalable. Unlike other logging systems, Loki does not index the contents of the logs; it indexes only metadata about your logs, as a set of labels for each log stream.&lt;/p&gt;
&lt;p&gt;A log stream is a set of logs that share the same labels. Labels help Loki find a log stream within your data store, so having a quality set of labels is key to efficient query execution.&lt;/p&gt;
&lt;p&gt;Log data is then compressed and stored in chunks in an object store such as Amazon Simple Storage Service (S3) or Google Cloud Storage (GCS), or even, for development or proof of concept, on the filesystem. A small index and highly compressed chunks simplify the operation and significantly lower the cost of Loki.&lt;/p&gt;
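&lt;p&gt;As an illustration, here is a minimal sketch (in Python, with hypothetical label values and log lines) of the JSON payload an agent pushes to Loki&amp;rsquo;s &lt;code&gt;/loki/api/v1/push&lt;/code&gt; endpoint. A stream is a label set plus its timestamped log lines:&lt;/p&gt;

```python
import json
import time

# One stream: a label set plus its log lines.
# Timestamps are Unix epoch nanoseconds, serialized as strings.
labels = {"app": "api", "env": "prod"}  # hypothetical labels
ts = str(time.time_ns())

payload = {
    "streams": [
        {
            "stream": labels,
            "values": [
                [ts, "GET /healthz 200"],
                [ts, "GET /login 302"],
            ],
        }
    ]
}

body = json.dumps(payload)  # sent as the HTTP POST body
```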
&lt;figure&gt;&lt;img src=&#34;../loki-overview-2.png&#34; alt=&#34;Loki logging stack&#34;/&gt;&lt;figcaption&gt;&lt;strong&gt;Loki logging stack&lt;/strong&gt;&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;A typical Loki-based logging stack consists of three components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Agent&lt;/strong&gt; - An agent or client, for example &lt;a href=&#34;/docs/alloy/latest/&#34;&gt;Grafana Alloy&lt;/a&gt;. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Loki&lt;/strong&gt; - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations; for more information, refer to &lt;a href=&#34;../deployment-modes/&#34;&gt;deployment modes&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&#34;https://github.com/grafana/grafana&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Grafana&lt;/a&gt;&lt;/strong&gt; for querying and displaying log data. You can also query logs from the command line, using &lt;a href=&#34;../../query/logcli/&#34;&gt;LogCLI&lt;/a&gt; or using the Loki API directly.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;loki-features&#34;&gt;Loki features&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; - Loki is designed for scalability, and can scale from as small as running on a Raspberry Pi to ingesting petabytes a day.
In its most common deployment, “simple scalable mode”, Loki decouples requests into separate read and write paths, so that you can independently scale them, which leads to flexible large-scale installations that can quickly adapt to meet your workload at any given time.
If needed, each of the Loki components can also be run as microservices designed to run natively within Kubernetes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multi-tenancy&lt;/strong&gt; - Loki allows multiple tenants to share a single Loki instance. With multi-tenancy, the data and requests of each tenant are completely isolated from the others.
Multi-tenancy is &lt;a href=&#34;../../operations/multi-tenancy/&#34;&gt;configured&lt;/a&gt; by assigning a tenant ID in the agent.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Third-party integrations&lt;/strong&gt; - Several third-party agents (clients) have support for Loki, via plugins. This lets you keep your existing observability setup while also shipping logs to Loki.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Efficient storage&lt;/strong&gt; - Loki stores log data in highly compressed chunks.
Similarly, the Loki index, because it indexes only the set of labels, is significantly smaller than other log aggregation tools.
By leveraging object storage as the only data storage mechanism, Loki inherits the reliability and stability of the underlying object store. It also capitalizes on both the cost efficiency and operational simplicity of object storage over other storage mechanisms like locally attached solid state drives (SSD) and hard disk drives (HDD).&lt;br /&gt;
The compressed chunks, smaller index, and use of low-cost object storage, make Loki less expensive to operate.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;LogQL, the Loki query language&lt;/strong&gt; - &lt;a href=&#34;../../query/&#34;&gt;LogQL&lt;/a&gt; is the query language for Loki.  Users who are already familiar with the Prometheus query language, &lt;a href=&#34;https://prometheus.io/docs/prometheus/latest/querying/basics/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;PromQL&lt;/a&gt;, will find LogQL familiar and flexible for generating queries against the logs.
The language also facilitates the generation of metrics from log data,
a powerful feature that goes well beyond log aggregation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Alerting&lt;/strong&gt; - Loki includes a component called the &lt;a href=&#34;../../alert/&#34;&gt;ruler&lt;/a&gt;, which can continually evaluate queries against your logs, and perform an action based on the result. This allows you to monitor your logs for anomalies or events. Loki integrates with &lt;a href=&#34;https://prometheus.io/docs/alerting/latest/alertmanager/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Prometheus Alertmanager&lt;/a&gt;, or the &lt;a href=&#34;/docs/grafana/latest/alerting/&#34;&gt;alert manager&lt;/a&gt; within Grafana.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Grafana integration&lt;/strong&gt; - Loki integrates with Grafana, Mimir, and Tempo, providing a complete observability stack, and seamless correlation between logs, metrics and traces.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="loki-overview">Loki overview&lt;/h1>
&lt;p>Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by &lt;a href="https://prometheus.io/" target="_blank" rel="noopener noreferrer">Prometheus&lt;/a>. Loki differs from Prometheus by focusing on logs instead of metrics, and collecting logs via push, instead of pull.&lt;/p></description></item><item><title>Quick Start Loki</title><link>https://grafana.com/docs/loki/v3.7.x/get-started/quick-start/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/get-started/quick-start/</guid><content><![CDATA[&lt;h1 id=&#34;quick-start-loki&#34;&gt;Quick Start Loki&lt;/h1&gt;
&lt;p&gt;This section provides a collection of tutorials to help you get started with Loki. Our recommendation is to do the tutorials in this order:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
    &lt;a href=&#34;/docs/loki/v3.7.x/get-started/quick-start/quick-start/&#34;&gt;Quick Start&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
    &lt;a href=&#34;/docs/loki/v3.7.x/get-started/quick-start/tutorial/&#34;&gt;Loki tutorial&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
]]></content><description>&lt;h1 id="quick-start-loki">Quick Start Loki&lt;/h1>
&lt;p>This section provides a collection of tutorials to help you get started with Loki. Our recommendation is to do the tutorials in this order:&lt;/p></description></item><item><title>Loki architecture</title><link>https://grafana.com/docs/loki/v3.7.x/get-started/architecture/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/get-started/architecture/</guid><content><![CDATA[&lt;h1 id=&#34;loki-architecture&#34;&gt;Loki architecture&lt;/h1&gt;
&lt;p&gt;Grafana Loki has a microservices-based architecture and is designed to run as a horizontally scalable, distributed system.
The system has multiple components that can run separately and in parallel. The
Grafana Loki design compiles the code for all components into a single binary or Docker image.
The &lt;code&gt;-target&lt;/code&gt; command-line flag controls which components the binary runs as.&lt;/p&gt;
&lt;p&gt;To get started easily, run Grafana Loki in &amp;ldquo;single binary&amp;rdquo; mode with all components running simultaneously in one process, or in &amp;ldquo;simple scalable deployment&amp;rdquo; mode, which groups components into read, write, and backend parts.&lt;/p&gt;
&lt;p&gt;Grafana Loki is designed to make it easy to redeploy a cluster under a different mode as your needs change, with minimal or no configuration changes.&lt;/p&gt;
&lt;p&gt;For more information, refer to &lt;a href=&#34;../deployment-modes/&#34;&gt;Deployment modes&lt;/a&gt; and &lt;a href=&#34;../components/&#34;&gt;Components&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;../loki_architecture_components.svg&#34;
  alt=&#34;Loki components&#34;/&gt;&lt;/p&gt;
&lt;h2 id=&#34;storage&#34;&gt;Storage&lt;/h2&gt;
&lt;p&gt;Loki stores all data in a single object storage backend, such as Amazon Simple Storage Service (S3), Google Cloud Storage (GCS), Azure Blob Storage, among others.
This mode uses an adapter called the &lt;strong&gt;index shipper&lt;/strong&gt; (or &lt;strong&gt;shipper&lt;/strong&gt; for short) to store index (TSDB or BoltDB) files in object storage, the same way chunk files are stored.
This mode of operation became generally available with Loki 2.0 and is fast, cost-effective, and simple. It is where all current and future development lies.&lt;/p&gt;
&lt;p&gt;Prior to 2.0, Loki had different storage backends for indexes and chunks. For more information, refer to &lt;a href=&#34;../../operations/storage/legacy-storage/&#34;&gt;Legacy storage&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;data-format&#34;&gt;Data format&lt;/h3&gt;
&lt;p&gt;Grafana Loki has two main file types: &lt;strong&gt;index&lt;/strong&gt; and &lt;strong&gt;chunks&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;a href=&#34;#index-format&#34;&gt;&lt;strong&gt;index&lt;/strong&gt;&lt;/a&gt; is a table of contents of where to find logs for a specific set of labels.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&#34;#chunk-format&#34;&gt;&lt;strong&gt;chunk&lt;/strong&gt;&lt;/a&gt; is a container for log entries for a specific set of labels.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;../chunks_diagram.png&#34;
  alt=&#34;Loki data format: chunks and indexes&#34;/&gt;&lt;/p&gt;
&lt;p&gt;The diagram above gives a high-level overview of the data stored in the chunk and the data stored in the index.&lt;/p&gt;
&lt;h4 id=&#34;index-format&#34;&gt;Index format&lt;/h4&gt;
&lt;p&gt;Two index formats are currently supported as a single store with the index shipper:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../operations/storage/tsdb/&#34;&gt;TSDB&lt;/a&gt; (recommended)&lt;/p&gt;
&lt;p&gt;Time Series Database (TSDB for short) is an &lt;a href=&#34;https://github.com/prometheus/prometheus/blob/main/tsdb/docs/format/index.md&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;index format&lt;/a&gt; originally developed by the maintainers of &lt;a href=&#34;https://github.com/prometheus/prometheus&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Prometheus&lt;/a&gt; for time series (metric) data.&lt;/p&gt;
&lt;p&gt;It is extensible and has many advantages over the deprecated BoltDB index.
New storage features in Loki are solely available when using TSDB.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../operations/storage/boltdb-shipper/&#34;&gt;BoltDB&lt;/a&gt; (deprecated)&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/boltdb/bolt&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Bolt&lt;/a&gt; is a low-level, transactional key-value store written in Go.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;chunk-format&#34;&gt;Chunk format&lt;/h4&gt;
&lt;p&gt;A chunk is a container for the log lines of a stream (a unique set of labels) for a specific time range.&lt;/p&gt;
&lt;p&gt;The following ASCII diagram describes the chunk format in detail.&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;----------------------------------------------------------------------------
|                        |                       |                         |
|     MagicNumber(4b)    |     version(1b)       |      encoding (1b)      |
|                        |                       |                         |
----------------------------------------------------------------------------
|                      #structuredMetadata (uvarint)                       |
----------------------------------------------------------------------------
|      len(label-1) (uvarint)      |          label-1 (bytes)              |
----------------------------------------------------------------------------
|      len(label-2) (uvarint)      |          label-2 (bytes)              |
----------------------------------------------------------------------------
|      len(label-n) (uvarint)      |          label-n (bytes)              |
----------------------------------------------------------------------------
|                      checksum(from #structuredMetadata)                  |
----------------------------------------------------------------------------
|           block-1 bytes          |           checksum (4b)               |
----------------------------------------------------------------------------
|           block-2 bytes          |           checksum (4b)               |
----------------------------------------------------------------------------
|           block-n bytes          |           checksum (4b)               |
----------------------------------------------------------------------------
|                           #blocks (uvarint)                              |
----------------------------------------------------------------------------
| #entries(uvarint) | mint, maxt (varint)  | offset, len (uvarint)         |
----------------------------------------------------------------------------
| #entries(uvarint) | mint, maxt (varint)  | offset, len (uvarint)         |
----------------------------------------------------------------------------
| #entries(uvarint) | mint, maxt (varint)  | offset, len (uvarint)         |
----------------------------------------------------------------------------
| #entries(uvarint) | mint, maxt (varint)  | offset, len (uvarint)         |
----------------------------------------------------------------------------
|                          checksum(from #blocks)                          |
----------------------------------------------------------------------------
| #structuredMetadata len (uvarint) | #structuredMetadata offset (uvarint) |
----------------------------------------------------------------------------
|     #blocks len (uvarint)         |       #blocks offset (uvarint)       |
----------------------------------------------------------------------------&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;mint&lt;/code&gt; and &lt;code&gt;maxt&lt;/code&gt; describe the minimum and maximum Unix nanosecond timestamp,
respectively.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;structuredMetadata&lt;/code&gt; section stores non-repeated strings. It is used to store label names and label values from
&lt;a href=&#34;../labels/structured-metadata/&#34;&gt;structured metadata&lt;/a&gt;.
Note that the label strings and lengths within the &lt;code&gt;structuredMetadata&lt;/code&gt; section are stored compressed.&lt;/p&gt;
&lt;h4 id=&#34;block-format&#34;&gt;Block format&lt;/h4&gt;
&lt;p&gt;A block comprises a series of entries, each of which is an individual log line.
Note that the bytes of a block are stored compressed. The following is their form when uncompressed:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;-----------------------------------------------------------------------------------------------------------------------------------------------
|  ts (varint)  |  len (uvarint)  |  log-1 bytes  |  len(from #symbols)  |  #symbols (uvarint)  |  symbol-1 (uvarint)  | symbol-n*2 (uvarint) |
-----------------------------------------------------------------------------------------------------------------------------------------------
|  ts (varint)  |  len (uvarint)  |  log-2 bytes  |  len(from #symbols)  |  #symbols (uvarint)  |  symbol-1 (uvarint)  | symbol-n*2 (uvarint) |
-----------------------------------------------------------------------------------------------------------------------------------------------
|  ts (varint)  |  len (uvarint)  |  log-3 bytes  |  len(from #symbols)  |  #symbols (uvarint)  |  symbol-1 (uvarint)  | symbol-n*2 (uvarint) |
-----------------------------------------------------------------------------------------------------------------------------------------------
|  ts (varint)  |  len (uvarint)  |  log-n bytes  |  len(from #symbols)  |  #symbols (uvarint)  |  symbol-1 (uvarint)  | symbol-n*2 (uvarint) |
-----------------------------------------------------------------------------------------------------------------------------------------------&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;ts&lt;/code&gt; is the Unix nanosecond timestamp of the log entry, while &lt;code&gt;len&lt;/code&gt; is the length in
bytes of the log entry.&lt;/p&gt;
&lt;p&gt;Symbols store references to the actual strings containing label names and values in the
&lt;code&gt;structuredMetadata&lt;/code&gt; section of the chunk.&lt;/p&gt;
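&lt;p&gt;The &lt;code&gt;uvarint&lt;/code&gt; fields in the formats above are standard base-128 varints, where the high bit of each byte signals whether another byte follows. A minimal sketch of the encoding (illustrative, not Loki&amp;rsquo;s source code):&lt;/p&gt;

```python
def encode_uvarint(n):
    """Encode a non-negative integer as a base-128 uvarint (7 bits per byte, least-significant group first)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_uvarint(buf):
    """Decode a uvarint from the start of buf; return (value, bytes consumed)."""
    result = shift = 0
    for i, b in enumerate(buf):
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, i + 1
        shift += 7
    raise ValueError("truncated uvarint")
```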
&lt;h2 id=&#34;write-path&#34;&gt;Write path&lt;/h2&gt;
&lt;p&gt;On a high level, the write path in Loki works as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The distributor receives an HTTP POST request with streams and log lines.&lt;/li&gt;
&lt;li&gt;The distributor hashes each stream contained in the request so it can determine, using the consistent hash ring, the ingester instance to which the stream should be sent.&lt;/li&gt;
&lt;li&gt;The distributor sends each stream to the appropriate ingester and its replicas (based on the configured replication factor).&lt;/li&gt;
&lt;li&gt;The ingester receives the stream with log lines and creates a chunk or appends to an existing chunk for the stream&amp;rsquo;s data.
A chunk is unique per tenant and per label set.&lt;/li&gt;
&lt;li&gt;The ingester acknowledges the write.&lt;/li&gt;
&lt;li&gt;The distributor waits for a majority (quorum) of the ingesters to acknowledge their writes.&lt;/li&gt;
&lt;li&gt;The distributor responds with a success (2xx status code) if it received at least a quorum of acknowledged writes,
or with an error (4xx or 5xx status code) if the write operations failed.&lt;/li&gt;
&lt;/ol&gt;
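&lt;p&gt;The steps above can be sketched as follows; the hashing scheme and quorum arithmetic here are illustrative, not Loki&amp;rsquo;s actual implementation:&lt;/p&gt;

```python
import hashlib

def pick_ingesters(tenant_id, labels, ingesters, replication_factor=3):
    """Hash (tenant, sorted label set) onto the ring, then walk it to collect replicas."""
    key = tenant_id + "/" + ",".join(sorted(f"{k}={v}" for k, v in labels.items()))
    start = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(ingesters)
    return [ingesters[(start + i) % len(ingesters)] for i in range(replication_factor)]

def write_succeeded(acks, replication_factor=3):
    """The distributor requires a majority (quorum) of ingester acknowledgements."""
    return acks >= replication_factor // 2 + 1

ingesters = ["ingester-0", "ingester-1", "ingester-2", "ingester-3"]
replicas = pick_ingesters("tenant-a", {"app": "api"}, ingesters)
```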
&lt;p&gt;Refer to &lt;a href=&#34;../components/&#34;&gt;Components&lt;/a&gt; for a more detailed description of the components involved in the write path.&lt;/p&gt;
&lt;h2 id=&#34;read-path&#34;&gt;Read path&lt;/h2&gt;
&lt;p&gt;On a high level, the read path in Loki works as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The query frontend receives an HTTP GET request with a LogQL query.&lt;/li&gt;
&lt;li&gt;The query frontend splits the query into sub-queries and passes them to the query scheduler.&lt;/li&gt;
&lt;li&gt;The querier pulls sub-queries from the scheduler.&lt;/li&gt;
&lt;li&gt;The querier passes the query to all ingesters for in-memory data.&lt;/li&gt;
&lt;li&gt;The ingesters return in-memory data matching the query, if any.&lt;/li&gt;
&lt;li&gt;The querier lazily loads data from the backing store and runs the query against it if ingesters returned no or insufficient data.&lt;/li&gt;
&lt;li&gt;The querier iterates over all received data, deduplicates it, and returns the result of the sub-query to the query frontend.&lt;/li&gt;
&lt;li&gt;The query frontend waits for all sub-queries of a query to be finished and returned by the queriers.&lt;/li&gt;
&lt;li&gt;The query frontend merges the individual results into a final result and returns it to the client.&lt;/li&gt;
&lt;/ol&gt;
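&lt;p&gt;The splitting in step 2 can be pictured as cutting the query&amp;rsquo;s time range into fixed intervals, each of which becomes a sub-query; a sketch (the interval used is hypothetical, and the real split interval is configurable):&lt;/p&gt;

```python
def split_time_range(start_ns, end_ns, interval_ns):
    """Split [start_ns, end_ns) into sub-ranges of at most interval_ns each."""
    ranges = []
    t = start_ns
    while t < end_ns:
        ranges.append((t, min(t + interval_ns, end_ns)))
        t += interval_ns
    return ranges
```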
&lt;p&gt;Refer to &lt;a href=&#34;../components/&#34;&gt;Components&lt;/a&gt; for a more detailed description of the components involved in the read path.&lt;/p&gt;
&lt;h2 id=&#34;multi-tenancy&#34;&gt;Multi-tenancy&lt;/h2&gt;
&lt;p&gt;All data, both in memory and in long-term storage, may be partitioned by a
tenant ID, pulled from the &lt;code&gt;X-Scope-OrgID&lt;/code&gt; HTTP header in the request when Grafana Loki
is running in multi-tenant mode. When Loki is &lt;strong&gt;not&lt;/strong&gt; in multi-tenant mode, the
header is ignored and the tenant ID is set to &lt;code&gt;fake&lt;/code&gt;, which will appear in the
index and in stored chunks.&lt;/p&gt;
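&lt;p&gt;For example, a client targeting a multi-tenant Loki sets the tenant ID on each request; a sketch (the header name is real, the tenant ID is hypothetical):&lt;/p&gt;

```python
def push_headers(tenant_id=None):
    """Build HTTP headers for a Loki push request.

    Without a tenant ID the header is omitted; a Loki instance not running
    in multi-tenant mode ignores it anyway and files data under tenant "fake".
    """
    headers = {"Content-Type": "application/json"}
    if tenant_id is not None:
        headers["X-Scope-OrgID"] = tenant_id
    return headers
```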
]]></content><description>&lt;h1 id="loki-architecture">Loki architecture&lt;/h1>
&lt;p>Grafana Loki has a microservices-based architecture and is designed to run as a horizontally scalable, distributed system.
The system has multiple components that can run separately and in parallel. The
Grafana Loki design compiles the code for all components into a single binary or Docker image.
The &lt;code>-target&lt;/code> command-line flag controls which component(s) that binary will behave as.&lt;/p></description></item><item><title>Loki components</title><link>https://grafana.com/docs/loki/v3.7.x/get-started/components/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/get-started/components/</guid><content><![CDATA[&lt;h1 id=&#34;loki-components&#34;&gt;Loki components&lt;/h1&gt;
&lt;iframe width=&#34;560&#34; height=&#34;315&#34; src=&#39;https://www.youtube.com/embed/_hv4i84Z68s&#39; title=&#34;YouTube video player&#34; frameborder=&#34;0&#34; allow=&#34;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&#34; referrerpolicy=&#34;strict-origin-when-cross-origin&#34; allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;Loki is a modular system that contains many components that can either be run together (in &amp;ldquo;single binary&amp;rdquo; mode with target &lt;code&gt;all&lt;/code&gt;),
in logical groups (in &amp;ldquo;simple scalable deployment&amp;rdquo; mode with targets &lt;code&gt;read&lt;/code&gt;, &lt;code&gt;write&lt;/code&gt;, &lt;code&gt;backend&lt;/code&gt;), or individually (in &amp;ldquo;microservice&amp;rdquo; mode).
For more information, refer to
    &lt;a href=&#34;/docs/loki/v3.7.x/get-started/deployment-modes/&#34;&gt;Deployment modes&lt;/a&gt;.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Component&lt;/th&gt;
              &lt;th&gt;&lt;em&gt;individual&lt;/em&gt;&lt;/th&gt;
              &lt;th&gt;&lt;code&gt;all&lt;/code&gt;&lt;/th&gt;
              &lt;th&gt;&lt;code&gt;read&lt;/code&gt;&lt;/th&gt;
              &lt;th&gt;&lt;code&gt;write&lt;/code&gt;&lt;/th&gt;
              &lt;th&gt;&lt;code&gt;backend&lt;/code&gt;&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#distributor&#34;&gt;Distributor&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#ingester&#34;&gt;Ingester&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#query-frontend&#34;&gt;Query Frontend&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#query-scheduler&#34;&gt;Query Scheduler&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#querier&#34;&gt;Querier&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#index-gateway&#34;&gt;Index Gateway&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#compactor&#34;&gt;Compactor&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#ruler&#34;&gt;Ruler&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#pattern-ingester&#34;&gt;Pattern ingester&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#bloom-planner&#34;&gt;Bloom Planner (Experimental)&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#bloom-builder&#34;&gt;Bloom Builder (Experimental)&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;a href=&#34;#bloom-gateway&#34;&gt;Bloom Gateway (Experimental)&lt;/a&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;&lt;/td&gt;
              &lt;td&gt;x&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;This page describes the responsibilities of each of these components.&lt;/p&gt;
&lt;h2 id=&#34;distributor&#34;&gt;Distributor&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;distributor&lt;/strong&gt; service is responsible for handling incoming push requests from
clients. It&amp;rsquo;s the first step in the write path for log data. Once the
distributor receives a set of streams in an HTTP request, each stream is validated for correctness
and to ensure that it is within the configured tenant (or global) limits. Each valid stream
is then sent to &lt;code&gt;n&lt;/code&gt; &lt;a href=&#34;#ingester&#34;&gt;ingesters&lt;/a&gt; in parallel, where &lt;code&gt;n&lt;/code&gt; is the &lt;a href=&#34;#replication-factor&#34;&gt;replication factor&lt;/a&gt; for data.
The distributor determines the ingesters to which it sends a stream using &lt;a href=&#34;#hashing&#34;&gt;consistent hashing&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A load balancer must sit in front of the distributor to properly balance incoming traffic across distributor instances.
In Kubernetes, the service load balancer fills this role.&lt;/p&gt;
&lt;p&gt;The distributor is a stateless component. This makes it easy to scale and offload as much work as possible from the ingesters, which are the most critical component on the write path.
The ability to independently scale these validation operations means that Loki can also protect itself against denial of service attacks that could otherwise overload the ingesters.
It also allows us to fan-out writes according to the &lt;a href=&#34;#replication-factor&#34;&gt;replication factor&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;validation&#34;&gt;Validation&lt;/h3&gt;
&lt;p&gt;The first step the distributor takes is to ensure that all incoming data conforms to specification. This includes checking that the labels are valid Prometheus labels, that the timestamps aren&amp;rsquo;t too old or too new, and that the log lines aren&amp;rsquo;t too long.&lt;/p&gt;
&lt;h3 id=&#34;preprocessing&#34;&gt;Preprocessing&lt;/h3&gt;
&lt;p&gt;Currently, the only way the distributor mutates incoming data is by normalizing labels. What this means is making &lt;code&gt;{foo=&amp;quot;bar&amp;quot;, bazz=&amp;quot;buzz&amp;quot;}&lt;/code&gt; equivalent to &lt;code&gt;{bazz=&amp;quot;buzz&amp;quot;, foo=&amp;quot;bar&amp;quot;}&lt;/code&gt;, or in other words, sorting the labels. This allows Loki to cache and hash them deterministically.&lt;/p&gt;
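&lt;p&gt;The effect of this normalization can be sketched in Python (a hypothetical illustration, not Loki&amp;rsquo;s actual code; the &lt;code&gt;normalize_labels&lt;/code&gt; helper is invented for this example):&lt;/p&gt;

```python
import hashlib

def normalize_labels(labels: dict) -> str:
    # Sort label names so that {foo="bar", bazz="buzz"} and
    # {bazz="buzz", foo="bar"} serialize to the same canonical string.
    return "{" + ", ".join(f'{k}="{labels[k]}"' for k in sorted(labels)) + "}"

a = normalize_labels({"foo": "bar", "bazz": "buzz"})
b = normalize_labels({"bazz": "buzz", "foo": "bar"})
assert a == b  # identical canonical form, regardless of input order

# The canonical form can now be hashed and cached deterministically:
stream_hash = hashlib.sha256(a.encode()).hexdigest()
```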
&lt;h3 id=&#34;rate-limiting&#34;&gt;Rate limiting&lt;/h3&gt;
&lt;p&gt;The distributor can also rate-limit incoming logs based on the maximum data ingest rate per tenant. It does this by checking a per-tenant limit and dividing it by the current number of distributors. This allows the rate limit to be specified per tenant at the cluster level and enables the distributors to be scaled up or down with the per-distributor limit adjusting accordingly. For instance, say we have 10 distributors and tenant A has a 10MB/s rate limit. Each distributor will allow up to 1MB/s before limiting. Now, say another large tenant joins the cluster and we need to spin up 10 more distributors. The now 20 distributors will adjust their rate limits for tenant A to &lt;code&gt;(10MB/s / 20 distributors) = 500KB/s&lt;/code&gt;. This is how global limits allow much simpler and safer operation of the Loki cluster.&lt;/p&gt;
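&lt;p&gt;The arithmetic above can be expressed as a small sketch (the helper name and the &lt;code&gt;MB&lt;/code&gt; constant are invented for this example):&lt;/p&gt;

```python
def per_distributor_limit(tenant_limit_bytes_per_s, num_distributors):
    # Each distributor enforces an equal share of the tenant's global
    # ingest-rate limit, so the global limit holds in aggregate.
    return tenant_limit_bytes_per_s / num_distributors

MB = 1_000_000  # decimal megabyte, as used in the example above

# 10 distributors, 10MB/s tenant limit -> 1MB/s per distributor
assert per_distributor_limit(10 * MB, 10) == 1 * MB
# Scaling to 20 distributors halves each share to 500KB/s
assert per_distributor_limit(10 * MB, 20) == 500_000
```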


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;The distributor uses the &lt;code&gt;ring&lt;/code&gt; component under the hood to register itself amongst its peers and get the total number of active distributors. This is a different &amp;ldquo;key&amp;rdquo; than the ingesters use in the ring and comes from the distributor&amp;rsquo;s own 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#distributor&#34;&gt;ring configuration&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h3 id=&#34;forwarding&#34;&gt;Forwarding&lt;/h3&gt;
&lt;p&gt;Once the distributor has performed all of its validation duties, it forwards data to the ingester component which is ultimately responsible for acknowledging the write operation.&lt;/p&gt;
&lt;h4 id=&#34;replication-factor&#34;&gt;Replication factor&lt;/h4&gt;
&lt;p&gt;In order to mitigate the chance of &lt;em&gt;losing&lt;/em&gt; data on any single ingester, the distributor will forward writes to a &lt;em&gt;replication factor&lt;/em&gt; of them. Generally, the replication factor is &lt;code&gt;3&lt;/code&gt;. Replication allows for ingester restarts and rollouts without failing writes and adds additional protection from data loss in some scenarios. Loosely, for each label set (called a &lt;em&gt;stream&lt;/em&gt;) that is pushed to a distributor, it will hash the labels and use the resulting value to look up &lt;code&gt;replication_factor&lt;/code&gt; ingesters in the &lt;code&gt;ring&lt;/code&gt; (which is a subcomponent that exposes a &lt;a href=&#34;https://en.wikipedia.org/wiki/Distributed_hash_table&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;distributed hash table&lt;/a&gt;). It will then try to write the same data to all of them. This will generate an error if fewer than a &lt;em&gt;quorum&lt;/em&gt; of writes succeed. A quorum is defined as &lt;code&gt;floor( replication_factor / 2 ) &#43; 1&lt;/code&gt;. So, for our &lt;code&gt;replication_factor&lt;/code&gt; of &lt;code&gt;3&lt;/code&gt;, we require that two writes succeed. If fewer than two writes succeed, the distributor returns an error and the write operation will be retried.&lt;/p&gt;
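&lt;p&gt;The quorum rule can be stated directly in code (a minimal sketch; the function names are invented for this example):&lt;/p&gt;

```python
import math

def quorum(replication_factor):
    # floor(replication_factor / 2) + 1, as defined above
    return math.floor(replication_factor / 2) + 1

def write_succeeds(successful_writes, replication_factor):
    # A push is acknowledged only if at least a quorum of the
    # replicated writes succeeded.
    return successful_writes >= quorum(replication_factor)

assert quorum(3) == 2          # RF 3 requires two successful writes
assert quorum(5) == 3
assert write_succeeds(2, 3)    # 2 of 3 is enough
assert not write_succeeds(1, 3)  # 1 of 3 is an error; the write is retried
```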


&lt;div class=&#34;admonition admonition-caution&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Caution&lt;/p&gt;&lt;p&gt;If a write is acknowledged by 2 out of 3 ingesters, we can tolerate the loss of one ingester but not two, as this would result in data loss.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The replication factor is not the only thing that prevents data loss, though, and its main purpose is to allow writes to continue uninterrupted during rollouts and restarts. The &lt;a href=&#34;#ingester&#34;&gt;ingester component&lt;/a&gt; now includes a &lt;a href=&#34;https://en.wikipedia.org/wiki/Write-ahead_logging&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;write ahead log&lt;/a&gt; (WAL) which persists incoming writes to disk to ensure they are not lost as long as the disk isn&amp;rsquo;t corrupted. The complementary nature of the replication factor and the WAL ensures data isn&amp;rsquo;t lost unless there are significant failures in both mechanisms (that is, multiple ingesters die and lose or corrupt their disks).&lt;/p&gt;
&lt;h3 id=&#34;hashing&#34;&gt;Hashing&lt;/h3&gt;
&lt;p&gt;Distributors use consistent hashing in conjunction with a configurable
replication factor to determine which instances of the ingester service should
receive a given stream.&lt;/p&gt;
&lt;p&gt;A stream is a set of logs associated with a tenant and a unique label set. The
stream is hashed using both the tenant ID and the label set and then the hash is
used to find the ingesters to send the stream to.&lt;/p&gt;
&lt;p&gt;A hash ring, maintained by peer-to-peer communication using the &lt;a href=&#34;https://github.com/hashicorp/memberlist&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Memberlist&lt;/a&gt; protocol,
or stored in a Key-Value store such as &lt;a href=&#34;https://www.consul.io&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Consul&lt;/a&gt; is used to achieve
consistent hashing; all &lt;a href=&#34;#ingester&#34;&gt;ingesters&lt;/a&gt; register themselves into the hash
ring with a set of tokens they own. Each token is a random unsigned 32-bit
number. Along with a set of tokens, ingesters register their state into the
hash ring. Ingesters in the &lt;code&gt;JOINING&lt;/code&gt; and &lt;code&gt;ACTIVE&lt;/code&gt; states may receive write requests, while
ingesters in the &lt;code&gt;ACTIVE&lt;/code&gt; and &lt;code&gt;LEAVING&lt;/code&gt; states may receive read requests. When doing a hash
lookup, distributors only use tokens for ingesters that are in the appropriate
state for the request.&lt;/p&gt;
&lt;p&gt;To do the hash lookup, distributors find the smallest appropriate token whose
value is larger than the hash of the stream. When the replication factor is
larger than 1, the next subsequent tokens (clockwise in the ring) that belong to
different ingesters will also be included in the result.&lt;/p&gt;
&lt;p&gt;The effect of this hash setup is that each token that an ingester owns is
responsible for a range of hashes. If there are three tokens with values 0, 25,
and 50, then a hash of 3 would be given to the ingester that owns the token 25;
the ingester owning token 25 is responsible for the hash range of 1-25.&lt;/p&gt;
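&lt;p&gt;The lookup described above can be sketched as follows (a simplified, hypothetical model of the ring; Loki&amp;rsquo;s real ring also handles state filtering and zone awareness):&lt;/p&gt;

```python
import bisect

def lookup(ring, stream_hash, replication_factor):
    """ring: sorted list of (token, ingester_id) pairs.
    Find the smallest token larger than the hash, then walk clockwise,
    collecting tokens owned by distinct ingesters."""
    tokens = [t for t, _ in ring]
    start = bisect.bisect_right(tokens, stream_hash) % len(ring)
    distinct = len({ing for _, ing in ring})
    owners = []
    i = start
    while len(owners) < min(replication_factor, distinct):
        _, ingester = ring[i % len(ring)]
        if ingester not in owners:  # skip extra tokens of the same ingester
            owners.append(ingester)
        i += 1
    return owners

# Tokens 0, 25, and 50, as in the example above:
ring = [(0, "ingester-a"), (25, "ingester-b"), (50, "ingester-c")]
assert lookup(ring, 3, 1) == ["ingester-b"]   # hash 3 falls in range 1-25
assert lookup(ring, 3, 2) == ["ingester-b", "ingester-c"]  # RF 2 adds the next owner
assert lookup(ring, 60, 1) == ["ingester-a"]  # hashes past the last token wrap around
```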
&lt;h3 id=&#34;quorum-consistency&#34;&gt;Quorum consistency&lt;/h3&gt;
&lt;p&gt;Since all distributors share access to the same hash ring, write requests can be
sent to any distributor.&lt;/p&gt;
&lt;p&gt;To ensure consistent query results, Loki uses
&lt;a href=&#34;https://www.cs.princeton.edu/courses/archive/fall15/cos518/studpres/dynamo.pdf&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Dynamo-style&lt;/a&gt;
quorum consistency on reads and writes. This means that the distributor waits
for a positive response from at least one half plus one of the ingesters it sends
the sample to before responding to the client that initiated the send.&lt;/p&gt;
&lt;h2 id=&#34;ingester&#34;&gt;Ingester&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;ingester&lt;/strong&gt; service is responsible for persisting data and shipping it to long-term
storage (Amazon Simple Storage Service, Google Cloud Storage, Azure Blob Storage, etc.)
on the write path, and returning recently ingested, in-memory log data for queries on the read path.&lt;/p&gt;
&lt;p&gt;Ingesters contain a &lt;em&gt;lifecycler&lt;/em&gt; which manages the lifecycle of an ingester in
the hash ring. Each ingester has a state of either &lt;code&gt;PENDING&lt;/code&gt;, &lt;code&gt;JOINING&lt;/code&gt;,
&lt;code&gt;ACTIVE&lt;/code&gt;, &lt;code&gt;LEAVING&lt;/code&gt;, or &lt;code&gt;UNHEALTHY&lt;/code&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;PENDING&lt;/code&gt; is an Ingester&amp;rsquo;s state when it is waiting for a &lt;a href=&#34;#handoff&#34;&gt;handoff&lt;/a&gt; from
another ingester that is &lt;code&gt;LEAVING&lt;/code&gt;. This only applies for legacy deployment modes.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Handoff is a deprecated behavior mainly used in stateless deployments of ingesters, which are discouraged. Instead, it&amp;rsquo;s recommended to use a stateful deployment model together with the 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/wal/&#34;&gt;write ahead log&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;JOINING&lt;/code&gt; is an Ingester&amp;rsquo;s state when it is currently inserting its tokens
into the ring and initializing itself. It may receive write requests for
tokens it owns.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;ACTIVE&lt;/code&gt; is an Ingester&amp;rsquo;s state when it is fully initialized. It may receive
both write and read requests for tokens it owns.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;LEAVING&lt;/code&gt; is an Ingester&amp;rsquo;s state when it is shutting down. It may receive
read requests for data it still has in memory.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;UNHEALTHY&lt;/code&gt; is an Ingester&amp;rsquo;s state when it has failed to heartbeat.
&lt;code&gt;UNHEALTHY&lt;/code&gt; is set by the distributor when it periodically checks the ring.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Each log stream that an ingester receives is built up into a set of many
&amp;ldquo;chunks&amp;rdquo; in memory and flushed to the backing store at a configurable
interval.&lt;/p&gt;
&lt;p&gt;Chunks are compressed and marked as read-only when:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The current chunk has reached capacity (a configurable value).&lt;/li&gt;
&lt;li&gt;Too much time has passed without the current chunk being updated.&lt;/li&gt;
&lt;li&gt;A flush occurs.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Whenever a chunk is compressed and marked as read-only, a writable chunk takes
its place.&lt;/p&gt;
&lt;p&gt;If an ingester process crashes or exits abruptly, all the data that has not yet
been flushed will be lost. Loki is usually configured with a replication factor
(usually 3) so that multiple ingesters hold a copy of each log, mitigating this risk.&lt;/p&gt;
&lt;p&gt;When a flush occurs to a persistent storage provider, the chunk is hashed based
on its tenant, labels, and contents. This means that multiple ingesters with the
same copy of data will not write the same data to the backing store twice, but
if any write failed to one of the replicas, multiple differing chunk objects
will be created in the backing store. See &lt;a href=&#34;#querier&#34;&gt;Querier&lt;/a&gt; for how data is
deduplicated.&lt;/p&gt;
&lt;h3 id=&#34;timestamp-ordering&#34;&gt;Timestamp Ordering&lt;/h3&gt;
&lt;p&gt;Loki is configured to 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#accept-out-of-order-writes&#34;&gt;accept out-of-order writes&lt;/a&gt; by default.&lt;/p&gt;
&lt;p&gt;When not configured to accept out-of-order writes, the ingester validates that ingested log lines arrive in
timestamp-ascending order; that is, each log line must have a timestamp later than the log line
before it. When an ingester receives a log line that doesn&amp;rsquo;t follow the expected order, the line
is rejected and an error is returned to the user.&lt;/p&gt;
&lt;p&gt;Logs from each unique set of labels are built up into &amp;ldquo;chunks&amp;rdquo; in memory and
then flushed to the backing storage backend.&lt;/p&gt;
&lt;p&gt;If an ingester process crashes or exits abruptly, all the data that has not yet
been flushed could be lost. Loki is usually configured with a 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/wal/&#34;&gt;Write Ahead Log&lt;/a&gt; which can be &lt;em&gt;replayed&lt;/em&gt; on restart as well as with a &lt;code&gt;replication_factor&lt;/code&gt; (usually 3) of each log to mitigate this risk.&lt;/p&gt;
&lt;p&gt;When not configured to accept out-of-order writes,
all lines pushed to Loki for a given stream (unique combination of
labels) must have a newer timestamp than the line received before it. There are,
however, two cases for handling logs for the same stream with identical
nanosecond timestamps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;If the incoming line exactly matches the previously received line (matching
both the previous timestamp and log text), the incoming line will be treated
as an exact duplicate and ignored.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the incoming line has the same timestamp as the previous line but
different content, the log line is accepted. This means it is possible to
have two different log lines for the same timestamp.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
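&lt;p&gt;The two rules above, together with the ordering check, can be sketched as a small decision function (a hypothetical simplification of ordered-writes mode, invented for this example):&lt;/p&gt;

```python
def classify_line(prev, ts, line):
    """prev: (timestamp, line) of the last accepted entry for the stream,
    or None. Returns what happens to the incoming line."""
    if prev is None:
        return "accept"
    prev_ts, prev_line = prev
    if ts > prev_ts:
        return "accept"                      # newer timestamp: normal case
    if ts == prev_ts:
        if line == prev_line:
            return "ignore (exact duplicate)"  # rule 1
        return "accept"                      # rule 2: same ts, new content
    return "reject (out of order)"           # older timestamp

assert classify_line((100, "a"), 101, "a") == "accept"
assert classify_line((100, "a"), 100, "a") == "ignore (exact duplicate)"
assert classify_line((100, "a"), 100, "b") == "accept"
assert classify_line((100, "a"), 99, "b") == "reject (out of order)"
```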
&lt;h3 id=&#34;handoff&#34;&gt;Handoff&lt;/h3&gt;


&lt;div class=&#34;admonition admonition-warning&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Warning&lt;/p&gt;&lt;p&gt;Handoff is a deprecated behavior mainly used in stateless deployments of ingesters, which are discouraged. Instead, it&amp;rsquo;s recommended to use a stateful deployment model together with the 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/wal/&#34;&gt;write ahead log&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;By default, when an ingester is shutting down and tries to leave the hash ring,
it will wait to see if a new ingester tries to enter before flushing and will
try to initiate a handoff. The handoff will transfer all of the tokens and
in-memory chunks owned by the leaving ingester to the new ingester.&lt;/p&gt;
&lt;p&gt;Before joining the hash ring, ingesters will wait in &lt;code&gt;PENDING&lt;/code&gt; state for a
handoff to occur. After a configurable timeout, ingesters in the &lt;code&gt;PENDING&lt;/code&gt; state
that have not received a transfer will join the ring normally, inserting a new
set of tokens.&lt;/p&gt;
&lt;p&gt;This process is used to avoid flushing all chunks when shutting down, which is a
slow process.&lt;/p&gt;
&lt;h3 id=&#34;filesystem-support&#34;&gt;Filesystem support&lt;/h3&gt;
&lt;p&gt;While ingesters do support writing to the filesystem through BoltDB, this only
works in single-process mode as &lt;a href=&#34;#querier&#34;&gt;queriers&lt;/a&gt; need access to the same
back-end store and BoltDB only allows one process to have a lock on the DB at a
given time.&lt;/p&gt;
&lt;h2 id=&#34;query-frontend&#34;&gt;Query frontend&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;query frontend&lt;/strong&gt; is an &lt;strong&gt;optional service&lt;/strong&gt; providing the querier&amp;rsquo;s API endpoints and can be used to accelerate the read path. When the query frontend is in place, incoming query requests should be directed to the query frontend instead of the queriers. The querier service will be still required within the cluster, in order to execute the actual queries.&lt;/p&gt;
&lt;p&gt;The query frontend internally performs some query adjustments and holds queries in an internal queue. In this setup, queriers act as workers which pull jobs from the queue, execute them, and return the results to the query frontend for aggregation. Queriers need to be configured with the query frontend address (via the &lt;code&gt;-querier.frontend-address&lt;/code&gt; CLI flag) in order to connect to the query frontends.&lt;/p&gt;
&lt;p&gt;Query frontends are &lt;strong&gt;stateless&lt;/strong&gt;. However, due to how the internal queue works, it&amp;rsquo;s recommended to run a few query frontend replicas to reap the benefit of fair scheduling. Two replicas should suffice in most cases.&lt;/p&gt;
&lt;h3 id=&#34;queueing&#34;&gt;Queueing&lt;/h3&gt;
&lt;p&gt;If no separate &lt;a href=&#34;#query-scheduler&#34;&gt;query scheduler&lt;/a&gt; component is used, the query frontend also performs basic query queueing. The queue serves several purposes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ensure that large queries, that could cause an out-of-memory (OOM) error in the querier, will be retried on failure. This allows administrators to under-provision memory for queries, or optimistically run more small queries in parallel, which helps to reduce the total cost of ownership (TCO).&lt;/li&gt;
&lt;li&gt;Prevent multiple large requests from being convoyed on a single querier by distributing them across all queriers using a first-in/first-out queue (FIFO).&lt;/li&gt;
&lt;li&gt;Prevent a single tenant from denial-of-service-ing (DOSing) other tenants by fairly scheduling queries between tenants.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;splitting&#34;&gt;Splitting&lt;/h3&gt;
&lt;p&gt;The query frontend splits larger queries into multiple smaller queries, executing these queries in parallel on downstream queriers and stitching the results back together again. This prevents large queries (for example, multi-day queries) from causing out-of-memory issues in a single querier and helps to execute them faster.&lt;/p&gt;
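&lt;p&gt;The splitting idea can be sketched as dividing the query&amp;rsquo;s time range into fixed-size sub-ranges (a hypothetical simplification; the split interval is configurable in Loki, and the helper below is invented for this example):&lt;/p&gt;

```python
from datetime import datetime, timedelta

def split_query(start, end, interval):
    # Break [start, end) into consecutive sub-ranges of at most `interval`;
    # each sub-range can be executed in parallel on a downstream querier,
    # and the frontend stitches the partial results back together.
    splits = []
    cur = start
    while cur < end:
        nxt = min(cur + interval, end)
        splits.append((cur, nxt))
        cur = nxt
    return splits

# A 2.5-day query split on 1-day boundaries yields three sub-queries:
subqueries = split_query(datetime(2024, 1, 1), datetime(2024, 1, 3, 12),
                         timedelta(days=1))
assert len(subqueries) == 3
assert subqueries[-1] == (datetime(2024, 1, 3), datetime(2024, 1, 3, 12))
```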
&lt;h3 id=&#34;caching&#34;&gt;Caching&lt;/h3&gt;
&lt;h4 id=&#34;metric-queries&#34;&gt;Metric queries&lt;/h4&gt;
&lt;p&gt;The query frontend supports caching metric query results and reuses them on subsequent queries. If the cached results are incomplete, the query frontend calculates the required sub-queries and executes them in parallel on downstream queriers. The query frontend can optionally align queries with their step parameter to improve the cacheability of the query results. The result cache is compatible with any Loki caching backend (currently Memcached, Redis, and an in-memory cache).&lt;/p&gt;
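&lt;p&gt;The optional step alignment can be sketched as snapping a query&amp;rsquo;s start time down to a step boundary, so that repeated overlapping queries produce identical sub-queries whose results can be reused from cache (a hypothetical helper; Loki&amp;rsquo;s real implementation also adjusts the end time and handles limits):&lt;/p&gt;

```python
def align_to_step(start_ms, step_ms):
    # Snap a timestamp (in milliseconds) down to the nearest step boundary.
    # Two queries with slightly different start times then share the same
    # aligned sub-queries, improving cache hit rates.
    return (start_ms // step_ms) * step_ms

# With a 60s step, a start time of 1,000,123ms aligns to 960,000ms:
assert align_to_step(1_000_123, 60_000) == 960_000
# Already-aligned timestamps are unchanged:
assert align_to_step(960_000, 60_000) == 960_000
```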
&lt;h4 id=&#34;log-queries&#34;&gt;Log queries&lt;/h4&gt;
&lt;p&gt;The query frontend also supports caching of log queries in the form of a negative cache.
This means that instead of caching the log results for quantized time ranges, Loki caches only empty results for quantized time ranges.
This is more efficient than caching actual results because log queries are limited (usually to 1000 results):
for a query over a long time range that matches only a few lines, caching only actual results
would still require processing a lot of data, in addition to the data from the results cache, to verify that nothing else matches.&lt;/p&gt;
&lt;h4 id=&#34;index-stats-queries&#34;&gt;Index stats queries&lt;/h4&gt;
&lt;p&gt;The query frontend caches index stats query results similar to the &lt;a href=&#34;#metric-queries&#34;&gt;metric query&lt;/a&gt; results.
This cache is only applicable when using single store TSDB.&lt;/p&gt;
&lt;h4 id=&#34;log-volume-queries&#34;&gt;Log volume queries&lt;/h4&gt;
&lt;p&gt;The query frontend caches log volume query results similar to the &lt;a href=&#34;#metric-queries&#34;&gt;metric query&lt;/a&gt; results.
This cache is only applicable when using single store TSDB.&lt;/p&gt;
&lt;h2 id=&#34;query-scheduler&#34;&gt;Query scheduler&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;query scheduler&lt;/strong&gt; is an &lt;strong&gt;optional service&lt;/strong&gt; providing more 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/query-fairness/&#34;&gt;advanced queuing functionality&lt;/a&gt; than the &lt;a href=&#34;#query-frontend&#34;&gt;query frontend&lt;/a&gt;.
When using this component in the Loki deployment, the query frontend pushes split-up queries to the query scheduler, which enqueues them in an internal in-memory queue.
There is a queue for each tenant to guarantee query fairness across all tenants.
The queriers that connect to the query scheduler act as workers that pull their jobs from the queue, execute them, and return the results to the query frontend for aggregation. Queriers therefore need to be configured with the query scheduler address (via the &lt;code&gt;-querier.scheduler-address&lt;/code&gt; CLI flag) in order to connect to the query scheduler.&lt;/p&gt;
&lt;p&gt;Query schedulers are &lt;strong&gt;stateless&lt;/strong&gt;. However, due to the in-memory queue, it&amp;rsquo;s recommended to run more than one replica to keep the benefit of high availability. Two replicas should suffice in most cases.&lt;/p&gt;
&lt;h2 id=&#34;querier&#34;&gt;Querier&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;querier&lt;/strong&gt; service is responsible for executing 
    &lt;a href=&#34;/docs/loki/v3.7.x/query/&#34;&gt;Log Query Language (LogQL)&lt;/a&gt; queries.
The querier can handle HTTP requests from the client directly (in &amp;ldquo;single binary&amp;rdquo; mode, or as part of the read path in &amp;ldquo;simple scalable deployment&amp;rdquo;)
or pull subqueries from the query frontend or query scheduler (in &amp;ldquo;microservice&amp;rdquo; mode).&lt;/p&gt;
&lt;p&gt;It fetches log data from both the ingesters and from long-term storage.
Queriers query all ingesters for in-memory data before falling back to
running the same query against the backend store. Because of the replication
factor, it is possible that the querier may receive duplicate data. To resolve
this, the querier internally &lt;strong&gt;deduplicates&lt;/strong&gt; data that has the same nanosecond
timestamp, label set, and log message.&lt;/p&gt;
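&lt;p&gt;The deduplication can be sketched as keeping the first occurrence of each (timestamp, label set, line) triple (a simplified, hypothetical model of what the querier does internally):&lt;/p&gt;

```python
def deduplicate(entries):
    # entries: iterable of (nanosecond_ts, labels, line) tuples, possibly
    # containing replicas of the same entry from different ingesters.
    # Keep only the first occurrence of each exact triple.
    seen = set()
    out = []
    for ts, labels, line in entries:
        key = (ts, labels, line)
        if key not in seen:
            seen.add(key)
            out.append((ts, labels, line))
    return out

replicated = [
    (1, '{app="api"}', "GET /"),
    (1, '{app="api"}', "GET /"),   # replica from a second ingester: dropped
    (1, '{app="api"}', "POST /"),  # same ts, different line: kept
]
assert len(deduplicate(replicated)) == 2
```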
&lt;h2 id=&#34;index-gateway&#34;&gt;Index Gateway&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;index gateway&lt;/strong&gt; service is responsible for handling and serving metadata queries.
Metadata queries are queries that look up data from the index. The index gateway is only used by &amp;ldquo;shipper stores&amp;rdquo;,
such as 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/tsdb/&#34;&gt;single store TSDB&lt;/a&gt; or 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/boltdb-shipper/&#34;&gt;single store BoltDB&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The query frontend queries the index gateway for the log volume of queries so it can make a decision on how to shard the queries.
The queriers query the index gateway for chunk references for a given query so they know which chunks to fetch and query.&lt;/p&gt;
&lt;p&gt;The index gateway can run in &lt;code&gt;simple&lt;/code&gt; or &lt;code&gt;ring&lt;/code&gt; mode. In &lt;code&gt;simple&lt;/code&gt; mode, each index gateway instance serves all indexes from all tenants.
In &lt;code&gt;ring&lt;/code&gt; mode, index gateways use a consistent hash ring to distribute and shard the indexes per tenant amongst available instances.&lt;/p&gt;
&lt;h2 id=&#34;compactor&#34;&gt;Compactor&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;compactor&lt;/strong&gt; service is used by &amp;ldquo;shipper stores&amp;rdquo;, such as 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/tsdb/&#34;&gt;single store TSDB&lt;/a&gt;
or 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/boltdb-shipper/&#34;&gt;single store BoltDB&lt;/a&gt;, to compact the multiple index files produced by the ingesters
and shipped to object storage into single index files per day and tenant. This makes index lookups more efficient.&lt;/p&gt;
&lt;p&gt;To do so, the compactor downloads the files from object storage at regular intervals, merges them into a single file,
uploads the newly created index, and cleans up the old files.&lt;/p&gt;
&lt;p&gt;Additionally, the compactor is also responsible for 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/retention/&#34;&gt;log retention&lt;/a&gt; and 
    &lt;a href=&#34;/docs/loki/v3.7.x/operations/storage/logs-deletion/&#34;&gt;log deletion&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In a Loki deployment, the compactor service is usually run as a single instance.&lt;/p&gt;
&lt;h2 id=&#34;ruler&#34;&gt;Ruler&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;ruler&lt;/strong&gt; service manages and evaluates rule and alert expressions provided in a rule configuration. The rule configuration
is stored in object storage (or alternatively on the local file system) and can be managed via the ruler API or directly by uploading
the files to object storage.&lt;/p&gt;
&lt;p&gt;Alternatively, the ruler can also delegate rule evaluation to the query frontend.
This mode is called remote rule evaluation and is used to gain the advantages of query splitting, query sharding, and caching
from the query frontend.&lt;/p&gt;
&lt;p&gt;When running multiple rulers, they use a consistent hash ring to distribute rule groups amongst available ruler instances.&lt;/p&gt;
&lt;h2 id=&#34;pattern-ingester&#34;&gt;Pattern ingester&lt;/h2&gt;
&lt;p&gt;The optional &lt;strong&gt;pattern ingester&lt;/strong&gt; component receives log data from the ingesters and scans the logs to detect and aggregate patterns. This can be useful for understanding the structure of your logs at scale. The pattern ingester is used by the pattern feature in Logs Drilldown, which lets you detect similar log lines and add or exclude them from your search.&lt;/p&gt;
&lt;p&gt;The pattern ingester uses a drain algorithm to identify related logs that share the same pattern, and maintains their counts over time. Patterns consist of a number, a string, and a Loki series identifier.&lt;/p&gt;
&lt;p&gt;The pattern ingester exposes a query API, so you can fetch detected patterns. This API is used by the Patterns tab in the Grafana Logs Drilldown plugin.&lt;/p&gt;
&lt;p&gt;This component is disabled by default and must be enabled in your &lt;a href=&#34;/docs/loki/v3.7.x/configure/#supported-contents-and-default-values-of-lokiyaml&#34;&gt;Loki config file&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;bloom-planner&#34;&gt;Bloom Planner&lt;/h2&gt;


&lt;div class=&#34;admonition admonition-warning&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Warning&lt;/p&gt;&lt;p&gt;This feature is an &lt;a href=&#34;/docs/release-life-cycle/&#34;&gt;experimental feature&lt;/a&gt;. Engineering and on-call support is not available.
No SLA is provided.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The Bloom Planner service is responsible for planning the tasks for bloom creation. It runs as a singleton and provides a queue
from which tasks are pulled by the Bloom Builders. The planning runs periodically and takes into account which blooms have already
been built for a given day and tenant and which series need to be newly added.&lt;/p&gt;
&lt;p&gt;This service is also used to apply bloom retention.&lt;/p&gt;
&lt;h2 id=&#34;bloom-builder&#34;&gt;Bloom Builder&lt;/h2&gt;


&lt;div class=&#34;admonition admonition-warning&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Warning&lt;/p&gt;&lt;p&gt;This feature is an &lt;a href=&#34;/docs/release-life-cycle/&#34;&gt;experimental feature&lt;/a&gt;. Engineering and on-call support is not available.
No SLA is provided.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The Bloom Builder service is responsible for processing the tasks created by the Bloom Planner.
The Bloom Builder creates bloom blocks from structured metadata of log entries.
The resulting blooms are grouped in bloom blocks spanning multiple series and chunks from a given day.
This component also builds metadata files to track which blocks are available for each series and TSDB index file.&lt;/p&gt;
&lt;p&gt;The service is stateless and horizontally scalable.&lt;/p&gt;
&lt;h2 id=&#34;bloom-gateway&#34;&gt;Bloom Gateway&lt;/h2&gt;


&lt;div class=&#34;admonition admonition-warning&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Warning&lt;/p&gt;&lt;p&gt;This feature is an &lt;a href=&#34;/docs/release-life-cycle/&#34;&gt;experimental feature&lt;/a&gt;. Engineering and on-call support is not available.
No SLA is provided.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The Bloom Gateway service is responsible for handling and serving chunk-filtering requests.
The index gateway queries the Bloom Gateway when computing chunk references, or when computing shards for a given query.
The gateway service takes a list of chunks and a filtering expression, matches them against the blooms, and
filters out any chunks that do not match the given label filter expression.&lt;/p&gt;
&lt;p&gt;The service is horizontally scalable. When running multiple instances, the client (Index Gateway) shards requests
across instances based on the hash of the bloom blocks that are referenced.&lt;/p&gt;
]]></content><description>&lt;h1 id="loki-components">Loki components&lt;/h1>
&lt;iframe width="560" height="315" src='https://www.youtube.com/embed/_hv4i84Z68s' title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen>&lt;/iframe>
&lt;p>Loki is a modular system that contains many components that can either be run together (in &amp;ldquo;single binary&amp;rdquo; mode with target &lt;code>all&lt;/code>),
in logical groups (in &amp;ldquo;simple scalable deployment&amp;rdquo; mode with targets &lt;code>read&lt;/code>, &lt;code>write&lt;/code>, &lt;code>backend&lt;/code>), or individually (in &amp;ldquo;microservice&amp;rdquo; mode).
For more information see
&lt;a href="/docs/loki/v3.7.x/get-started/deployment-modes/">Deployment modes&lt;/a>.&lt;/p></description></item><item><title>Loki deployment modes</title><link>https://grafana.com/docs/loki/v3.7.x/get-started/deployment-modes/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/get-started/deployment-modes/</guid><content><![CDATA[&lt;h1 id=&#34;loki-deployment-modes&#34;&gt;Loki deployment modes&lt;/h1&gt;
&lt;p&gt;Loki is a distributed system consisting of many microservices. It also has a unique build model where all of those microservices exist within the same binary.&lt;/p&gt;
&lt;p&gt;You can configure the behavior of the single binary with the &lt;code&gt;-target&lt;/code&gt; command-line flag to specify which microservices will run on startup. You can further configure each of the components in the &lt;code&gt;loki.yaml&lt;/code&gt; file.&lt;/p&gt;
&lt;p&gt;Because Loki decouples the data it stores from the software which ingests and queries it, you can easily redeploy a cluster under a different mode as your needs change, with minimal or no configuration changes.&lt;/p&gt;
&lt;h2 id=&#34;monolithic-mode&#34;&gt;Monolithic mode&lt;/h2&gt;
&lt;p&gt;The simplest mode of operation is the monolithic deployment mode. You enable monolithic mode by setting the &lt;code&gt;-target=all&lt;/code&gt; command line parameter. This mode runs all of Loki’s microservice components inside a single process as a single binary or Docker image.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;../monolithic-mode.png&#34;
  alt=&#34;monolithic mode diagram&#34;/&gt;&lt;/p&gt;
&lt;p&gt;Monolithic mode is useful for getting started quickly to experiment with Loki, as well as for small read/write volumes of up to approximately 20GB per day.&lt;/p&gt;
&lt;p&gt;You can horizontally scale a monolithic mode deployment to more instances by using a shared object store, and by configuring the 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#common&#34;&gt;&lt;code&gt;ring&lt;/code&gt; section&lt;/a&gt; of the &lt;code&gt;loki.yaml&lt;/code&gt; file to share state between all instances, but the recommendation is to use microservices deployment mode if you need to scale your deployment.&lt;/p&gt;
&lt;p&gt;You can configure high availability by running two Loki instances using &lt;code&gt;memberlist_config&lt;/code&gt; configuration and a shared object store and setting the &lt;code&gt;replication_factor&lt;/code&gt; to &lt;code&gt;3&lt;/code&gt;. You route traffic to all the Loki instances in a round robin fashion.&lt;/p&gt;
&lt;p&gt;Query parallelization is limited by the number of instances and the setting &lt;code&gt;max_query_parallelism&lt;/code&gt; which is defined in the &lt;code&gt;loki.yaml&lt;/code&gt; file.&lt;/p&gt;
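
&lt;p&gt;A sketch of the relevant &lt;code&gt;loki.yaml&lt;/code&gt; fragments for such a highly available setup might look like the following. The bucket name, member addresses, and parallelism value are placeholders for illustration:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;common:
  replication_factor: 3
  storage:
    s3:
      # Shared object store used by all instances.
      bucketnames: loki-chunks
      region: us-east-1

memberlist:
  join_members:
    # Example addresses; list your other Loki instances here.
    - loki-1:7946
    - loki-2:7946

limits_config:
  # Upper bound on per-query parallelism; tune to your instance count.
  max_query_parallelism: 32&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;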
&lt;h2 id=&#34;simple-scalable&#34;&gt;Simple Scalable&lt;/h2&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Simple Scalable Deployment (SSD) mode is being deprecated. The timeline for the deprecation is to be determined (TBD), but will happen before Loki 4.0 is released.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The simple scalable deployment is the default configuration installed by the &lt;a href=&#34;../../setup/install/helm/&#34;&gt;Loki Helm Chart&lt;/a&gt;. This deployment mode is the easiest way to deploy Loki at scale. It strikes a balance between deploying in &lt;a href=&#34;#monolithic-mode&#34;&gt;monolithic mode&lt;/a&gt; or deploying each component as a &lt;a href=&#34;#microservices-mode&#34;&gt;separate microservice&lt;/a&gt;. Simple scalable deployment is also referred to as SSD.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;This deployment mode is sometimes referred to by the acronym SSD for simple scalable deployment, not to be confused with solid state drives. Loki uses an object store.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Loki’s simple scalable deployment mode separates execution paths into read, write, and backend targets. These targets can be scaled independently, letting you customize your Loki deployment to meet your business needs for log ingestion and log query so that your infrastructure costs better match how you use Loki.&lt;/p&gt;
&lt;p&gt;The simple scalable deployment mode can scale close to a TB of logs per day. Even though scaling it further may be possible, at that scale, the microservices mode will be a better choice in terms of scalability and ease of operations.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;../scalable-monolithic-mode.png&#34;
  alt=&#34;Simple scalable mode diagram&#34;/&gt;&lt;/p&gt;
&lt;p&gt;The three execution paths in simple scalable mode are each activated by appending the following arguments to Loki on startup:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;-target=write&lt;/code&gt; - The write target is stateful and is controlled by a Kubernetes StatefulSet. It contains the following components:
&lt;ul&gt;
&lt;li&gt;Distributor&lt;/li&gt;
&lt;li&gt;Ingester&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-target=read&lt;/code&gt; - The read target is stateless and can be run as a Kubernetes Deployment that can be scaled automatically. (Note that in the official Helm chart it is currently deployed as a StatefulSet.) It contains the following components:
&lt;ul&gt;
&lt;li&gt;Query Frontend&lt;/li&gt;
&lt;li&gt;Querier&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-target=backend&lt;/code&gt; - The backend target is stateful, and is controlled by a Kubernetes StatefulSet. It contains the following components:
&lt;ul&gt;
&lt;li&gt;Compactor&lt;/li&gt;
&lt;li&gt;Index Gateway&lt;/li&gt;
&lt;li&gt;Query Scheduler&lt;/li&gt;
&lt;li&gt;Ruler&lt;/li&gt;
&lt;li&gt;Bloom Planner (experimental)&lt;/li&gt;
&lt;li&gt;Bloom Builder (experimental)&lt;/li&gt;
&lt;li&gt;Bloom Gateway (experimental)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The simple scalable deployment mode requires a reverse proxy to be deployed in front of Loki, to direct client API requests to either the read or write nodes. The Loki Helm chart includes a default reverse proxy configuration, using Nginx.&lt;/p&gt;
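
&lt;p&gt;One way to picture the three execution paths is a Docker Compose sketch in which the same Loki image is started three times with different &lt;code&gt;-target&lt;/code&gt; values behind a reverse proxy. The service names, image tags, and proxy choice below are illustrative only:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;services:
  write:
    image: grafana/loki:3.7.0
    command: -config.file=/etc/loki/loki.yaml -target=write
  read:
    image: grafana/loki:3.7.0
    command: -config.file=/etc/loki/loki.yaml -target=read
  backend:
    image: grafana/loki:3.7.0
    command: -config.file=/etc/loki/loki.yaml -target=backend
  gateway:
    # Reverse proxy that routes pushes to the write target
    # and queries to the read target.
    image: nginx:latest&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;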
&lt;h2 id=&#34;microservices-mode&#34;&gt;Microservices mode&lt;/h2&gt;
&lt;p&gt;The microservices deployment mode runs components of Loki as distinct processes. The microservices deployment is also referred to as a Distributed deployment. Each process is invoked specifying its &lt;code&gt;target&lt;/code&gt;.
For release 3.3 the components are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Bloom Builder (experimental)&lt;/li&gt;
&lt;li&gt;Bloom Gateway (experimental)&lt;/li&gt;
&lt;li&gt;Bloom Planner (experimental)&lt;/li&gt;
&lt;li&gt;Compactor&lt;/li&gt;
&lt;li&gt;Distributor&lt;/li&gt;
&lt;li&gt;Index Gateway&lt;/li&gt;
&lt;li&gt;Ingester&lt;/li&gt;
&lt;li&gt;Overrides Exporter&lt;/li&gt;
&lt;li&gt;Querier&lt;/li&gt;
&lt;li&gt;Query Frontend&lt;/li&gt;
&lt;li&gt;Query Scheduler&lt;/li&gt;
&lt;li&gt;Ruler&lt;/li&gt;
&lt;li&gt;Table Manager (deprecated)&lt;/li&gt;
&lt;/ul&gt;


&lt;div class=&#34;admonition admonition-tip&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Tip&lt;/p&gt;&lt;p&gt;You can see the complete list of targets for your version of Loki by running Loki with the flag &lt;code&gt;-list-targets&lt;/code&gt;, for example:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;Bash&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-bash&#34;&gt;docker run docker.io/grafana/loki:3.7.0 -config.file=/etc/loki/local-config.yaml -list-targets&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;../microservices-mode.png&#34;
  alt=&#34;Microservices mode diagram&#34;/&gt;&lt;/p&gt;
&lt;p&gt;Running components as individual microservices provides more granular control, letting you scale each component independently to better match your specific use case.&lt;/p&gt;
&lt;p&gt;Microservices mode can be the most efficient way to run Loki at scale. However, it is also the most complex mode to set up and maintain.&lt;/p&gt;
&lt;p&gt;Microservices mode is only recommended for very large Loki clusters or for operators who need more precise control over scaling and cluster operations.&lt;/p&gt;
&lt;p&gt;Microservices mode is designed for Kubernetes deployments.
A &lt;a href=&#34;https://github.com/grafana/helm-charts/tree/main/charts/loki-distributed&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;community-supported Helm chart&lt;/a&gt; is available for deploying Loki in microservices mode.&lt;/p&gt;
]]></content><description>&lt;h1 id="loki-deployment-modes">Loki deployment modes&lt;/h1>
&lt;p>Loki is a distributed system consisting of many microservices. It also has a unique build model where all of those microservices exist within the same binary.&lt;/p></description></item><item><title>Understand labels</title><link>https://grafana.com/docs/loki/v3.7.x/get-started/labels/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/get-started/labels/</guid><content><![CDATA[&lt;h1 id=&#34;understand-labels&#34;&gt;Understand labels&lt;/h1&gt;
&lt;p&gt;Labels are a crucial part of Loki. They allow Loki to organize and group together log messages into log streams. Each log stream must have at least one label to be stored and queried in Loki.&lt;/p&gt;
&lt;p&gt;In this topic we&amp;rsquo;ll learn about labels and why your choice of labels is important when shipping logs to Loki.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Labels are intended to store 
    &lt;a href=&#34;/docs/loki/v3.7.x/get-started/labels/cardinality/&#34;&gt;low-cardinality&lt;/a&gt; values that describe the source of your logs. If you frequently search high-cardinality data in your logs, you should use 
    &lt;a href=&#34;/docs/loki/v3.7.x/get-started/labels/structured-metadata/&#34;&gt;structured metadata&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;understand-labels-1&#34;&gt;Understand labels&lt;/h2&gt;
&lt;p&gt;In Loki, the content of each log line is not indexed. Instead, log entries are grouped into streams which are indexed with labels.&lt;/p&gt;
&lt;p&gt;A label is a key-value pair, for example all of the following are labels:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;deployment_environment = development&lt;/li&gt;
&lt;li&gt;cloud_region = us-west-1&lt;/li&gt;
&lt;li&gt;namespace = grafana-server&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A set of log messages which share all the labels above would be called a log stream. When Loki performs searches, it first looks for all messages in your chosen stream, and then iterates through the logs in the stream to perform your query.&lt;/p&gt;
&lt;p&gt;Labeling will affect your queries, which in turn will affect your dashboards.
It’s worth spending the time to think about your labeling strategy before you begin ingesting logs to Loki.&lt;/p&gt;
&lt;h2 id=&#34;default-labels-for-all-users&#34;&gt;Default labels for all users&lt;/h2&gt;
&lt;p&gt;Loki does not parse or process your log messages on ingestion. However, depending on which client you use to collect logs, you may have some labels automatically applied to your logs.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;service_name&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Loki automatically tries to populate a default &lt;code&gt;service_name&lt;/code&gt; label while ingesting logs. The service name label is used to find and explore logs in the following Grafana and Grafana Cloud features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Logs Drilldown&lt;/li&gt;
&lt;li&gt;Grafana Cloud Application Observability&lt;/li&gt;
&lt;/ul&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;If you are already applying a &lt;code&gt;service_name&lt;/code&gt;, Loki will use that value. For example, if you are using the Kubernetes monitoring Helm Chart, the Alloy configuration applies a &lt;code&gt;service_name&lt;/code&gt; by default.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Loki will attempt to create the &lt;code&gt;service_name&lt;/code&gt; label by looking for the following labels in this order:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;service_name&lt;/li&gt;
&lt;li&gt;service&lt;/li&gt;
&lt;li&gt;app&lt;/li&gt;
&lt;li&gt;application&lt;/li&gt;
&lt;li&gt;name&lt;/li&gt;
&lt;li&gt;app_kubernetes_io_name&lt;/li&gt;
&lt;li&gt;container&lt;/li&gt;
&lt;li&gt;container_name&lt;/li&gt;
&lt;li&gt;component&lt;/li&gt;
&lt;li&gt;workload&lt;/li&gt;
&lt;li&gt;job&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If no label is found matching the list, a value of &lt;code&gt;unknown_service&lt;/code&gt; is applied.&lt;/p&gt;
&lt;p&gt;You can change this list by providing a list of labels to &lt;code&gt;discover_service_name&lt;/code&gt; in the 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#limits_config&#34;&gt;limits_config&lt;/a&gt; block.  If you are using Grafana Cloud, contact support to configure this setting.&lt;/p&gt;
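
&lt;p&gt;For example, to restrict service-name discovery to a shorter list of labels, you could set something like the following in &lt;code&gt;loki.yaml&lt;/code&gt;; the label list shown is illustrative:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;limits_config:
  # Only these labels are considered when deriving service_name.
  discover_service_name:
    - service_name
    - app
    - app_kubernetes_io_name&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;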
&lt;h2 id=&#34;default-labels-for-opentelemetry&#34;&gt;Default labels for OpenTelemetry&lt;/h2&gt;
&lt;p&gt;If you are using either Grafana Alloy or the OpenTelemetry Collector as your Loki client, then Loki automatically assigns some of the OTel resource attributes as labels. Resource attributes map well to index labels in Loki, since both usually identify the source of the logs.&lt;/p&gt;
&lt;p&gt;By default, the following resource attributes will be stored as labels, with periods (&lt;code&gt;.&lt;/code&gt;) replaced with underscores (&lt;code&gt;_&lt;/code&gt;), while the remaining attributes are stored as 
    &lt;a href=&#34;/docs/loki/v3.7.x/get-started/labels/structured-metadata/&#34;&gt;structured metadata&lt;/a&gt; with each log entry:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;cloud.availability_zone&lt;/li&gt;
&lt;li&gt;cloud.region&lt;/li&gt;
&lt;li&gt;container.name&lt;/li&gt;
&lt;li&gt;deployment.environment.name&lt;/li&gt;
&lt;li&gt;k8s.cluster.name&lt;/li&gt;
&lt;li&gt;k8s.container.name&lt;/li&gt;
&lt;li&gt;k8s.cronjob.name&lt;/li&gt;
&lt;li&gt;k8s.daemonset.name&lt;/li&gt;
&lt;li&gt;k8s.deployment.name&lt;/li&gt;
&lt;li&gt;k8s.job.name&lt;/li&gt;
&lt;li&gt;k8s.namespace.name&lt;/li&gt;
&lt;li&gt;k8s.pod.name&lt;/li&gt;
&lt;li&gt;k8s.replicaset.name&lt;/li&gt;
&lt;li&gt;k8s.statefulset.name&lt;/li&gt;
&lt;li&gt;service.instance.id&lt;/li&gt;
&lt;li&gt;service.name&lt;/li&gt;
&lt;li&gt;service.namespace&lt;/li&gt;
&lt;/ul&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Because Loki has a default limit of 15 index labels, we recommend storing only select resource attributes as labels. Although the default config selects more than 15 Resource Attributes, some are mutually exclusive.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;



&lt;div class=&#34;admonition admonition-tip&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Tip&lt;/p&gt;&lt;p&gt;For Grafana Cloud Logs, see the &lt;a href=&#34;/docs/grafana-cloud/send-data/otlp/otlp-format-considerations/#logs&#34;&gt;current OpenTelemetry guidance&lt;/a&gt;. The Faro specific attributes &lt;code&gt;app_id&lt;/code&gt;, &lt;code&gt;kind&lt;/code&gt;, and &lt;code&gt;app_key&lt;/code&gt; are promoted to labels for Grafana Cloud Logs but not Loki.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;The default list of resource attributes to store as labels can be configured using &lt;code&gt;default_resource_attributes_as_index_labels&lt;/code&gt; under the 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#distributor&#34;&gt;distributor&amp;rsquo;s otlp_config&lt;/a&gt;. You can set global limits using 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#limits_config&#34;&gt;limits_config.otlp_config&lt;/a&gt;. If you are using Grafana Cloud, contact support to configure this setting.&lt;/p&gt;
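
&lt;p&gt;As a sketch, overriding the default list of resource attributes stored as index labels might look like the following in &lt;code&gt;loki.yaml&lt;/code&gt;; the attributes chosen here are illustrative:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;distributor:
  otlp_config:
    # Replaces the built-in list; attributes not listed here are
    # stored as structured metadata instead of index labels.
    default_resource_attributes_as_index_labels:
      - service.name
      - service.namespace
      - k8s.cluster.name
      - k8s.namespace.name&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;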


&lt;div class=&#34;admonition admonition-caution&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Caution&lt;/p&gt;&lt;p&gt;Because of the potential for high 
    &lt;a href=&#34;/docs/loki/v3.7.x/get-started/labels/cardinality/&#34;&gt;cardinality&lt;/a&gt;, &lt;code&gt;k8s.pod.name&lt;/code&gt; and &lt;code&gt;service.instance.id&lt;/code&gt; are no longer recommended as default labels. But because removing these resource attributes from the default labels would be a breaking change for existing users, they have not yet been deprecated as default labels. If you are a new user of Grafana Loki, we recommend that you modify your Alloy or OpenTelemetry Collector configuration to convert &lt;code&gt;k8s.pod.name&lt;/code&gt; and &lt;code&gt;service.instance.id&lt;/code&gt; from index labels to structured metadata.
For sample configurations, refer to 
    &lt;a href=&#34;/docs/loki/v3.7.x/get-started/labels/modify-default-labels/&#34;&gt;Modify default labels&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;labeling-is-iterative&#34;&gt;Labeling is iterative&lt;/h2&gt;
&lt;p&gt;You want to start with a small set of labels. While the default labels assigned by Grafana Alloy, the OpenTelemetry Collector, or the Kubernetes Monitoring Helm chart may meet your needs, over time you may find that you need to modify your labeling strategy.&lt;/p&gt;
&lt;p&gt;Once you understand how your first set of labels works and you understand how to apply and query with those labels, you may find that they don’t meet your query patterns.  You may need to modify or change your labels and test your queries again.&lt;/p&gt;
&lt;p&gt;Settling on the right labels for your business needs may require multiple rounds of testing. This should be expected as you continue to tune your Loki environment to meet your business requirements.&lt;/p&gt;
&lt;h2 id=&#34;create-low-cardinality-labels&#34;&gt;Create low cardinality labels&lt;/h2&gt;
&lt;p&gt;
    &lt;a href=&#34;/docs/loki/v3.7.x/get-started/labels/cardinality/&#34;&gt;Cardinality&lt;/a&gt; refers to the combination of unique labels and values which impacts the number of log streams you create.  High cardinality causes Loki to build a huge index and to flush thousands of tiny chunks to the object store. Loki performs very poorly when your labels have high cardinality. If not accounted for, high cardinality will significantly reduce the performance and cost-effectiveness of Loki.&lt;/p&gt;
&lt;p&gt;High cardinality can result from using labels with an unbounded or large set of possible values, such as timestamp or ip_address, &lt;strong&gt;or&lt;/strong&gt; from applying too many labels, even if they have a small and finite set of values.&lt;/p&gt;
&lt;p&gt;High cardinality can lead to significant performance degradation. Prefer fewer labels, which have bounded values.&lt;/p&gt;
&lt;h2 id=&#34;creating-custom-labels&#34;&gt;Creating custom labels&lt;/h2&gt;


&lt;div class=&#34;admonition admonition-tip&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Tip&lt;/p&gt;&lt;p&gt;Many log collectors such as Grafana Alloy, or the Kubernetes Monitoring Helm chart, will automatically assign appropriate labels for you, so you don&amp;rsquo;t need to create your own labeling strategy.  For most use cases, you can just accept the default labels.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Usually, labels describe the source of the log, for example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the namespace or additional logical grouping of applications&lt;/li&gt;
&lt;li&gt;the cluster and/or region where the logs were produced&lt;/li&gt;
&lt;li&gt;the filename of the source log file on disk&lt;/li&gt;
&lt;li&gt;the hostname where the log was produced, if the environment has individually named machines or virtual machines.  If you have an environment with ephemeral machines or virtual machines, the hostname should be stored in 
    &lt;a href=&#34;/docs/loki/v3.7.x/get-started/labels/structured-metadata/&#34;&gt;structured metadata&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If your logs had the example labels above, then you might query them in LogQL like this:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;{namespace=&amp;quot;mynamespace&amp;quot;, cluster=&amp;quot;cluster123&amp;quot;, filename=&amp;quot;/var/log/myapp.log&amp;quot;}&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Unlike index-based log aggregators, Loki doesn&amp;rsquo;t require you to create a label for every field that you might wish to search in your log content. Labels are only needed to organize and identify your log streams. Loki performs searches by iterating over a log stream in a highly parallelized fashion to look for a given string.&lt;/p&gt;
&lt;p&gt;For more information on how Loki performs searches, see the 
    &lt;a href=&#34;/docs/loki/v3.7.x/query/&#34;&gt;Query section&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This means that you don&amp;rsquo;t need to add labels for things inside the log message, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;log level&lt;/li&gt;
&lt;li&gt;log message&lt;/li&gt;
&lt;li&gt;exception name&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That being said, in some cases you may wish to add some extra labels, which can help to narrow down your log streams even further. When adding custom labels, follow these principles:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DO use fewer labels; aim for a maximum of 10 to 15 labels. Fewer labels means a smaller index, which leads to better performance.&lt;/li&gt;
&lt;li&gt;DO be as specific with your labels as you can be; the less searching that Loki has to do, the faster your result is returned.&lt;/li&gt;
&lt;li&gt;DO create labels with long-lived values, not unbounded values. A good label has a stable set of values over time, even if there are a lot of them. If just one label value changes, this creates a new stream.&lt;/li&gt;
&lt;li&gt;DO create labels based on terms that your users will actually be querying on.&lt;/li&gt;
&lt;li&gt;DON&amp;rsquo;T create labels for very specific searches (for example, user ID or customer ID) or seldom used searches (searches performed maybe once a year).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;label-format&#34;&gt;Label format&lt;/h3&gt;
&lt;p&gt;Loki places the same restrictions on label naming as &lt;a href=&#34;https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Prometheus&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It may contain ASCII letters and digits, as well as underscores and colons. It must match the regex &lt;code&gt;[a-zA-Z_:][a-zA-Z0-9_:]*&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Unsupported characters in the label should be converted to an underscore. For example, the label &lt;code&gt;app.kubernetes.io/name&lt;/code&gt; should be written as &lt;code&gt;app_kubernetes_io_name&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;However, do not begin and end your label names with double underscores, as this naming convention is used for internal labels, for example, &lt;code&gt;__stream_shard__&lt;/code&gt;, that are hidden by default in the label browser, query builder, and autocomplete to avoid creating confusion for users.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In Loki, you do not need to add labels based on the content of the log message.&lt;/p&gt;
&lt;h3 id=&#34;labels-and-ingestion-order&#34;&gt;Labels and ingestion order&lt;/h3&gt;
&lt;p&gt;Loki supports ingesting out-of-order log entries. Out-of-order writes are enabled globally by default, but can be disabled/enabled on a cluster or per-tenant basis.  If you plan to ingest out-of-order log entries, your label selection is important.  We recommend trying to find a way to use labels to separate the streams so they can be ingested separately.&lt;/p&gt;
&lt;p&gt;Entries in a given log stream (identified by a given set of label names and values) must be ingested in order, within the default two-hour time window. If you try to send entries that are too old for a given log stream, Loki will respond with an error that the entry is too far behind.&lt;/p&gt;
&lt;p&gt;For systems with different ingestion delays and shipping, use labels to create separate streams. Instead of:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;{environment=&amp;quot;production&amp;quot;}&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;You may separate the log stream into:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;{environment=&amp;quot;production&amp;quot;, app=&amp;quot;slow_app&amp;quot;}&lt;/code&gt;
&lt;code&gt;{environment=&amp;quot;production&amp;quot;, app=&amp;quot;fast_app&amp;quot;}&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Now &amp;ldquo;fast_app&amp;rdquo; and &amp;ldquo;slow_app&amp;rdquo; will ship logs to different streams, allowing each to maintain its order of ingestion.&lt;/p&gt;
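
&lt;p&gt;The ingestion-order behavior described above is governed by a few settings. A hedged sketch of the relevant &lt;code&gt;loki.yaml&lt;/code&gt; fragments, using values that match the documented defaults:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;limits_config:
  # Out-of-order writes are accepted by default; can be set per tenant.
  unordered_writes: true
  reject_old_samples: true
  reject_old_samples_max_age: 168h

ingester:
  # Bounds the time window within which out-of-order entries
  # for a stream are accepted (two hours by default).
  max_chunk_age: 2h&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;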
&lt;h2 id=&#34;loki-labels-examples&#34;&gt;Loki labels examples&lt;/h2&gt;
&lt;p&gt;The way that labels are added to logs is configured in the client that you use to send logs to Loki.  The specific configuration will be different for each client.&lt;/p&gt;
&lt;h3 id=&#34;alloy-example&#34;&gt;Alloy example&lt;/h3&gt;
&lt;p&gt;Grafana Labs recommends using Grafana Alloy to send logs to Loki.  Here is an example configuration:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;Alloy&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-alloy&#34;&gt;
local.file_match &amp;#34;tmplogs&amp;#34; {
    path_targets = [{&amp;#34;__path__&amp;#34; = &amp;#34;/tmp/alloy-logs/*.log&amp;#34;}]
}

loki.source.file &amp;#34;local_files&amp;#34; {
    targets    = local.file_match.tmplogs.targets
    forward_to = [loki.process.add_new_label.receiver]
}

loki.process &amp;#34;add_new_label&amp;#34; {
    // Extract the value of &amp;#34;level&amp;#34; from the log line and add it to the extracted map as &amp;#34;extracted_level&amp;#34;
    // You could also use &amp;#34;level&amp;#34; = &amp;#34;&amp;#34;, which would extract the value of &amp;#34;level&amp;#34; and add it to the extracted map as &amp;#34;level&amp;#34;
    // but to make it explicit for this example, we will use a different name.
    //
    // The extracted map will be covered in more detail in the next section.
    stage.logfmt {
        mapping = {
            &amp;#34;extracted_level&amp;#34; = &amp;#34;level&amp;#34;,
        }
    }

    // Add the value of &amp;#34;extracted_level&amp;#34; from the extracted map as a &amp;#34;level&amp;#34; label
    stage.labels {
        values = {
            &amp;#34;level&amp;#34; = &amp;#34;extracted_level&amp;#34;,
        }
    }

    forward_to = [loki.relabel.add_static_label.receiver]
}

loki.relabel &amp;#34;add_static_label&amp;#34; {
    forward_to = [loki.write.local_loki.receiver]

    rule {
        target_label = &amp;#34;os&amp;#34;
        replacement  = constants.os
    }
}

loki.write &amp;#34;local_loki&amp;#34; {
    endpoint {
        url = &amp;#34;http://localhost:3100/loki/api/v1/push&amp;#34;
    }
}&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h3 id=&#34;cardinality-examples&#34;&gt;Cardinality examples&lt;/h3&gt;
&lt;p&gt;The two previous examples use statically defined labels with a single value; however, there are ways to dynamically define labels. Let&amp;rsquo;s take a look using the Apache log and a massive regex you could use to parse such a log line:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;nohighlight&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-nohighlight&#34;&gt;11.11.11.11 - frank [25/Jan/2000:14:00:01 -0500] &amp;#34;GET /1986.js HTTP/1.1&amp;#34; 200 932 &amp;#34;-&amp;#34; &amp;#34;Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6&amp;#34;&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;- job_name: system
  pipeline_stages:
    - regex:
        expression: &amp;#34;^(?P&amp;lt;ip&amp;gt;\\S&amp;#43;) (?P&amp;lt;identd&amp;gt;\\S&amp;#43;) (?P&amp;lt;user&amp;gt;\\S&amp;#43;) \\[(?P&amp;lt;timestamp&amp;gt;[\\w:/]&amp;#43;\\s[&amp;#43;\\-]\\d{4})\\] \&amp;#34;(?P&amp;lt;action&amp;gt;\\S&amp;#43;)\\s?(?P&amp;lt;path&amp;gt;\\S&amp;#43;)?\\s?(?P&amp;lt;protocol&amp;gt;\\S&amp;#43;)?\&amp;#34; (?P&amp;lt;status_code&amp;gt;\\d{3}|-) (?P&amp;lt;size&amp;gt;\\d&amp;#43;|-)\\s?\&amp;#34;?(?P&amp;lt;referer&amp;gt;[^\&amp;#34;]*)\&amp;#34;?\\s?\&amp;#34;?(?P&amp;lt;useragent&amp;gt;[^\&amp;#34;]*)?\&amp;#34;?$&amp;#34;
    - labels:
        action:
        status_code:
  static_configs:
    - targets:
        - localhost
      labels:
        job: apache
        env: dev
        __path__: /var/log/apache.log&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that can be used for several purposes while that log line is processed; the temporary data is then discarded. Much more detail about this can be found in the Alloy &lt;a href=&#34;/docs/alloy/latest/reference/components/loki/loki.process/&#34;&gt;&lt;code&gt;loki.process&lt;/code&gt;&lt;/a&gt; documentation.&lt;/p&gt;
&lt;p&gt;From that regex, we will be using two of the capture groups to dynamically set two labels based on content from the log line itself:&lt;/p&gt;
&lt;p&gt;action (for example, &lt;code&gt;action=&amp;quot;GET&amp;quot;&lt;/code&gt;, &lt;code&gt;action=&amp;quot;POST&amp;quot;&lt;/code&gt;)&lt;/p&gt;
&lt;p&gt;status_code (for example, &lt;code&gt;status_code=&amp;quot;200&amp;quot;&lt;/code&gt;, &lt;code&gt;status_code=&amp;quot;400&amp;quot;&lt;/code&gt;)&lt;/p&gt;
&lt;p&gt;And now let&amp;rsquo;s walk through a few example lines:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-nohighlight&#34;&gt;11.11.11.11 - frank [25/Jan/2000:14:00:01 -0500] &amp;#34;GET /1986.js HTTP/1.1&amp;#34; 200 932 &amp;#34;-&amp;#34; &amp;#34;Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6&amp;#34;
11.11.11.12 - frank [25/Jan/2000:14:00:02 -0500] &amp;#34;POST /1986.js HTTP/1.1&amp;#34; 200 932 &amp;#34;-&amp;#34; &amp;#34;Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6&amp;#34;
11.11.11.13 - frank [25/Jan/2000:14:00:03 -0500] &amp;#34;GET /1986.js HTTP/1.1&amp;#34; 400 932 &amp;#34;-&amp;#34; &amp;#34;Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6&amp;#34;
11.11.11.14 - frank [25/Jan/2000:14:00:04 -0500] &amp;#34;POST /1986.js HTTP/1.1&amp;#34; 400 932 &amp;#34;-&amp;#34; &amp;#34;Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In Loki the following streams would be created:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-nohighlight&#34;&gt;{job=&amp;#34;apache&amp;#34;,env=&amp;#34;dev&amp;#34;,action=&amp;#34;GET&amp;#34;,status_code=&amp;#34;200&amp;#34;} 11.11.11.11 - frank [25/Jan/2000:14:00:01 -0500] &amp;#34;GET /1986.js HTTP/1.1&amp;#34; 200 932 &amp;#34;-&amp;#34; &amp;#34;Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6&amp;#34;
{job=&amp;#34;apache&amp;#34;,env=&amp;#34;dev&amp;#34;,action=&amp;#34;POST&amp;#34;,status_code=&amp;#34;200&amp;#34;} 11.11.11.12 - frank [25/Jan/2000:14:00:02 -0500] &amp;#34;POST /1986.js HTTP/1.1&amp;#34; 200 932 &amp;#34;-&amp;#34; &amp;#34;Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6&amp;#34;
{job=&amp;#34;apache&amp;#34;,env=&amp;#34;dev&amp;#34;,action=&amp;#34;GET&amp;#34;,status_code=&amp;#34;400&amp;#34;} 11.11.11.13 - frank [25/Jan/2000:14:00:03 -0500] &amp;#34;GET /1986.js HTTP/1.1&amp;#34; 400 932 &amp;#34;-&amp;#34; &amp;#34;Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6&amp;#34;
{job=&amp;#34;apache&amp;#34;,env=&amp;#34;dev&amp;#34;,action=&amp;#34;POST&amp;#34;,status_code=&amp;#34;400&amp;#34;} 11.11.11.14 - frank [25/Jan/2000:14:00:04 -0500] &amp;#34;POST /1986.js HTTP/1.1&amp;#34; 400 932 &amp;#34;-&amp;#34; &amp;#34;Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Those four log lines would become four separate streams and start filling four separate chunks.&lt;/p&gt;
&lt;p&gt;Any additional log lines that match those combinations of labels/values would be added to the existing stream. If another unique combination of labels comes in (for example, &lt;code&gt;status_code=&amp;quot;500&amp;quot;&lt;/code&gt;) another new stream is created.&lt;/p&gt;
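&lt;p&gt;The grouping rule above can be sketched in a few lines of Python. This is a toy illustration of how lines with identical label sets share one stream, not Loki&amp;rsquo;s actual implementation:&lt;/p&gt;

```python
# Toy sketch: lines with identical label sets append to the same stream;
# any new label combination creates a new stream.
from collections import defaultdict

streams = defaultdict(list)  # label set -> list of log lines

def push(labels: dict, line: str):
    # The stream key is the full, sorted set of label name/value pairs.
    key = tuple(sorted(labels.items()))
    streams[key].append(line)

push({"job": "apache", "env": "dev", "action": "GET", "status_code": "200"}, "line 1")
push({"job": "apache", "env": "dev", "action": "GET", "status_code": "200"}, "line 2")
push({"job": "apache", "env": "dev", "action": "GET", "status_code": "500"}, "line 3")

print(len(streams))  # 2: the new status_code created a second stream
```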
&lt;p&gt;Now imagine setting a label for &lt;code&gt;ip&lt;/code&gt;. Not only does every request from a user become a unique stream; every request with a different action or status_code from the same user also gets its own stream.&lt;/p&gt;
&lt;p&gt;Doing some quick math: with four common actions (GET, PUT, POST, DELETE) and four common status codes (though there could be more), that is 4 × 4 = 16 streams and 16 separate chunks. Multiply that by every user if you use a label for &lt;code&gt;ip&lt;/code&gt;, and you can quickly have thousands or tens of thousands of streams.&lt;/p&gt;
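&lt;p&gt;That arithmetic can be written out as a rough sketch; the IP count here is a made-up illustration, not a measurement:&lt;/p&gt;

```python
# Rough stream-cardinality estimate for the label scheme above.
actions = 4         # GET, PUT, POST, DELETE
status_codes = 4    # for example: 200, 400, 404, 500
unique_ips = 1_000  # hypothetical number of distinct client IPs

streams_without_ip = actions * status_codes
streams_with_ip = streams_without_ip * unique_ips

print(streams_without_ip)  # 16
print(streams_with_ip)     # 16000
```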
]]></content><description>&lt;h1 id="understand-labels">Understand labels&lt;/h1>
&lt;p>Labels are a crucial part of Loki. They allow Loki to organize and group together log messages into log streams. Each log stream must have at least one label to be stored and queried in Loki.&lt;/p></description></item><item><title>Consistent hash rings</title><link>https://grafana.com/docs/loki/v3.7.x/get-started/hash-rings/</link><pubDate>Thu, 09 Apr 2026 02:28:18 +0000</pubDate><guid>https://grafana.com/docs/loki/v3.7.x/get-started/hash-rings/</guid><content><![CDATA[&lt;h1 id=&#34;consistent-hash-rings&#34;&gt;Consistent hash rings&lt;/h1&gt;
&lt;p&gt;&lt;a href=&#34;https://en.wikipedia.org/wiki/Consistent_hashing&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Consistent hash rings&lt;/a&gt;
are incorporated into Loki cluster architectures to&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;aid in the sharding of log lines&lt;/li&gt;
&lt;li&gt;implement high availability&lt;/li&gt;
&lt;li&gt;ease the horizontal scaling of clusters up and down, reducing the
performance impact of operations that must rebalance data&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Hash rings connect instances of a single type of component when&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;there are a set of Loki instances in monolithic deployment mode&lt;/li&gt;
&lt;li&gt;there are multiple read components or multiple write components in
simple scalable deployment mode&lt;/li&gt;
&lt;li&gt;there are multiple instances of one type of component in microservices mode&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Not all Loki components are connected by hash rings.
These components need to be connected into a hash ring:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;distributors&lt;/li&gt;
&lt;li&gt;ingesters&lt;/li&gt;
&lt;li&gt;query schedulers&lt;/li&gt;
&lt;li&gt;compactors&lt;/li&gt;
&lt;li&gt;rulers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These components can optionally be connected into a hash ring:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;index gateway&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In an architecture that defines three distributors and three ingesters,
each hash ring connects the instances of a single component type.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;../ring-overview.png&#34;
  alt=&#34;Distributor and ingester rings&#34;/&gt;&lt;/p&gt;
&lt;p&gt;Each node in the ring represents an instance of a component.
Each node has a key-value store that holds communication information
for each of the nodes in that ring.
Nodes update the key-value store periodically to keep the contents consistent
across all nodes.
For each node, the key-value store holds:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;an ID of the component node&lt;/li&gt;
&lt;li&gt;component address, used by other nodes as a communication channel&lt;/li&gt;
&lt;li&gt;an indication of the component node&amp;rsquo;s health&lt;/li&gt;
&lt;/ul&gt;
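&lt;p&gt;The general technique can be sketched in a few lines of Python. This is a toy illustration of consistent hashing, not Loki&amp;rsquo;s actual ring implementation; the node names and token scheme are invented for the example:&lt;/p&gt;

```python
# Toy consistent hash ring: maps keys (for example, log stream labels)
# to node IDs. Illustrative only -- Loki's real ring registers tokens
# in a shared key-value store, not this exact scheme.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, tokens_per_node=4):
        self._ring = []  # sorted list of (token, node_id)
        for node in nodes:
            for i in range(tokens_per_node):
                token = self._hash(f"{node}-{i}")
                bisect.insort(self._ring, (token, node))

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, key: str) -> str:
        # Walk clockwise to the first token at or after the key's hash,
        # wrapping around to the start of the ring if necessary.
        idx = bisect.bisect(self._ring, (self._hash(key),)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["ingester-0", "ingester-1", "ingester-2"])
stream = '{job="apache",env="dev"}'
print(ring.owner(stream))  # one of the three ingester names
```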
&lt;h2 id=&#34;configuring-rings&#34;&gt;Configuring rings&lt;/h2&gt;
&lt;p&gt;Define 
    &lt;a href=&#34;/docs/loki/v3.7.x/configure/#common&#34;&gt;ring configuration&lt;/a&gt; within the &lt;code&gt;common.ring&lt;/code&gt; block.&lt;/p&gt;
&lt;p&gt;Use the &lt;code&gt;memberlist&lt;/code&gt; key-value store type unless there is
a compelling reason to use a different key-value store type.
&lt;code&gt;memberlist&lt;/code&gt; uses a &lt;a href=&#34;https://en.wikipedia.org/wiki/Gossip_protocol&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;gossip protocol&lt;/a&gt;
to propagate information to all the nodes
to guarantee the eventual consistency of the key-value store contents.&lt;/p&gt;
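&lt;p&gt;A minimal configuration sketch using &lt;code&gt;memberlist&lt;/code&gt; might look like the following; the join address is a placeholder for your own deployment&amp;rsquo;s memberlist service:&lt;/p&gt;

```yaml
common:
  ring:
    kvstore:
      store: memberlist
memberlist:
  join_members:
    # Placeholder address; point this at your deployment's memberlist service.
    - loki-memberlist.example.svc.cluster.local:7946
```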
&lt;p&gt;There are additional configuration options for distributor rings,
ingester rings, and ruler rings.
These options are for advanced, specialized use only.
These options are defined within the &lt;code&gt;distributor.ring&lt;/code&gt; block for distributors,
the &lt;code&gt;ingester.lifecycler.ring&lt;/code&gt; block for ingesters,
and the &lt;code&gt;ruler.ring&lt;/code&gt; block for rulers.&lt;/p&gt;
&lt;h2 id=&#34;about-the-distributor-ring&#34;&gt;About the distributor ring&lt;/h2&gt;
&lt;p&gt;Distributors use the information in their key-value store
to count the number of distributors in the distributor ring.
This count informs cluster-wide limits; for example, a global ingestion
rate limit can be divided evenly across the distributors.&lt;/p&gt;
&lt;h2 id=&#34;about-the-ingester-ring&#34;&gt;About the ingester ring&lt;/h2&gt;
&lt;p&gt;Ingester ring information in the key-value stores is used by distributors.
The information lets the distributors shard log lines,
determining which ingester or set of ingesters a distributor sends log lines to.&lt;/p&gt;
&lt;h2 id=&#34;about-the-query-scheduler-ring&#34;&gt;About the query scheduler ring&lt;/h2&gt;
&lt;p&gt;Query schedulers use the information in their key-value store
for service discovery of the schedulers.
This allows queriers to connect to all available schedulers,
and it allows schedulers to connect to all available query frontends,
effectively creating a single queue that aids in balancing the query load.&lt;/p&gt;
&lt;h2 id=&#34;about-the-compactor-ring&#34;&gt;About the compactor ring&lt;/h2&gt;
&lt;p&gt;Compactors use the information in the key-value store to elect
a single compactor instance that is responsible for compaction.
Although the compactor target may run on multiple instances,
compaction runs only on the responsible instance.&lt;/p&gt;
&lt;h2 id=&#34;about-the-ruler-ring&#34;&gt;About the ruler ring&lt;/h2&gt;
&lt;p&gt;The ruler ring is used to determine which rulers evaluate which rule groups.&lt;/p&gt;
&lt;h2 id=&#34;about-the-index-gateway-ring&#34;&gt;About the index gateway ring&lt;/h2&gt;
&lt;p&gt;The index gateway ring is used to determine which gateway is responsible for which tenant&amp;rsquo;s indexes when queried by rulers or queriers.&lt;/p&gt;
]]></content><description>&lt;h1 id="consistent-hash-rings">Consistent hash rings&lt;/h1>
&lt;p>&lt;a href="https://en.wikipedia.org/wiki/Consistent_hashing" target="_blank" rel="noopener noreferrer">Consistent hash rings&lt;/a>
are incorporated into Loki cluster architectures to&lt;/p>
&lt;ul>
&lt;li>aid in the sharding of log lines&lt;/li>
&lt;li>implement high availability&lt;/li>
&lt;li>ease the horizontal scale up and scale down of clusters.
There is less of a performance hit for operations that must rebalance data.&lt;/li>
&lt;/ul>
&lt;p>Hash rings connect instances of a single type of component when&lt;/p></description></item></channel></rss>