This is documentation for the next version of Loki. For the latest stable release, go to the latest version.


Fluent Bit client

Fluent Bit is a fast and lightweight logs and metrics processor and forwarder that can be configured with the Grafana Fluent Bit Plugin described here or with the Fluent-bit Loki output plugin to ship logs to Loki. This plugin has more configuration options compared to the built-in Fluent Bit Loki plugin. You can define which log files you want to collect using the Tail or Stdin data pipeline input. Additionally, Fluent Bit supports multiple Filter and Parser plugins (Kubernetes, JSON, etc.) to structure and alter log lines.



You can run a Fluent Bit container with Loki output plugin pre-installed using our Docker Hub image:

docker run -v /var/log:/var/log \
    -e LOG_PATH="/var/log/*.log" -e LOKI_URL="http://localhost:3100/loki/api/v1/push" \
    grafana/fluent-bit-plugin-loki:latest

Alternatively, you can run the official fluent-bit container from its Docker Hub image, which ships with a built-in Loki output plugin.

Docker Container Logs

To ship logs from Docker containers to Grafana Cloud using Fluent Bit, you can use the Fluent Bit Docker image and configure it to forward logs directly to Grafana Cloud’s Loki. Below is a step-by-step guide on setting up Fluent Bit for this purpose.

Prerequisites:


  • Docker is installed on your machine.
  • You have a Grafana Cloud account with access to Loki.


  1. Create a Fluent Bit configuration file named fluent-bit.conf with the following content, which defines the input from Docker container logs and sets up the output to send logs to your Grafana Cloud Loki instance:

        [SERVICE]
            Flush        1
            Log_Level    info

        [INPUT]
            Name     tail
            Path     /var/lib/docker/containers/*/*.log
            Parser   docker
            Tag      docker.*

        [OUTPUT]
            Name         loki
            Match        *
            Host         <your-grafana-cloud-loki-host>
            Port         443
            TLS          On
            TLS.Verify   On
            HTTP_User    478625
            HTTP_Passwd  <your-grafana-cloud-api-key>
            Labels       job=fluentbit


You can run Fluent Bit as a DaemonSet to collect all your Kubernetes workload logs.

To do so, use the Fluent Bit Helm chart with the following values.yaml, changing the value of FLUENT_LOKI_URL:

image:
  # Here we use the Docker image which has the plugin installed
  repository: grafana/fluent-bit-plugin-loki
  tag: main-e2ed1c0

args:
  - "-e"
  - "/fluent-bit/bin/"
  - --workdir=/fluent-bit/etc
  - --config=/fluent-bit/etc/conf/fluent-bit.conf

env:
  # Note that for security reasons you should fetch the credentials through a Kubernetes Secret. You may use envFrom for this.
  - name: FLUENT_LOKI_URL
    value: https://user:pass@your-loki.endpoint/loki/api/v1/push

config:
  inputs: |
    [INPUT]
        Name tail
        Tag kube.*
        Path /var/log/containers/*.log
        # Be aware that local clusters like docker-desktop or kind use the docker log format and not the cri format
        multiline.parser docker, cri
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

  outputs: |
    [OUTPUT]
        Name grafana-loki
        Match kube.*
        Url ${FLUENT_LOKI_URL}
        Labels {job="fluent-bit"}
        LabelKeys level,app # this sets the values for actual Loki streams; the other labels are converted to structured metadata
        BatchWait 1
        BatchSize 1001024
        LineFormat json
        LogLevel info
        AutoKubernetesLabels true
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit -f values.yaml

By default it will collect all container logs and extract labels from the Kubernetes API (container_name, namespace, and so on).

If you also want to host your Loki instance inside the cluster, install the official Loki Helm chart.

AWS Elastic Container Service (ECS)

You can use the fluent-bit Loki Docker image as a FireLens log router in AWS ECS. For more information, see our AWS documentation.


Local

First, you need to follow the instructions to build the plugin dynamic library.

Then, assuming you have Fluent Bit installed in your $PATH, you can run the plugin using:

fluent-bit -e /path/to/built/ -c fluent-bit.conf

You can also adapt your plugins.conf, removing the need to change the command line options:

    [PLUGINS]
        Path /path/to/built/

Configuration Options

| Key | Description | Default |
| --- | --- | --- |
| Url | URL of the Loki server API endpoint. | http://localhost:3100/loki/api/v1/push |
| TenantID | The tenant ID used by default to push logs to Loki. If omitted or empty, it assumes Loki is running in single-tenant mode and no X-Scope-OrgID header is sent. | "" |
| BatchWait | Time to wait before sending a log batch to Loki, full or not. | 1s |
| BatchSize | Log batch size to send a log batch to Loki (unit: bytes). | 10 KiB (10 * 1024 bytes) |
| Timeout | Maximum time to wait for the Loki server to respond to a request. | 10s |
| MinBackoff | Initial backoff time between retries. | 500ms |
| MaxBackoff | Maximum backoff time between retries. | 5m |
| MaxRetries | Maximum number of retries when sending batches. Setting it to 0 will retry indefinitely. | 10 |
| Labels | Labels for API requests. | {job="fluent-bit"} |
| LogLevel | LogLevel for the plugin logger. | "info" |
| RemoveKeys | Specify keys to remove. | none |
| AutoKubernetesLabels | If set to true, it will add all Kubernetes labels to Loki labels. | false |
| LabelKeys | Comma-separated list of keys to use as stream labels. All other keys will be placed into the log line. LabelKeys is deactivated when using the LabelMapPath label mapping configuration. | none |
| LineFormat | Format to use when flattening the record to a log line. Valid values are "json" or "key_value". If set to "json", the log line sent to Loki will be the record (excluding any keys extracted out as labels) dumped as JSON. If set to "key_value", the log line will be each item in the record concatenated together (separated by a single space) in the format key=value. | json |
| DropSingleKey | If set to true and, after extracting label keys, a record only has a single key remaining, the log line sent to Loki will just be the value of that record key. | true |
| LabelMapPath | Path to a JSON file defining how to transform nested records. | none |
| Buffer | Enable the buffering mechanism. | false |
| BufferType | Specify the buffering mechanism to use (currently only dque is implemented). | dque |
| DqueDir | Path to the directory for queued logs. | /tmp/flb-storage/loki |
| DqueSegmentSize | Segment size in terms of number of records per segment. | 500 |
| DqueSync | Whether to fsync each queue change. Specify no fsync with "normal", and fsync with "full". | "normal" |
| DqueName | Queue name, must be unique per output. | dque |
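To illustrate LineFormat and DropSingleKey, here is a small Python sketch of the flattening behavior described in the table; the function is illustrative and mimics the documented semantics, not the plugin's actual Go implementation:

```python
import json

# Illustrative sketch of the LineFormat and DropSingleKey options described
# above; this mimics the documented behavior, not the plugin's Go code.
def format_line(record, line_format="json", drop_single_key=True):
    # DropSingleKey: if only one key remains after label extraction,
    # the log line is just that key's value.
    if drop_single_key and len(record) == 1:
        return str(next(iter(record.values())))
    if line_format == "json":
        # "json": the record (minus extracted labels) dumped as JSON.
        return json.dumps(record)
    # "key_value": items concatenated, space separated, as key=value pairs.
    return " ".join(f"{k}={v}" for k, v in record.items())

print(format_line({"log": "a log line", "stream": "stderr"}, "key_value"))
# log=a log line stream=stderr
print(format_line({"log": "only value"}))
# only value
```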


Labels

Labels are used to query logs, for example {container_name="nginx", cluster="us-west1"}. They are usually metadata about the workload producing the log stream (instance, container_name, region, cluster, level). In Loki, labels are indexed; consequently, you should be cautious when choosing them, because high-cardinality label values can have a drastic impact on performance.

You can use Labels, RemoveKeys, LabelKeys and LabelMapPath to control how the output plugin performs label extraction.


AutoKubernetesLabels

If set to true, the plugin adds all Kubernetes labels to Loki labels automatically and ignores the LabelKeys and LabelMapPath parameters.


LabelMap

When using the Parser and Filter plugins, Fluent Bit can extract and add data to the current record/log data. While Loki labels are key-value pairs, record data can be nested structures. You can pass a JSON file that defines how to extract labels from each record. Each JSON key from the file is matched against the log record to find label values; values from the configuration are used as label names.

Consider the record below:

  {
    "kubernetes": {
      "container_name": "promtail",
      "pod_name": "promtail-xxx",
      "namespace_name": "prod",
      "labels": {
        "team": "x-men"
      }
    },
    "HOSTNAME": "docker-desktop",
    "log": "a log line",
    "time": "20190926T152206Z"
  }

and a LabelMap file as follows:

  {
    "kubernetes": {
      "container_name": "container",
      "pod_name": "pod",
      "namespace_name": "namespace",
      "labels": {
        "team": "team"
      }
    }
  }

The labels extracted will be {team="x-men", container="promtail", pod="promtail-xxx", namespace="prod"}.

If you don’t want the kubernetes and HOSTNAME fields to appear in the log line you can use the RemoveKeys configuration field. (e.g. RemoveKeys kubernetes,HOSTNAME).
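The mapping above can be sketched in Python; this is an illustrative re-implementation of the documented LabelMapPath semantics, not the plugin's own code:

```python
# Illustrative sketch of LabelMapPath semantics, not the plugin's actual Go
# implementation: walk the label map and the record together; leaf values in
# the map name the Loki label, and matching record values supply the value.
def extract_labels(record, label_map):
    labels = {}
    for key, mapped in label_map.items():
        if key not in record:
            continue
        if isinstance(mapped, dict):
            # Recurse into nested structures such as "kubernetes".
            labels.update(extract_labels(record[key], mapped))
        else:
            labels[mapped] = record[key]
    return labels

# The record and LabelMap file from the example above.
record = {
    "kubernetes": {
        "container_name": "promtail",
        "pod_name": "promtail-xxx",
        "namespace_name": "prod",
        "labels": {"team": "x-men"},
    },
    "HOSTNAME": "docker-desktop",
    "log": "a log line",
    "time": "20190926T152206Z",
}
label_map = {
    "kubernetes": {
        "container_name": "container",
        "pod_name": "pod",
        "namespace_name": "namespace",
        "labels": {"team": "team"},
    },
}
print(extract_labels(record, label_map))
# {'container': 'promtail', 'pod': 'promtail-xxx', 'namespace': 'prod', 'team': 'x-men'}
```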


Buffering

Buffering refers to the ability to store records somewhere and, while they are processed and delivered, still be able to store more. The Loki output plugin can be blocked by the Loki client because of its design:

  • If the BatchSize is over the limit, the output plugin pauses receiving new records until the pending batch is successfully sent to the server.
  • If the Loki server is unreachable (it retries on 429s, 500s and connection-level errors), the output plugin blocks new records until the Loki server is available again and the pending batch is successfully sent, or until the maximum number of retries is reached under the configured back-off mechanism.

This blocking state is not acceptable with some input plugins, because it can have undesirable side effects on the component that generates the logs. Fluent Bit implements a buffering mechanism that is based on parallel processing and therefore cannot send logs in order. There are two ways of handling out-of-order logs:

  • Configure Loki to accept out-of-order writes.

  • Configure the Loki output plugin to use the buffering mechanism based on dque, which is compatible with the Loki server strict time ordering:

        [OUTPUT]
            Name grafana-loki
            Match *
            Url http://localhost:3100/loki/api/v1/push
            Buffer true
            DqueSegmentSize 8096
            DqueDir /tmp/flb-storage/buffer
            DqueName loki.0
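The retry behavior mentioned earlier is governed by MinBackoff, MaxBackoff and MaxRetries (defaults 500ms, 5m and 10). A rough Python sketch of the resulting schedule follows; plain doubling without jitter is an assumption here, and the plugin's exact strategy may differ:

```python
# Sketch of the exponential back-off implied by the MinBackoff/MaxBackoff/
# MaxRetries defaults; the doubling-without-jitter strategy is an assumption.
MIN_BACKOFF = 0.5    # seconds (500ms default)
MAX_BACKOFF = 300.0  # seconds (5m default)
MAX_RETRIES = 10     # default retry count

def backoff_schedule():
    delay = MIN_BACKOFF
    for _attempt in range(MAX_RETRIES):
        yield min(delay, MAX_BACKOFF)
        delay *= 2  # double until the cap is reached

print([round(d, 1) for d in backoff_schedule()])
# [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0, 256.0]
```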

Configuration examples

To configure the Loki output plugin, add this section to fluent-bit.conf:

    [OUTPUT]
        Name grafana-loki
        Match *
        Url http://localhost:3100/loki/api/v1/push
        BatchWait 1s
        BatchSize 30720 # (30KiB)
        Labels {test="fluent-bit-go", lang="Golang"}
        RemoveKeys key1,key2
        LabelKeys key3,key4
        LineFormat key_value

Alternatively, with AutoKubernetesLabels:

    [OUTPUT]
        Name grafana-loki
        Match *
        Url http://localhost:3100/loki/api/v1/push
        BatchWait 1s
        BatchSize 30720 # (30KiB)
        AutoKubernetesLabels true
        RemoveKeys key1,key2

A full example configuration file is also available in the Loki repository.

Running multiple plugin instances

You can run multiple plugin instances in the same fluent-bit process, for example if you want to push to different Loki servers or route logs into different Loki tenant IDs. To do so, add additional [OUTPUT] sections.
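As a sketch, here are two [OUTPUT] sections routing different tags to different tenants; the URLs, tags and tenant IDs are placeholder values:

```
    [OUTPUT]
        Name     grafana-loki
        Match    kube.*
        Url      http://loki-a:3100/loki/api/v1/push
        TenantID team-a

    [OUTPUT]
        Name     grafana-loki
        Match    docker.*
        Url      http://loki-b:3100/loki/api/v1/push
        TenantID team-b
```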