This document describes known failure modes of Promtail in edge cases and the trade-offs adopted to handle them.
Promtail can be configured to print log stream entries instead of sending them to Loki. This can be used in combination with piping data to debug or troubleshoot Promtail log parsing.
In dry run mode, Promtail still supports reading from a positions file, but the file is never updated; this ensures you can easily retry the same set of lines.
To start Promtail in dry run mode, use the --dry-run flag as shown in the example below:
cat my.log | promtail --stdin --dry-run --client.url http://127.0.0.1:3100/loki/api/v1/push
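Any command that writes to standard output can be piped the same way; for instance, to check parsing against a file as it grows (a sketch using the same my.log and local Loki URL as above):
tail -f my.log | promtail --stdin --dry-run --client.url http://127.0.0.1:3100/loki/api/v1/push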
Inspecting pipeline stages
Promtail can output all changes to log entries as each pipeline stage is executed. Each log entry contains four fields:
- line
- timestamp
- labels
- extracted fields
Enable the inspection output using the --inspect command-line option. The --inspect option can be used in combination with --dry-run, for example:
cat my.log | promtail --stdin --dry-run --inspect --client.url http://127.0.0.1:3100/loki/api/v1/push
The output uses color to highlight changes. Additions are in green, modifications in yellow, and removals in red.
If no changes are applied during a stage, that is usually an indication of a misconfiguration or undesired behavior.
The --inspect flag should not be used in production, as the calculation of changes between pipeline stages negatively impacts Promtail’s performance.
Pipe data to Promtail
Promtail supports piping data for sending logs to Loki (via the --stdin flag). This is a very useful way to troubleshoot your configuration.
Once you have Promtail installed, you can, for instance, use the following command to send logs to a local Loki instance:
cat my.log | promtail --stdin --client.url http://127.0.0.1:3100/loki/api/v1/push
You can also add extra labels from the command line using:
cat my.log | promtail --stdin --client.url http://127.0.0.1:3100/loki/api/v1/push --client.external-labels=k1=v1,k2=v2
This will add the labels k1 and k2 with the respective values v1 and v2.
In pipe mode, Promtail also supports file configuration using --config.file; however, note that the positions config is not used and only the first scrape config is applied.
static_configs: can be used to provide static labels, although the targets property is ignored.
If you don’t provide any scrape_config, a default one is used which automatically adds a set of default labels.
For example, you could use the config below to parse and add the label level to all your piped logs:
clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    pipeline_stages:
      - regex:
          expression: '(level|lvl|severity)=(?P<level>\w+)'
      - labels:
          level:
    static_configs:
      - labels:
          job: my-stdin-logs
cat my.log | promtail --stdin --config.file promtail.yaml
A tailed file is truncated while Promtail is not running
Given the following order of events:
- Promtail is tailing /app.log
- Promtail’s current position for /app.log is 100 (bytes)
- Promtail is stopped
- /app.log is truncated and new logs are appended to it
- Promtail is restarted
When Promtail is restarted, it reads the previous position (100) from the positions file. Two scenarios are then possible:
- /app.log size is less than the position before truncating
- /app.log size is greater than or equal to the position before truncating
If the /app.log file size is less than the previous position, then the file is detected as truncated and logs will be tailed starting from position 0.
Otherwise, if the /app.log file size is greater than or equal to the previous position, Promtail can’t detect that it was truncated while not running and will continue tailing the file from position 100.
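A minimal shell sketch of this sequence, assuming a Promtail instance configured (via a hypothetical config.yaml) to tail /app.log:
# Promtail has been tailing /app.log, recorded position 100, and is then stopped
truncate -s 0 /app.log                  # truncate the file while Promtail is down
echo "new log line" >> /app.log         # append fresh logs
promtail --config.file config.yaml      # restart: truncation is detected only if
                                        # the file is now smaller than 100 bytes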
Generally speaking, Promtail uses only the file path as the key in the positions file. Whenever Promtail is started, for each file path referenced in the positions file, Promtail reads the file from the beginning if the file size is less than the offset stored in the positions file; otherwise it continues from the offset, regardless of whether the file has been truncated or rolled multiple times while Promtail was not running.
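For reference, the positions file maps each file path to its last read byte offset; an illustrative snippet:
positions:
  /app.log: "100"
  /var/log/syslog: "24567"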
Loki is unavailable
For each tailed file, Promtail reads a line, processes it through the configured pipeline_stages and pushes the log entry to Loki. Log entries are batched together before getting pushed to Loki, based on the max batch duration client.batch-wait and max batch size client.batch-size-bytes, whichever comes first.
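The same limits can be set in the clients section of the config file; a minimal sketch with illustrative values (batchwait and batchsize are the config-file counterparts of the flags above):
clients:
  - url: http://127.0.0.1:3100/loki/api/v1/push
    batchwait: 1s       # flush a batch at least once per second
    batchsize: 1048576  # or as soon as it reaches roughly 1 MiB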
In case of any error while sending a batch of log entries, Promtail adopts a “retry then discard” strategy:
- Promtail retries sending the batch to the ingester up to max_retries times
- If all retries fail, Promtail discards the batch of log entries (which will be lost) and proceeds with the next one
You can configure max_retries and the delay between two retries via the backoff_config block in the Promtail config file:
clients:
  - url: INGESTER-URL
    backoff_config:
      min_period: 100ms
      max_period: 10s
      max_retries: 10
The following table shows an example of the total delay applied by the backoff algorithm, with min_period: 100ms, max_period: 10s and max_retries: 10:

| Retry | Min delay | Max delay | Total min delay | Total max delay |
|-------|-----------|-----------|-----------------|-----------------|
| 1     | 100ms     | 200ms     | 100ms           | 200ms           |
| 2     | 200ms     | 400ms     | 300ms           | 600ms           |
| 3     | 400ms     | 800ms     | 700ms           | 1.4s            |
| 4     | 800ms     | 1.6s      | 1.5s            | 3s              |
| 5     | 1.6s      | 3.2s      | 3.1s            | 6.2s            |
| 6     | 3.2s      | 6.4s      | 6.3s            | 12.6s           |
| 7     | 6.4s      | 10s       | 12.7s           | 22.6s           |
| 8     | 6.4s      | 10s       | 19.1s           | 32.6s           |
| 9     | 6.4s      | 10s       | 25.5s           | 42.6s           |
| 10    | 6.4s      | 10s       | 31.9s           | 52.6s           |
Log entries pushed after a Promtail crash / panic / abrupt termination
When Promtail shuts down gracefully, it saves the last read offsets in the positions file, so that on a subsequent restart it continues tailing logs without duplicates or losses.
In the event of a crash or abrupt termination, Promtail can’t save the last read offsets in the positions file. When restarted, Promtail reads the positions file saved at the last sync period and continues tailing the files from there. This means that if new log entries have been read and pushed to the ingester between the last sync period and the crash, these log entries will be sent again to the ingester when Promtail restarts.
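How often the offsets are synced is controlled by the positions block of the Promtail config; a minimal sketch (paths and values are illustrative, 10s being a common default):
positions:
  filename: /tmp/positions.yaml
  sync_period: 10s   # how often read offsets are flushed to the positions file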
If Loki is not configured to accept out-of-order writes, it will reject all log lines it perceives as out of order. If Promtail happens to crash, it may re-send log lines that were sent prior to the crash. The default behavior of Promtail is to assign a timestamp to logs at the time it reads the entry from the tailed file. This would result in duplicate log lines being sent to Loki; to avoid this issue, if your tailed file has a timestamp embedded in the log lines, a timestamp stage should be added to your pipeline.
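A minimal sketch of such a pipeline, assuming each line starts with an RFC3339 timestamp (the regex and the ts field name are illustrative):
pipeline_stages:
  - regex:
      expression: '^(?P<ts>\S+) '   # capture the leading timestamp token
  - timestamp:
      source: ts                    # read the timestamp from the extracted field
      format: RFC3339               # layout of the embedded timestamp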