---
title: "otelcol.receiver.filelog | Grafana Alloy documentation"
description: "Learn about otelcol.receiver.filelog"
---

# `otelcol.receiver.filelog`

> **Public preview**: This is a [public preview](/docs/release-life-cycle/) component. Public preview components are subject to breaking changes, and may be replaced with equivalent functionality that covers the same use case. To enable and use a public preview component, you must set the `stability.level` [flag](/docs/alloy/latest/reference/cli/run/) to `public-preview` or below.

`otelcol.receiver.filelog` reads log entries from files and forwards them to other `otelcol.*` components.

> Note
> 
> `otelcol.receiver.filelog` is a wrapper over the upstream OpenTelemetry Collector [`filelog`](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.147.0/receiver/filelogreceiver) receiver. Bug reports or feature requests will be redirected to the upstream repository, if necessary.

You can specify multiple `otelcol.receiver.filelog` components by giving them different labels.
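
For example, a configuration might run two receivers side by side, each watching a different directory. The labels and paths below are illustrative:

```alloy
// Two independent instances, distinguished by their labels.
otelcol.receiver.filelog "app" {
  include = ["/var/log/app/*.log"]
  output {
    logs = [otelcol.exporter.debug.default.input]
  }
}

otelcol.receiver.filelog "audit" {
  include = ["/var/log/audit/*.log"]
  output {
    logs = [otelcol.exporter.debug.default.input]
  }
}

otelcol.exporter.debug "default" {}
```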

## Usage

```alloy
otelcol.receiver.filelog "<LABEL>" {
  include = [...]
  output {
    logs    = [...]
  }
}
```

## Arguments

You can use the following arguments with `otelcol.receiver.filelog`:

| Name                            | Type                       | Description                                                                                | Default   | Required |
|---------------------------------|----------------------------|--------------------------------------------------------------------------------------------|-----------|----------|
| `include`                       | `list(string)`             | A list of glob patterns to include files.                                                  |           | yes      |
| `acquire_fs_lock`               | `bool`                     | Whether to acquire a file system lock while reading the file (Unix only).                  | `false`   | no       |
| `attributes`                    | `map(string)`              | A map of attributes to add to each log entry.                                              | `{}`      | no       |
| `compression`                   | `string`                   | The compression type used for the log file.                                                | `""`      | no       |
| `delete_after_read`             | `bool`                     | Whether to delete the file after reading.                                                  | `false`   | no       |
| `encoding`                      | `string`                   | The encoding of the log file.                                                              | `"utf-8"` | no       |
| `exclude_older_than`            | `duration`                 | Exclude files with a modification time older than the specified duration.                  | `"0s"`    | no       |
| `exclude`                       | `list(string)`             | A list of glob patterns to exclude files that would be included by the `include` patterns. | `[]`      | no       |
| `fingerprint_size`              | `units.Base2Bytes`         | The size of the fingerprint used to detect file changes.                                   | `1KiB`    | no       |
| `force_flush_period`            | `duration`                 | The period after which logs are flushed even if the buffer isn’t full.                     | `"500ms"` | no       |
| `include_file_name_resolved`    | `bool`                     | Whether to include the resolved filename in the log entry.                                 | `false`   | no       |
| `include_file_name`             | `bool`                     | Whether to include the filename in the log entry.                                          | `true`    | no       |
| `include_file_owner_group_name` | `bool`                     | Whether to include the file owner’s group name in the log entry.                           | `false`   | no       |
| `include_file_owner_name`       | `bool`                     | Whether to include the file owner’s name in the log entry.                                 | `false`   | no       |
| `include_file_path_resolved`    | `bool`                     | Whether to include the resolved file path in the log entry.                                | `false`   | no       |
| `include_file_path`             | `bool`                     | Whether to include the file path in the log entry.                                         | `false`   | no       |
| `include_file_record_number`    | `bool`                     | Whether to include the file record number in the log entry.                                | `false`   | no       |
| `max_batches`                   | `int`                      | The maximum number of batches to process concurrently.                                     | `10`      | no       |
| `max_concurrent_files`          | `int`                      | The maximum number of files to read concurrently.                                          | `10`      | no       |
| `max_log_size`                  | `units.Base2Bytes`         | The maximum size of a log entry.                                                           | `1MiB`    | no       |
| `operators`                     | `list(map(string))`        | A list of operators used to parse the log entries.                                         | `[]`      | no       |
| `poll_interval`                 | `duration`                 | The interval at which the file is polled for new entries.                                  | `"200ms"` | no       |
| `preserve_leading_whitespaces`  | `bool`                     | Preserves leading whitespace in messages when set to `true`.                               | `false`   | no       |
| `preserve_trailing_whitespaces` | `bool`                     | Preserves trailing whitespace in messages when set to `true`.                              | `false`   | no       |
| `resource`                      | `map(string)`              | A map of resource attributes to associate with each log entry.                             | `{}`      | no       |
| `start_at`                      | `string`                   | The position to start reading the file from.                                               | `"end"`   | no       |
| `storage`                       | `capsule(otelcol.Handler)` | Handler from an `otelcol.storage` component to use for persisting state.                   |           | no       |

`encoding` must be one of `utf-8`, `utf8-raw`, `utf-16le`, `utf-16be`, `ascii`, `big5`, or `nop`. Refer to the upstream receiver [documentation](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.147.0/receiver/filelogreceiver/README.md#supported-encodings) for more details.

`start_at` must be one of `beginning` or `end`. The `header` block may only be used if `start_at` is `beginning`.

`compression` must be one of `""`, `gzip`, or `auto`. `auto` automatically detects the compression type of each file and ingests its data accordingly. Currently, only gzip-compressed files are auto-detected. This allows a mix of compressed and uncompressed files to be ingested by the same receiver.

To persist state between restarts of the Alloy process, set the `storage` attribute to the `handler` exported from an `otelcol.storage.*` component.
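
For example, the following sketch ingests a mix of compressed and uncompressed files and persists read offsets across restarts. The `otelcol.storage.file` component and its `directory` attribute are used here as an assumption for illustration; check the `otelcol.storage.*` component documentation for the exact names:

```alloy
// Assumed storage component that exports a `handler` for persisting state.
otelcol.storage.file "state" {
  directory = "/var/lib/alloy/filelog-state"
}

otelcol.receiver.filelog "default" {
  include     = ["/var/log/app/*.log", "/var/log/app/*.log.gz"]
  compression = "auto"
  storage     = otelcol.storage.file.state.handler
  output {
    logs = [otelcol.exporter.debug.default.input]
  }
}

otelcol.exporter.debug "default" {}
```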

### `operators`

The `operators` list is a list of stanza [operators](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.147.0/pkg/stanza/docs/operators/README.md#what-operators-are-available) that transform the log entries after they have been read.

For example, if you collect container logs, you can use the stanza `container` parser operator to add relevant attributes to the log entries.

```alloy
otelcol.receiver.filelog "default" {
    ...
    operators = [
      {
        type = "container"
      }
    ]
}
```

## Blocks

You can use the following blocks with `otelcol.receiver.filelog`:

| Block                           | Description                                                                        | Required |
|---------------------------------|------------------------------------------------------------------------------------|----------|
| `output`                        | Configures where to send received telemetry data.                                  | yes      |
| `debug_metrics`                 | Configures the metrics that this component generates to monitor its state.         | no       |
| `header`                        | Configures parsing of a log header line into attributes added to each log entry.   | no       |
| `multiline`                     | Configures logic for splitting incoming log entries.                               | no       |
| `ordering_criteria`             | Configures the order in which discovered log files are processed.                  | no       |
| `ordering_criteria` > `sort_by` | Configures how fields parsed in `ordering_criteria` sort the discovered log files. | no       |
| `retry_on_failure`              | Configures retry behavior on downstream pipeline errors.                           | no       |

### `output`

Required

The `output` block configures a set of components to forward resulting telemetry data to.

The following arguments are supported:

| Name      | Type                     | Description                           | Default | Required |
|-----------|--------------------------|---------------------------------------|---------|----------|
| `logs`    | `list(otelcol.Consumer)` | List of consumers to send logs to.    | `[]`    | no       |
| `metrics` | `list(otelcol.Consumer)` | List of consumers to send metrics to. | `[]`    | no       |
| `traces`  | `list(otelcol.Consumer)` | List of consumers to send traces to.  | `[]`    | no       |

You must specify the `output` block, but all its arguments are optional. By default, telemetry data is dropped. Configure the `metrics`, `logs`, and `traces` arguments accordingly to send telemetry data to other components.
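
For example, logs can be fanned out to more than one consumer at once. The downstream component labels below are hypothetical:

```alloy
otelcol.receiver.filelog "default" {
  include = ["/var/log/*.log"]
  output {
    // Each log entry is sent to every listed consumer.
    logs = [
      otelcol.processor.batch.default.input,
      otelcol.exporter.debug.default.input,
    ]
  }
}
```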

### `debug_metrics`

The `debug_metrics` block configures the metrics that this component generates to monitor its state.

The following arguments are supported:

| Name                               | Type      | Description                                          | Default | Required |
|------------------------------------|-----------|------------------------------------------------------|---------|----------|
| `disable_high_cardinality_metrics` | `boolean` | Whether to disable certain high cardinality metrics. | `true`  | no       |

`disable_high_cardinality_metrics` is the Alloy equivalent to the `telemetry.disableHighCardinalityMetrics` feature gate in the OpenTelemetry Collector. It removes attributes that could cause high cardinality metrics. For example, attributes with IP addresses and port numbers in metrics about HTTP and gRPC connections are removed.

> Note
> 
> If configured, `disable_high_cardinality_metrics` only applies to `otelcol.exporter.*` and `otelcol.receiver.*` components.
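
For example, to keep the high-cardinality metrics that are suppressed by default:

```alloy
otelcol.receiver.filelog "default" {
  include = ["/var/log/*.log"]
  debug_metrics {
    // Re-enable metrics attributes such as IP addresses and port numbers.
    disable_high_cardinality_metrics = false
  }
  output {
    logs = [otelcol.exporter.debug.default.input]
  }
}

otelcol.exporter.debug "default" {}
```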

### `header`

The `header` block configures logic for parsing a log header line into additional attributes added to each log entry. It may only be used when `start_at` is set to `beginning`. The following arguments are supported:

| Name                 | Type                | Description                                                 | Default | Required |
|----------------------|---------------------|-------------------------------------------------------------|---------|----------|
| `metadata_operators` | `list(map(string))` | A list of operators used to parse metadata from the header. |         | yes      |
| `pattern`            | `string`            | A regular expression that matches the header line.          |         | yes      |

If a `header` block isn’t set, no log lines will be treated as header metadata.

The `metadata_operators` list is a list of stanza [operators](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.147.0/pkg/stanza/docs/operators/README.md#what-operators-are-available) that parse metadata from the header. Any attributes created by the embedded operator pipeline are applied to all log entries in the file.

For example, you might use a `regex_parser` to process a header line that has been identified by the `pattern` expression. The following example shows a fictitious header line, and then the `header` block that would parse an `environment` attribute from it.

```text
HEADER_IDENTIFIER env="production"
...
```

```alloy
otelcol.receiver.filelog "default" {
    ...
    header {
      pattern = '^HEADER_IDENTIFIER .*$'
      metadata_operators = [
        {
          type = "regex_parser"
          regex = 'env="(?P<environment>.+)"'
        }
      ]
    }
}
```

### `multiline`

The `multiline` block configures logic for splitting incoming log entries. The following arguments are supported:

| Name                 | Type     | Description                                                     | Default | Required |
|----------------------|----------|-----------------------------------------------------------------|---------|----------|
| `line_end_pattern`   | `string` | A regular expression that matches the end of a log entry.       |         | yes*     |
| `line_start_pattern` | `string` | A regular expression that matches the beginning of a log entry. |         | yes*     |
| `omit_pattern`       | `bool`   | Omit the start/end pattern from the split log entries.          | `false` | no       |

A `multiline` block must contain either `line_start_pattern` or `line_end_pattern`.

If a `multiline` block isn’t set, log entries won’t be split.
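
For example, a `multiline` block can group stack traces with the log line that precedes them by treating a leading date as the start of a new entry. The pattern below is an illustrative sketch:

```alloy
otelcol.receiver.filelog "default" {
  include = ["/var/log/app/*.log"]
  multiline {
    // A new entry starts with a date such as "2024-01-31"; continuation
    // lines (for example, stack trace frames) are appended to it.
    line_start_pattern = "^\\d{4}-\\d{2}-\\d{2}"
  }
  output {
    logs = [otelcol.exporter.debug.default.input]
  }
}

otelcol.exporter.debug "default" {}
```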

### `ordering_criteria`

The `ordering_criteria` block configures the order in which discovered log files are processed. The following arguments are supported:

| Name       | Type     | Description                                                                            | Default | Required |
|------------|----------|----------------------------------------------------------------------------------------|---------|----------|
| `group_by` | `string` | A named capture group from the `regex` attribute used to group files before sorting.   | `""`    | no       |
| `regex`    | `string` | A regular expression to capture elements of log files to use in ordering calculations. | `""`    | no       |
| `top_n`    | `int`    | The number of top log files to track when using file ordering.                         | `1`     | no       |

### `sort_by`

The `sort_by` block is repeatable and configures how the fields parsed in the `ordering_criteria` block are used to sort the discovered log files. The following arguments are supported:

| Name        | Type     | Description                                                                  | Default | Required |
|-------------|----------|------------------------------------------------------------------------------|---------|----------|
| `sort_type` | `string` | The type of sorting to apply.                                                |         | yes      |
| `ascending` | `bool`   | Whether to sort in ascending order.                                          | `true`  | no       |
| `layout`    | `string` | The layout of the timestamp to be parsed from a named `regex` capture group. | `""`    | no       |
| `location`  | `string` | The location of the timestamp.                                               | `"UTC"` | no       |
| `regex_key` | `string` | The named capture group from the `regex` attribute to use for sorting.       | `""`    | no       |

`sort_type` must be one of `numeric`, `lexicographic`, `timestamp`, or `mtime`. When using `numeric`, `lexicographic`, or `timestamp` `sort_type`, a named capture group defined in the `regex` attribute in `ordering_criteria` must be provided in `regex_key`. When using `mtime` `sort_type`, the file’s modified time will be used to sort.

The `location` and `layout` arguments are only applicable when `sort_type` is `timestamp`.

The `location` argument specifies a Time Zone identifier. The available locations depend on the local IANA Time Zone database. Refer to the [list of tz database time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) in Wikipedia for a non-comprehensive list.
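
For example, rotated files such as `app.log.1`, `app.log.2` can be tracked and processed in numeric order. The file names and pattern below are illustrative:

```alloy
otelcol.receiver.filelog "default" {
  include = ["/var/log/app.log*"]
  ordering_criteria {
    // Capture the rotation index as a named group for sorting.
    regex = "app\\.log\\.(?P<rotation>\\d+)"
    top_n = 5
    sort_by {
      sort_type = "numeric"
      regex_key = "rotation"
      ascending = false
    }
  }
  output {
    logs = [otelcol.exporter.debug.default.input]
  }
}

otelcol.exporter.debug "default" {}
```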

### `retry_on_failure`

The `retry_on_failure` block configures the retry behavior when the receiver encounters an error downstream in the pipeline. A backoff algorithm is used to delay the retry upon subsequent failures. The following arguments are supported:

| Name               | Type       | Description                                                                                                               | Default | Required |
|--------------------|------------|---------------------------------------------------------------------------------------------------------------------------|---------|----------|
| `enabled`          | `bool`     | If set to `true` and an error occurs, the receiver will pause reading the log files and resend the current batch of logs. | `false` | no       |
| `initial_interval` | `duration` | The time to wait after first failure to retry.                                                                            | `"1s"`  | no       |
| `max_elapsed_time` | `duration` | The maximum age of a message before the data is discarded.                                                                | `"5m"`  | no       |
| `max_interval`     | `duration` | The maximum time to wait after applying backoff logic.                                                                    | `"30s"` | no       |

If `max_elapsed_time` is set to `0`, data is never discarded.
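
For example, to retry on downstream errors with backoff and never discard data:

```alloy
otelcol.receiver.filelog "default" {
  include = ["/var/log/*.log"]
  retry_on_failure {
    enabled          = true
    initial_interval = "1s"
    max_interval     = "30s"
    max_elapsed_time = "0s" // Never discard data.
  }
  output {
    logs = [otelcol.exporter.debug.default.input]
  }
}

otelcol.exporter.debug "default" {}
```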

## Exported fields

`otelcol.receiver.filelog` doesn’t export any fields.

## Component health

`otelcol.receiver.filelog` is only reported as unhealthy if given an invalid configuration.

## Debug metrics

`otelcol.receiver.filelog` doesn’t expose any component-specific debug metrics.

## Example

This example reads log entries with the `otelcol.receiver.filelog` receiver and logs them with an `otelcol.exporter.debug` component. It expects each log line to start with an ISO 8601-compatible timestamp, which it parses with the `regex_parser` operator.

```alloy
otelcol.receiver.filelog "default" {
  include = ["/var/log/*.log"]
  operators = [{
    type = "regex_parser",
    regex = "^(?P<timestamp>\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d{3,6}Z)",
    timestamp = {
      parse_from = "attributes.timestamp",
      layout = "%Y-%m-%dT%H:%M:%S.%fZ",
      location = "UTC",
    },
  }]
  output {
      logs = [otelcol.exporter.debug.default.input]
  }
}

otelcol.exporter.debug "default" {}
```

## Compatible components

`otelcol.receiver.filelog` can accept arguments from the following components:

- Components that export [OpenTelemetry `otelcol.Consumer`](../../../compatibility/#opentelemetry-otelcolconsumer-exporters)

> Note
> 
> Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.
