otelcol.processor.batch

otelcol.processor.batch accepts telemetry data from other otelcol components and places it into batches.
Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data.
This processor supports both size-based and time-based batching.
Grafana Labs strongly recommends that you configure the batch processor on every Alloy instance that uses OpenTelemetry (otelcol) components.
Define the batch processor in the pipeline after the otelcol.processor.memory_limiter component and after any sampling processors.
Batching should happen after any processing that drops data, such as sampling.
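The recommended ordering can be sketched as follows. This is a minimal, illustrative pipeline: the memory_limiter and tail_sampling arguments and all component labels are assumptions, not prescribed values.

```alloy
// Receivers send data to the memory limiter first, so back-pressure applies early.
otelcol.processor.memory_limiter "default" {
  check_interval = "1s"
  limit          = "512MiB"

  output {
    traces = [otelcol.processor.tail_sampling.default.input]
  }
}

// Sampling drops data before it reaches the batch processor.
otelcol.processor.tail_sampling "default" {
  policy {
    name = "sample-all"
    type = "always_sample"
  }

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

// Batching happens last, just before the exporter.
otelcol.processor.batch "default" {
  output {
    traces = [otelcol.exporter.otlp.production.input]
  }
}
```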
Note
otelcol.processor.batch is a wrapper over the upstream OpenTelemetry Collector batch processor. Bug reports or feature requests will be redirected to the upstream repository, if necessary.
You can specify multiple otelcol.processor.batch
components by giving them different labels.
Usage
otelcol.processor.batch "<LABEL>" {
  output {
    metrics = [...]
    logs    = [...]
    traces  = [...]
  }
}
Arguments
You can use the following arguments with otelcol.processor.batch:

Name                        Type          Description                                                              Default  Required
timeout                     duration      How long to wait before flushing the batch.                              "200ms"  no
send_batch_size             number        Amount of data to buffer before flushing the batch.                      8192     no
send_batch_max_size         number        Upper limit of the batch size. 0 means no upper limit.                   0        no
metadata_keys               list(string)  Creates a different batcher for each key/value combination of metadata.  []       no
metadata_cardinality_limit  number        Limit of the number of unique combinations of metadata key values.       1000     no
otelcol.processor.batch accumulates data into a batch until one of the following events happens:

- The duration specified by timeout elapses since the time the last batch was sent.
- The number of spans, log records, or metric data points processed reaches or exceeds the number specified by send_batch_size.

send_batch_size acts as a trigger threshold, not the exact batch size. When data arrives in large chunks, the actual batch size may exceed send_batch_size unless you configure send_batch_max_size to enforce an upper limit.
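As a sketch, the following configuration pairs the trigger threshold with a hard upper limit; the values and the exporter label are illustrative assumptions:

```alloy
otelcol.processor.batch "limited" {
  // Flush once roughly 5,000 items have accumulated...
  send_batch_size = 5000
  // ...but never emit more than 6,000 items in a single batch.
  send_batch_max_size = 6000

  output {
    traces = [otelcol.exporter.otlp.production.input]
  }
}
```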
Logs, traces, and metrics are processed independently.
For example, if send_batch_size is set to 1000:

- The processor may simultaneously buffer 1,000 spans, 1,000 log records, and 1,000 metric data points before flushing them.
- If there are enough spans for a batch of spans (for example, 1,000 or more) but not enough metric data points for a batch (fewer than 1,000), only the spans are flushed.
Use send_batch_max_size to limit the amount of data contained in a single batch:

- When set to 0, batches can be any size.
- When set to a non-zero value, send_batch_max_size must be greater than or equal to send_batch_size. Every batch contains at most send_batch_max_size spans, log records, or metric data points. Excess spans, log records, or metric data points aren't lost; instead, they're added to the next batch.
For example, assume you set send_batch_size to the default 8192 and there are 8,000 batched spans.
If the batch processor receives 8,000 more spans at once, its behavior depends on how you configure send_batch_max_size:

- If you set send_batch_max_size to 0, the total batch size is 16,000 spans, which are then flushed as a single batch.
- If you set send_batch_max_size to 10000, the batch is limited to 10,000 spans, and the processor adds the remaining 6,000 spans to the next batch.

This demonstrates how send_batch_size acts as a trigger while send_batch_max_size enforces the actual maximum batch size.
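The second scenario above corresponds to a configuration like this sketch; the exporter label is an assumption:

```alloy
otelcol.processor.batch "default" {
  // Default trigger threshold.
  send_batch_size = 8192
  // Hard cap per batch; the overflow rolls into the next batch.
  send_batch_max_size = 10000

  output {
    traces = [otelcol.exporter.otlp.production.input]
  }
}
```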
metadata_cardinality_limit
applies for the lifetime of the process.
Configure receivers with include_metadata = true
so that metadata keys are available to the processor.
Each distinct combination of metadata values triggers the allocation of a background task in the Alloy process that runs for the lifetime of the process, and each background task holds one pending batch of up to send_batch_size telemetry items, such as spans, log records, or metric data points.
Batching by metadata can therefore substantially increase the amount of memory dedicated to batching.
The maximum number of distinct combinations is limited to the configured metadata_cardinality_limit
, which defaults to 1000 to limit memory impact.
Blocks
You can use the following blocks with otelcol.processor.batch:

Block          Description                                                                  Required
output         Configures where to send received telemetry data.                            yes
debug_metrics  Configures the metrics that this component generates to monitor its state.  no
output

Required

The output block configures a set of components to forward resulting telemetry data to.

The following arguments are supported:

Name     Type                    Description                            Default  Required
metrics  list(otelcol.Consumer)  List of consumers to send metrics to.  []       no
logs     list(otelcol.Consumer)  List of consumers to send logs to.     []       no
traces   list(otelcol.Consumer)  List of consumers to send traces to.   []       no

You must specify the output block, but all its arguments are optional.
By default, telemetry data is dropped.
Configure the metrics, logs, and traces arguments accordingly to send telemetry data to other components.
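For example, this sketch forwards only traces and deliberately drops metrics and logs; the exporter label is an assumption:

```alloy
otelcol.processor.batch "traces_only" {
  output {
    // metrics and logs are omitted, so those signals are dropped.
    traces = [otelcol.exporter.otlp.production.input]
  }
}
```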
debug_metrics
The debug_metrics block configures the metrics that this component generates to monitor its state.

The following arguments are supported:

Name                              Type     Description                                           Default  Required
disable_high_cardinality_metrics  boolean  Whether to disable certain high-cardinality metrics.  true     no
disable_high_cardinality_metrics
is the Alloy equivalent to the telemetry.disableHighCardinalityMetrics
feature gate in the OpenTelemetry Collector.
It removes attributes that could cause high cardinality metrics.
For example, attributes with IP addresses and port numbers in metrics about HTTP and gRPC connections are removed.
Note

If configured, disable_high_cardinality_metrics only applies to otelcol.exporter.* and otelcol.receiver.* components.
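A minimal sketch of the block's syntax, shown on a receiver because, per the note above, the setting only takes effect on exporter and receiver components; the labels are assumptions:

```alloy
otelcol.receiver.otlp "default" {
  grpc {}

  debug_metrics {
    // Drop high-cardinality attributes such as IP addresses and port numbers.
    disable_high_cardinality_metrics = true
  }

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}
```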
Exported fields
The following fields are exported and can be referenced by other components:

Name   Type              Description
input  otelcol.Consumer  A value that other components can use to send telemetry data to.

input accepts otelcol.Consumer data for any telemetry signal, including metrics, logs, or traces.
Component health
otelcol.processor.batch
is only reported as unhealthy if given an invalid configuration.
Debug information
otelcol.processor.batch
doesn’t expose any component-specific debug information.
Debug metrics
- otelcol_processor_batch_batch_send_size_bytes (histogram): Number of bytes in each sent batch.
- otelcol_processor_batch_batch_send_size (histogram): Number of units in the batch.
- otelcol_processor_batch_batch_size_trigger_send_total (counter): Number of times a batch was sent due to a size trigger.
- otelcol_processor_batch_metadata_cardinality (gauge): Number of distinct metadata value combinations processed.
- otelcol_processor_batch_timeout_trigger_send_total (counter): Number of times a batch was sent due to a timeout trigger.
Examples
Basic usage
This example batches telemetry data before sending it to otelcol.exporter.otlp
for further processing:
otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.otlp.production.input]
    logs    = [otelcol.exporter.otlp.production.input]
    traces  = [otelcol.exporter.otlp.production.input]
  }
}

otelcol.exporter.otlp "production" {
  client {
    endpoint = sys.env("OTLP_SERVER_ENDPOINT")
  }
}
Batching with a timeout
This example buffers up to 10,000 spans, log records, or metric data points for up to 10 seconds.
Because send_batch_max_size
isn’t set and defaults to 0, the actual batch size may exceed 10,000 if large amounts of data arrive simultaneously.
otelcol.processor.batch "default" {
  timeout         = "10s"
  send_batch_size = 10000

  output {
    metrics = [otelcol.exporter.otlp.production.input]
    logs    = [otelcol.exporter.otlp.production.input]
    traces  = [otelcol.exporter.otlp.production.input]
  }
}

otelcol.exporter.otlp "production" {
  client {
    endpoint = sys.env("OTLP_SERVER_ENDPOINT")
  }
}
Metadata-based batching
Batching by metadata enables support for multi-tenant OpenTelemetry pipelines with batching over groups of data having the same authorization metadata.
otelcol.receiver.jaeger "default" {
  protocols {
    grpc {
      include_metadata = true
    }
    thrift_http {}
    thrift_binary {}
    thrift_compact {}
  }

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  // Batch data by tenant ID.
  metadata_keys = ["tenant_id"]
  // Allow up to 123 distinct tenant_id values before raising errors.
  metadata_cardinality_limit = 123

  output {
    traces = [otelcol.exporter.otlp.production.input]
  }
}

otelcol.exporter.otlp "production" {
  client {
    endpoint = sys.env("OTLP_SERVER_ENDPOINT")
  }
}
Compatible components
otelcol.processor.batch can accept arguments from the following components:

- Components that export OpenTelemetry otelcol.Consumer

otelcol.processor.batch has exports that can be consumed by the following components:

- Components that consume OpenTelemetry otelcol.Consumer
Note
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.