otelcol.receiver.kafka

otelcol.receiver.kafka accepts telemetry data from a Kafka broker and forwards it to other otelcol.* components.

NOTE: otelcol.receiver.kafka is a wrapper over the upstream OpenTelemetry Collector kafka receiver from the otelcol-contrib distribution. Bug reports or feature requests will be redirected to the upstream repository, if necessary.

Multiple otelcol.receiver.kafka components can be specified by giving them different labels.

Usage

river
otelcol.receiver.kafka "LABEL" {
  brokers          = ["BROKER_ADDR"]
  protocol_version = "PROTOCOL_VERSION"

  output {
    metrics = [...]
    logs    = [...]
    traces  = [...]
  }
}

Arguments

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| brokers | array(string) | Kafka brokers to connect to. | | yes |
| protocol_version | string | Kafka protocol version to use. | | yes |
| topic | string | Kafka topic to read from. | "otlp_spans" | no |
| encoding | string | Encoding of payload read from Kafka. | "otlp_proto" | no |
| group_id | string | Consumer group to consume messages from. | "otel-collector" | no |
| client_id | string | Consumer client ID to use. | "otel-collector" | no |

The encoding argument determines how to decode messages read from Kafka. encoding must be one of the following strings:

  • "otlp_proto": Decode messages as OTLP protobuf.
  • "jaeger_proto": Decode messages as a single Jaeger protobuf span.
  • "jaeger_json": Decode messages as a single Jaeger JSON span.
  • "zipkin_proto": Decode messages as a list of Zipkin protobuf spans.
  • "zipkin_json": Decode messages as a list of Zipkin JSON spans.
  • "zipkin_thrift": Decode messages as a list of Zipkin Thrift spans.
  • "raw": Copy the message bytes into the body of a log record.

"otlp_proto" must be used to read all telemetry types from Kafka; other encodings are signal-specific.

Blocks

The following blocks are supported inside the definition of otelcol.receiver.kafka:

| Hierarchy | Block | Description | Required |
| --------- | ----- | ----------- | -------- |
| authentication | authentication | Configures authentication for connecting to Kafka brokers. | no |
| authentication > plaintext | plaintext | Authenticates against Kafka brokers with plaintext. | no |
| authentication > sasl | sasl | Authenticates against Kafka brokers with SASL. | no |
| authentication > sasl > aws_msk | aws_msk | Additional SASL parameters when using AWS_MSK_IAM. | no |
| authentication > tls | tls | Configures TLS for connecting to the Kafka brokers. | no |
| authentication > kerberos | kerberos | Authenticates against Kafka brokers with Kerberos. | no |
| metadata | metadata | Configures how to retrieve metadata from Kafka brokers. | no |
| metadata > retry | retry | Configures how to retry metadata retrieval. | no |
| autocommit | autocommit | Configures how to automatically commit updated topic offsets back to the Kafka brokers. | no |
| message_marking | message_marking | Configures when Kafka messages are marked as read. | no |
| output | output | Configures where to send received telemetry data. | yes |

The > symbol indicates deeper levels of nesting. For example, authentication > tls refers to a tls block defined inside an authentication block.

authentication block

The authentication block holds the definition of different authentication mechanisms to use when connecting to Kafka brokers. It doesn’t support any arguments and is configured fully through inner blocks.

plaintext block

The plaintext block configures PLAIN authentication against Kafka brokers.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| username | string | Username to use for PLAIN authentication. | | yes |
| password | secret | Password to use for PLAIN authentication. | | yes |
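For example, a minimal sketch of PLAIN authentication; the username and the KAFKA_PASSWORD environment variable are assumptions for illustration:

river
otelcol.receiver.kafka "plain_auth" {
  brokers          = ["localhost:9092"] // placeholder broker address
  protocol_version = "2.0.0"

  authentication {
    plaintext {
      username = "kafka-user"          // hypothetical username
      password = env("KAFKA_PASSWORD") // hypothetical environment variable holding the secret
    }
  }

  output {
    traces = [...]
  }
}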

sasl block

The sasl block configures SASL authentication against Kafka brokers.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| username | string | Username to use for SASL authentication. | | yes |
| password | secret | Password to use for SASL authentication. | | yes |
| mechanism | string | SASL mechanism to use when authenticating. | | yes |

The mechanism argument can be set to one of the following strings:

  • "PLAIN"
  • "AWS_MSK_IAM"
  • "SCRAM-SHA-256"
  • "SCRAM-SHA-512"

When mechanism is set to "AWS_MSK_IAM", the aws_msk child block must also be provided.
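For example, a minimal sketch of SASL authentication with the SCRAM-SHA-512 mechanism; the credentials shown are placeholders:

river
otelcol.receiver.kafka "sasl_auth" {
  brokers          = ["localhost:9092"] // placeholder broker address
  protocol_version = "2.0.0"

  authentication {
    sasl {
      username  = "kafka-user"          // hypothetical username
      password  = env("KAFKA_PASSWORD") // hypothetical environment variable holding the secret
      mechanism = "SCRAM-SHA-512"
    }
  }

  output {
    traces = [...]
  }
}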

aws_msk block

The aws_msk block configures extra parameters for SASL authentication when using the AWS_MSK_IAM mechanism.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| region | string | AWS region the MSK cluster is based in. | | yes |
| broker_addr | string | MSK address to connect to for authentication. | | yes |
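For example, a sketch of SASL authentication against an AWS MSK cluster. The region and broker address are placeholders, and the username and password values here are only stand-ins to satisfy the required sasl arguments, since AWS_MSK_IAM authenticates with IAM credentials from the AWS environment:

river
otelcol.receiver.kafka "msk_auth" {
  brokers          = ["b-1.example.kafka.us-east-1.amazonaws.com:9098"] // placeholder MSK broker address
  protocol_version = "2.0.0"

  authentication {
    sasl {
      username  = "unused" // placeholder; IAM credentials are used instead
      password  = "unused" // placeholder; IAM credentials are used instead
      mechanism = "AWS_MSK_IAM"

      aws_msk {
        region      = "us-east-1" // placeholder region
        broker_addr = "b-1.example.kafka.us-east-1.amazonaws.com:9098"
      }
    }
  }

  output {
    traces = [...]
  }
}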

tls block

The tls block configures TLS settings used for connecting to the Kafka brokers. If the tls block isn’t provided, TLS won’t be used for communication.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| ca_file | string | Path to the CA file. | | no |
| cert_file | string | Path to the TLS certificate. | | no |
| key_file | string | Path to the TLS certificate key. | | no |
| min_version | string | Minimum acceptable TLS version for connections. | "TLS 1.2" | no |
| max_version | string | Maximum acceptable TLS version for connections. | "TLS 1.3" | no |
| reload_interval | duration | Frequency to reload the certificates. | | no |
| client_ca_file | string | Path to the CA file used to authenticate client certificates. | | no |
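For example, a minimal sketch of a TLS connection with mutual authentication; the certificate paths are placeholders:

river
otelcol.receiver.kafka "tls_auth" {
  brokers          = ["localhost:9093"] // placeholder broker address for a TLS listener
  protocol_version = "2.0.0"

  authentication {
    tls {
      ca_file     = "/etc/ssl/certs/kafka-ca.pem" // hypothetical CA bundle path
      cert_file   = "/etc/ssl/certs/client.pem"   // hypothetical client certificate
      key_file    = "/etc/ssl/private/client.key" // hypothetical client key
      min_version = "TLS 1.2"
    }
  }

  output {
    traces = [...]
  }
}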

kerberos block

The kerberos block configures Kerberos authentication against the Kafka broker.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| service_name | string | Kerberos service name. | | no |
| realm | string | Kerberos realm. | | no |
| use_keytab | bool | Enables using keytab instead of password. | | no |
| username | string | Kerberos username to authenticate as. | | yes |
| password | secret | Kerberos password to authenticate with. | | no |
| config_file | string | Path to Kerberos location (for example, /etc/krb5.conf). | | no |
| keytab_file | string | Path to keytab file (for example, /etc/security/kafka.keytab). | | no |

When use_keytab is false, the password argument is required. When use_keytab is true, the file pointed to by the keytab_file argument is used for authentication instead. At most one of password or keytab_file must be provided.
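For example, a sketch of keytab-based Kerberos authentication; the realm, principal name, and file paths are placeholders:

river
otelcol.receiver.kafka "kerberos_auth" {
  brokers          = ["localhost:9092"] // placeholder broker address
  protocol_version = "2.0.0"

  authentication {
    kerberos {
      service_name = "kafka"
      realm        = "EXAMPLE.COM" // hypothetical realm
      username     = "otel"        // hypothetical principal name
      use_keytab   = true          // authenticate with the keytab instead of a password
      keytab_file  = "/etc/security/kafka.keytab"
      config_file  = "/etc/krb5.conf"
    }
  }

  output {
    traces = [...]
  }
}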

metadata block

The metadata block configures how to retrieve and store metadata from the Kafka broker.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| include_all_topics | bool | When true, maintains metadata for all topics. | true | no |

If the include_all_topics argument is true, otelcol.receiver.kafka maintains a full set of metadata for all topics rather than the minimal set of metadata needed so far. Keeping the full set of metadata is more convenient for users, but it can consume a substantial amount of memory if you have many topics and partitions.

Retrieving metadata may fail if the Kafka broker is starting up at the same time as the otelcol.receiver.kafka component. The retry child block can be provided to customize retry behavior.

retry block

The retry block configures how to retry retrieving metadata when retrieval fails.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| max_retries | number | How many times to reattempt retrieving metadata. | 3 | no |
| backoff | duration | Time to wait between retries. | "250ms" | no |
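For example, a sketch that skips full-topic metadata to save memory and loosens the retry behavior to tolerate a slow-starting broker; the values are illustrative only:

river
otelcol.receiver.kafka "tuned_metadata" {
  brokers          = ["localhost:9092"] // placeholder broker address
  protocol_version = "2.0.0"

  metadata {
    include_all_topics = false // keep only the metadata needed so far

    retry {
      max_retries = 10      // illustrative value: retry longer while the broker starts up
      backoff     = "500ms" // illustrative value: wait longer between attempts
    }
  }

  output {
    traces = [...]
  }
}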

autocommit block

The autocommit block configures how to automatically commit updated topic offsets back to the Kafka brokers.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| enable | bool | Enable autocommitting updated topic offsets. | true | no |
| interval | duration | How frequently to autocommit. | "1s" | no |
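For example, a sketch that commits offsets less frequently than the default; the interval is illustrative only:

river
otelcol.receiver.kafka "slow_commit" {
  brokers          = ["localhost:9092"] // placeholder broker address
  protocol_version = "2.0.0"

  autocommit {
    enable   = true
    interval = "5s" // illustrative value: commit every 5 seconds instead of every second
  }

  output {
    traces = [...]
  }
}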

message_marking block

The message_marking block configures when Kafka messages are marked as read.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| after_execution | bool | Mark messages after forwarding telemetry data to other components. | false | no |
| include_unsuccessful | bool | Whether failed forwards should be marked as read. | false | no |

By default, a Kafka message is marked as read immediately after it is retrieved from the Kafka broker. If the after_execution argument is true, messages are only marked as read after the telemetry data is forwarded to components specified in the output block.

When after_execution is true, messages are only marked as read when they are decoded successfully and the components the data was forwarded to did not return an error. If the include_unsuccessful argument is true, messages are marked as read even if decoding or forwarding failed. Setting include_unsuccessful has no effect if after_execution is false.

WARNING: Setting after_execution to true and include_unsuccessful to false can block the entire Kafka partition if message processing returns a permanent error, such as failing to decode.
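For example, a sketch that only marks messages as read after forwarding, while still marking failed messages so a permanently broken message can't block the partition; the trade-off is that failed messages aren't redelivered:

river
otelcol.receiver.kafka "careful_marking" {
  brokers          = ["localhost:9092"] // placeholder broker address
  protocol_version = "2.0.0"

  message_marking {
    after_execution      = true // mark only after forwarding to the output block
    include_unsuccessful = true // also mark failures, so a bad message can't block the partition
  }

  output {
    traces = [...]
  }
}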

output block

The output block configures a set of components to forward resulting telemetry data to.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| metrics | list(otelcol.Consumer) | List of consumers to send metrics to. | [] | no |
| logs | list(otelcol.Consumer) | List of consumers to send logs to. | [] | no |
| traces | list(otelcol.Consumer) | List of consumers to send traces to. | [] | no |

The output block must be specified, but all of its arguments are optional. By default, telemetry data is dropped. To send telemetry data to other components, configure the metrics, logs, and traces arguments accordingly.

Exported fields

otelcol.receiver.kafka does not export any fields.

Component health

otelcol.receiver.kafka is only reported as unhealthy if given an invalid configuration.

Debug information

otelcol.receiver.kafka does not expose any component-specific debug information.

Example

This example forwards telemetry data read from Kafka through a batch processor before sending it to an OTLP-capable endpoint:

river
otelcol.receiver.kafka "default" {
  brokers          = ["localhost:9092"]
  protocol_version = "2.0.0"

  output {
    metrics = [otelcol.processor.batch.default.input]
    logs    = [otelcol.processor.batch.default.input]
    traces  = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.otlp.default.input]
    logs    = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = env("OTLP_ENDPOINT")
  }
}