prometheus.exporter.kafka
General availability (GA) Open source

The `prometheus.exporter.kafka` component embeds the `kafka_exporter` for collecting metrics from a Kafka server.

Usage

```alloy
prometheus.exporter.kafka "<LABEL>" {
    kafka_uris = <KAFKA_URI_LIST>
}
```

Arguments

You can use the following arguments with `prometheus.exporter.kafka`:

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| `kafka_uris` | `array(string)` | Address array (`host:port`) of the Kafka server. | | yes |
| `allow_auto_topic_creation` | `bool` | If `true`, the broker may auto-create topics that you request but that don't already exist. | | no |
| `allow_concurrency` | `bool` | If `true`, every scrape triggers Kafka operations. Otherwise, scrapes share results. WARNING: Disable this on large clusters. | `true` | no |
| `ca_file` | `string` | The optional certificate authority file for TLS client authentication. | | no |
| `cert_file` | `string` | The optional certificate file for TLS client authentication. | | no |
| `groups_exclude_regex` | `string` | Regex that determines which consumer groups to exclude. | `^$` | no |
| `groups_filter_regex` | `string` | Regex filter for consumer groups to monitor. | `.*` | no |
| `gssapi_kerberos_auth_type` | `string` | Kerberos auth type, either `keytabAuth` or `userAuth`. | | no |
| `gssapi_kerberos_config_path` | `string` | Kerberos configuration path. | | no |
| `gssapi_key_tab_path` | `string` | Kerberos keytab file path. | | no |
| `gssapi_realm` | `string` | Kerberos realm. | | no |
| `gssapi_service_name` | `string` | Service name when using Kerberos authorization. | | no |
| `insecure_skip_verify` | `bool` | If `true`, the server's certificate isn't checked for validity. This makes your HTTPS connections insecure. | | no |
| `instance` | `string` | The `instance` label for metrics. Defaults to the `host:port` of the first entry in `kafka_uris`. You must set `instance` manually if `kafka_uris` contains more than one string. | | no |
| `kafka_cluster_name` | `string` | Kafka cluster name. | | no |
| `kafka_version` | `string` | Kafka broker version. | `2.0.0` | no |
| `key_file` | `string` | The optional key file for TLS client authentication. | | no |
| `max_offsets` | `int` | The maximum number of offsets to store in the interpolation table for a partition. | `1000` | no |
| `metadata_refresh_interval` | `duration` | Metadata refresh interval. | `1m` | no |
| `offset_show_all` | `bool` | If `true`, show the offset and lag for all consumer groups. Otherwise, only show connected consumer groups. | `true` | no |
| `prune_interval_seconds` | `int` | Deprecated (no-op). Use `metadata_refresh_interval` instead. | `30` | no |
| `sasl_disable_pafx_fast` | `bool` | Configure the Kerberos client to not use PA_FX_FAST. | | no |
| `sasl_mechanism` | `string` | The SASL SCRAM SHA algorithm to use as the mechanism, `SHA256` or `SHA512`. | | no |
| `sasl_password` | `string` | SASL user password. | | no |
| `sasl_username` | `string` | SASL user name. | | no |
| `tls_server_name` | `string` | Used to verify the hostname on the returned certificates unless `insecure_skip_verify` is set. If you don't provide the Kafka server name, the hostname is taken from the URL. | | no |
| `topic_workers` | `int` | Minimum number of topics to monitor. | `100` | no |
| `topics_exclude_regex` | `string` | Regex that determines which topics to exclude. | `^$` | no |
| `topics_filter_regex` | `string` | Regex filter for topics to monitor. | `.*` | no |
| `use_sasl_handshake` | `bool` | Only set this to `false` if you're using a non-Kafka SASL proxy. | `true` | no |
| `use_sasl` | `bool` | Connect using SASL/PLAIN. | | no |
| `use_tls` | `bool` | Connect using TLS. | | no |
| `use_zookeeper_lag` | `bool` | If `true`, use a group from ZooKeeper. | | no |
| `zookeeper_uris` | `array(string)` | Address array (hosts) of the ZooKeeper server. | | no |
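
As an illustration, a configuration that combines several of these arguments to connect over TLS with SASL/PLAIN authentication might look like the following sketch. The broker addresses, file paths, and credentials are hypothetical placeholders, not defaults:

```alloy
prometheus.exporter.kafka "secured" {
  kafka_uris = ["broker-1.example.com:9093", "broker-2.example.com:9093"]

  // Required when kafka_uris has more than one entry.
  instance = "my-kafka-cluster"

  // TLS settings (paths are examples).
  use_tls         = true
  ca_file         = "/etc/alloy/kafka-ca.pem"
  tls_server_name = "broker-1.example.com"

  // SASL/PLAIN authentication.
  use_sasl      = true
  sasl_username = "alloy"
  sasl_password = "<SASL_PASSWORD>"
}
```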

Blocks

The `prometheus.exporter.kafka` component doesn't support any blocks. You can configure this component with arguments.

Exported fields

The following fields are exported and can be referenced by other components.

| Name | Type | Description |
|------|------|-------------|
| `targets` | `list(map(string))` | The targets that can be used to collect exporter metrics. |

For example, the targets can either be passed to a `discovery.relabel` component to rewrite the targets' label sets, or to a `prometheus.scrape` component that collects the exposed metrics.

The exported targets use the configured in-memory traffic address specified by the run command.
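
As a sketch of the relabeling path, the exported targets could pass through a `discovery.relabel` component before scraping. The `cluster` label and its `dev` value are hypothetical examples, and the snippet assumes a `prometheus.remote_write` component named `demo` exists elsewhere in the configuration:

```alloy
prometheus.exporter.kafka "example" {
  kafka_uris = ["localhost:9092"]
}

// Attach a static label to every exported target before scraping.
discovery.relabel "kafka" {
  targets = prometheus.exporter.kafka.example.targets

  rule {
    target_label = "cluster"
    replacement  = "dev"
  }
}

prometheus.scrape "kafka" {
  targets    = discovery.relabel.kafka.output
  forward_to = [prometheus.remote_write.demo.receiver]
}
```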

Component health

`prometheus.exporter.kafka` is only reported as unhealthy if given an invalid configuration. In those cases, exported fields retain their last healthy values.

Debug information

prometheus.exporter.kafka doesn’t expose any component-specific debug information.

Debug metrics

prometheus.exporter.kafka doesn’t expose any component-specific debug metrics.

Example

This example uses a `prometheus.scrape` component to collect metrics from `prometheus.exporter.kafka`:

```alloy
prometheus.exporter.kafka "example" {
  kafka_uris = ["localhost:9092"]
}

// Configure a prometheus.scrape component to collect the exposed metrics.
prometheus.scrape "demo" {
  targets    = prometheus.exporter.kafka.example.targets
  forward_to = [prometheus.remote_write.demo.receiver]
}

prometheus.remote_write "demo" {
  endpoint {
    url = "<PROMETHEUS_REMOTE_WRITE_URL>"

    basic_auth {
      username = "<USERNAME>"
      password = "<PASSWORD>"
    }
  }
}
```

Replace the following:

  • `<PROMETHEUS_REMOTE_WRITE_URL>`: The URL of the Prometheus `remote_write`-compatible server to send metrics to.
  • <USERNAME>: The username to use for authentication to the remote_write API.
  • <PASSWORD>: The password to use for authentication to the remote_write API.

Compatible components

`prometheus.exporter.kafka` has exports that can be consumed by the following components:

Note

Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.