Grafana Alloy is the new name for our distribution of the OTel collector. Grafana Agent has been deprecated and is in Long-Term Support (LTS) through October 31, 2025. Grafana Agent will reach End-of-Life (EOL) on November 1, 2025. Read more about why we recommend migrating to Grafana Alloy.

Important: This documentation is about an older version. It's relevant only to the release noted; many of the features and functions have been updated or replaced. Please view the current version.

The prometheus.exporter.kafka component embeds kafka_exporter for collecting metrics from a Kafka server.


prometheus.exporter.kafka "LABEL" {
    kafka_uris = KAFKA_URI_LIST
}


Arguments

You can use the following arguments to configure the exporter’s behavior. Omitted fields take their default values.

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `kafka_uris` | `array(string)` | Address array (host:port) of the Kafka server. | | yes |
| `instance` | `string` | The `instance` label for metrics; the default is the host:port of the first entry in `kafka_uris`. You must manually provide the instance value if there is more than one string in `kafka_uris`. | | no |
| `use_sasl` | `bool` | Connect using SASL/PLAIN. | | no |
| `use_sasl_handshake` | `bool` | Only set this to false if using a non-Kafka SASL proxy. | `false` | no |
| `sasl_username` | `string` | SASL user name. | | no |
| `sasl_password` | `string` | SASL user password. | | no |
| `sasl_mechanism` | `string` | The SASL SCRAM SHA algorithm, `sha256` or `sha512`, to use as the mechanism. | | no |
| `use_tls` | `bool` | Connect using TLS. | | no |
| `ca_file` | `string` | The optional certificate authority file for TLS client authentication. | | no |
| `cert_file` | `string` | The optional certificate file for TLS client authentication. | | no |
| `key_file` | `string` | The optional key file for TLS client authentication. | | no |
| `insecure_skip_verify` | `bool` | If set to true, the server’s certificate will not be checked for validity. This makes your HTTPS connections insecure. | | no |
| `kafka_version` | `string` | Kafka broker version. | `2.0.0` | no |
| `use_zookeeper_lag` | `bool` | If set to true, use a group from ZooKeeper. | | no |
| `zookeeper_uris` | `array(string)` | Address array (hosts) of the ZooKeeper server. | | no |
| `kafka_cluster_name` | `string` | Kafka cluster name. | | no |
| `metadata_refresh_interval` | `duration` | Metadata refresh interval. | `1m` | no |
| `allow_concurrency` | `bool` | If set to true, all scrapes trigger Kafka operations. Otherwise, they will share results. WARNING: Disable this on large clusters. | `true` | no |
| `max_offsets` | `int` | The maximum number of offsets to store in the interpolation table for a partition. | `1000` | no |
| `prune_interval_seconds` | `int` | How frequently the interpolation table should be pruned, in seconds. | `30` | no |
| `topics_filter_regex` | `string` | Regex filter for topics to be monitored. | `.*` | no |
| `groups_filter_regex` | `string` | Regex filter for consumer groups to be monitored. | `.*` | no |
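For instance, connecting to a broker that requires SASL/SCRAM authentication over TLS might look like the following sketch. The broker address, credentials, and CA file path are placeholders, not values from this documentation:

```alloy
// Hypothetical example: SASL/SCRAM-SHA-512 over TLS.
// Replace the broker address, credentials, and CA path with your own.
prometheus.exporter.kafka "secured" {
  kafka_uris     = ["broker-1.example.com:9093"]
  use_sasl       = true
  sasl_mechanism = "sha512"
  sasl_username  = "SASL_USERNAME"
  sasl_password  = "SASL_PASSWORD"
  use_tls        = true
  ca_file        = "/etc/alloy/kafka-ca.pem"
}
```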

Exported fields

The following fields are exported and can be referenced by other components.

| Name | Type | Description |
| ---- | ---- | ----------- |
| `targets` | `list(map(string))` | The targets that can be used to collect exporter metrics. |

For example, the targets can either be passed to a discovery.relabel component to rewrite the targets’ label sets or to a prometheus.scrape component that collects the exposed metrics.
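As a sketch of the relabeling path, the exporter's targets could be routed through discovery.relabel before being scraped. The component labels, the `cluster` label, and the `prod-kafka` value below are illustrative assumptions, and the example assumes a prometheus.exporter.kafka "example" component and a prometheus.remote_write "demo" component exist elsewhere in the configuration:

```alloy
// Hypothetical example: attach a static "cluster" label to every target.
discovery.relabel "kafka" {
  targets = prometheus.exporter.kafka.example.targets

  rule {
    target_label = "cluster"
    replacement  = "prod-kafka"
  }
}

// Scrape the relabeled targets instead of the raw exporter targets.
prometheus.scrape "kafka" {
  targets    = discovery.relabel.kafka.output
  forward_to = [prometheus.remote_write.demo.receiver]
}
```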

The exported targets use the configured in-memory traffic address specified by the run command.

Component health

prometheus.exporter.kafka is only reported as unhealthy if given an invalid configuration. In those cases, exported fields retain their last healthy values.

Debug information

prometheus.exporter.kafka does not expose any component-specific debug information.

Debug metrics

prometheus.exporter.kafka does not expose any component-specific debug metrics.


Example

This example uses a prometheus.scrape component to collect metrics from prometheus.exporter.kafka:

prometheus.exporter.kafka "example" {
  kafka_uris = ["localhost:9092"]
}

// Configure a prometheus.scrape component to send metrics to.
prometheus.scrape "demo" {
  targets    = prometheus.exporter.kafka.example.targets
  forward_to = [prometheus.remote_write.demo.receiver]
}

prometheus.remote_write "demo" {
  endpoint {
    url = PROMETHEUS_REMOTE_WRITE_URL

    basic_auth {
      username = USERNAME
      password = PASSWORD
    }
  }
}
Replace the following:

  • PROMETHEUS_REMOTE_WRITE_URL: The URL of the Prometheus remote_write-compatible server to send metrics to.
  • USERNAME: The username to use for authentication to the remote_write API.
  • PASSWORD: The password to use for authentication to the remote_write API.