Kafka integration for Grafana Cloud

According to Kafka’s official web page, “Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.” Kafka is the most widely used event streaming platform, and its ecosystem includes a wide variety of components for data governance, querying, batch processing, and connectors.

This integration is based on the Monitoring your event streams: Integrating Confluent with Prometheus and Grafana blog post from Confluent. The integration provides dashboards for your Kafka broker clusters, ZooKeeper clusters, Kafka Connect clusters, Schema Registry clusters, and ksqlDB clusters, along with a dedicated dashboard for topic information and consumption lag.

Configuring the JMX Exporters

Most of the dashboards rely on data collected by a JMX Exporter running as a Java agent alongside the JVM of each of your Kafka components.

The JMX Exporter GitHub page is well documented; refer to it for details on configuring your Kafka JVMs.

This integration uses one JMX Exporter for each monitored Kafka component, and it has been tested with version 0.12.0 of the exporter.
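As an illustration, the exporter is typically attached to a component’s JVM with the `-javaagent` flag. The paths, port, and file names below are assumptions for a broker, not values taken from this integration:

```shell
# Hypothetical paths and port -- adjust for your installation.
# The agent exposes broker JMX metrics to Prometheus on port 7001.
export KAFKA_OPTS="-javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent-0.12.0.jar=7001:/opt/jmx_exporter/kafka_broker.yml"
bin/kafka-server-start.sh config/server.properties
```

The same pattern applies to ZooKeeper, Kafka Connect, Schema Registry, and ksqlDB, each with its own rule file and port.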

Here are the config files we used for this example:
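The exact files are not reproduced here, but as a sketch, a minimal JMX Exporter rule file for a broker could look like the following; the patterns and metric names are illustrative assumptions, not the integration’s actual rules:

```yaml
# Illustrative JMX Exporter config for a Kafka broker -- not the exact file
# used by this integration. Rules map JMX MBean attributes to Prometheus metrics.
lowercaseOutputName: true
rules:
  # e.g. kafka.server<type=BrokerTopicMetrics, name=MessagesInPerSec><>Count
  - pattern: kafka.server<type=(.+), name=(.+)><>Count
    name: kafka_server_$1_$2_total
    type: COUNTER
  # Catch-all for remaining Value attributes
  - pattern: kafka.server<type=(.+), name=(.+)><>Value
    name: kafka_server_$1_$2
```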

Configuring the Grafana Agent

The consumption lag dashboard is fed by an external exporter, which is embedded in the Grafana Agent for ease of use. Use the latest version of the Grafana Agent to enable it.

For more information, see the reference documentation on configuring the Agent with the Kafka exporter integration.
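As a sketch, enabling the embedded Kafka exporter in the Agent’s configuration file might look like the following; the broker address is an assumption for a local setup:

```yaml
# Illustrative Grafana Agent config enabling the embedded kafka_exporter
# integration; point kafka_uris at your own brokers.
integrations:
  kafka_exporter:
    enabled: true
    kafka_uris:
      - localhost:9092
```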

If you modify the Agent’s ConfigMap, you will need to restart the Agent Pod to pick up configuration changes. Use kubectl rollout to restart the Agent:

$ kubectl rollout restart deployment/grafana-agent