Kafka integration for Grafana Cloud
Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
This integration includes 14 useful alerts and 7 pre-built dashboards to help monitor and visualize Kafka metrics and logs.
Before you begin
For the integration to work, you must configure a JMX exporter on each instance that makes up your Kafka cluster, including all brokers, ZooKeeper nodes, ksqlDB servers, Schema Registry instances, and Kafka Connect nodes.
Each of these instances has its own JMX exporter configuration file; use the appropriate file for each respective Kafka component. For more details on how to configure your Kafka JVM with the JMX exporter, refer to the JMX Exporter documentation.
We strongly recommend that you configure a separate user for Grafana Alloy and grant it only the minimum security privileges necessary for monitoring your node.
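As a rough illustration (the jar location, port, and config file name here are hypothetical; adjust them to your installation), attaching the exporter to a Kafka broker JVM in javaagent mode typically looks like this, with port 7001 matching the scrape targets used later in this guide:

# Attach the JMX exporter as a Java agent before starting the broker
export KAFKA_OPTS="-javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent.jar=7001:/opt/jmx_exporter/kafka-broker.yml"
bin/kafka-server-start.sh config/server.properties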
Install Kafka integration for Grafana Cloud
- In your Grafana Cloud stack, click Connections in the left-hand menu.
- Find Kafka and click its tile to open the integration.
- Review the prerequisites in the Configuration Details tab and set up Grafana Alloy to send Kafka metrics and logs to your Grafana Cloud instance.
- Click Install to add this integration’s pre-built dashboards and alerts to your Grafana Cloud instance, and you can start monitoring your Kafka setup.
Configuration snippets for Grafana Alloy
Advanced mode
To instruct Grafana Alloy to scrape your Kafka nodes, follow the instructions below.
The snippets provide examples to guide you through the configuration process.
First, manually copy and append the following snippets into your Grafana Alloy configuration file.
Then follow the instructions below to modify the necessary variables.
Advanced metrics snippets
After enabling the JMX exporter on each node, instruct Grafana Alloy to scrape them.
One discovery.relabel component must be added for each node composing your cluster (Kafka Server, Schema Registry, ksqlDB, ZooKeeper, Kafka Connect) to avoid instance label conflicts.
Make sure to match the instance label name used in the exporter snippet for the Kafka Server nodes.
Configure the following properties within each discovery.relabel component:
- __address__: The address of your Kafka node.
- <your-instance-name>: The instance label for all metrics scraped from this Kafka node.
- <your-cluster-name>: The kafka_cluster label to group your Kafka nodes within a cluster. Set the same value for all nodes within your cluster.
Finally, reference each discovery.relabel component within the targets property of the prometheus.scrape component.
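As an illustrative sketch for a single Kafka broker (the address, the instance and cluster names, and the prometheus.remote_write.metrics_service component label are placeholders; substitute your own values and remote-write setup):

discovery.relabel "kafka_broker_1" {
  // One discovery.relabel per node, each with a unique instance label.
  targets = [{
    "__address__" = "kafka-node1:7001",
  }]

  rule {
    target_label = "instance"
    replacement  = "kafka-node1"
  }

  rule {
    target_label = "kafka_cluster"
    replacement  = "my-kafka-cluster"
  }
}

prometheus.scrape "kafka" {
  // Reference each discovery.relabel output in the scrape targets.
  targets    = discovery.relabel.kafka_broker_1.output
  job_name   = "integrations/kafka"
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

With several nodes, you can combine their outputs in one scrape component, for example targets = concat(discovery.relabel.kafka_broker_1.output, discovery.relabel.kafka_broker_2.output).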
Advanced integrations snippets
To monitor consumption lag, you must add a pair of prometheus.exporter.kafka and discovery.relabel components to your Grafana Alloy configuration file for each Kafka Server you monitor, to avoid instance label conflicts.
Configure the following property within the prometheus.exporter.kafka component:
- kafka_uris: The URI to connect to your Kafka Server node.
Refer to prometheus.exporter.kafka in the Grafana Alloy reference documentation for a complete description of the configuration options.
Configure the following properties within the discovery.relabel component:
- <your-instance-name>: Sets the instance label for all metrics from this Kafka Server node.
- <your-cluster-name>: Sets the kafka_cluster label to group your Kafka nodes within a cluster. Set the same value for all nodes within your cluster.
Reference each prometheus.exporter.kafka component's targets within its matching discovery.relabel component, then reference each discovery.relabel component's output within the targets property of the prometheus.scrape component.
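A minimal sketch for one Kafka Server (the URI mirrors the kafka-node1:9091 example used in the Agent configuration later in this guide; the names and the prometheus.remote_write.metrics_service label are placeholders):

prometheus.exporter.kafka "kafka_broker_1" {
  // One exporter per Kafka Server node.
  kafka_uris = ["kafka-node1:9091"]
}

discovery.relabel "kafka_exporter_broker_1" {
  // Attach identifying labels to the exporter's targets.
  targets = prometheus.exporter.kafka.kafka_broker_1.targets

  rule {
    target_label = "instance"
    replacement  = "kafka-node1"
  }

  rule {
    target_label = "kafka_cluster"
    replacement  = "my-kafka-cluster"
  }
}

prometheus.scrape "kafka_exporter" {
  targets    = discovery.relabel.kafka_exporter_broker_1.output
  job_name   = "integrations/kafka"
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}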
Advanced logs snippets
darwin
To monitor your Kafka brokers' logs, you will use a combination of the following components:
- local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:
  - __path__: The path to the log file.
  - instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this Kafka broker instance. Make sure this label value is the same for all telemetry data collected for this instance.
  - kafka_cluster label: Kafka cluster identifier.
- loki.process defines how to process logs before sending them to Loki.
- loki.source.file sends logs to Loki.
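A minimal sketch tying these components together (the log path is an assumption for macOS; the loki.write.logs_service label stands in for your own Loki write component; the multiline pattern matches the bracketed Kafka log timestamp):

local.file_match "kafka_logs" {
  path_targets = [{
    // Assumed macOS log location; point this at wherever your broker writes server.log.
    "__path__"      = "/usr/local/var/log/kafka/server.log",
    "instance"      = constants.hostname,
    "kafka_cluster" = "my-kafka-cluster",
  }]
}

loki.process "kafka_logs" {
  forward_to = [loki.write.logs_service.receiver]

  // Group stack traces with the log line that precedes them.
  stage.multiline {
    firstline = "^\\[(\\d+-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3})\\]"
  }
}

loki.source.file "kafka_logs" {
  targets    = local.file_match.kafka_logs.targets
  forward_to = [loki.process.kafka_logs.receiver]
}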
linux
To monitor your Kafka brokers' logs, you will use a combination of the following components:
- local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:
  - __path__: The path to the log file.
  - instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this Kafka broker instance. Make sure this label value is the same for all telemetry data collected for this instance.
  - kafka_cluster label: Kafka cluster identifier.
- loki.process defines how to process logs before sending them to Loki.
- loki.source.file sends logs to Loki.
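A minimal sketch of the local.file_match component, using the same /var/log/kafka/server.log path as the Agent example later in this guide; the loki.process and loki.source.file wiring is identical to the darwin example above:

local.file_match "kafka_logs" {
  path_targets = [{
    "__path__"      = "/var/log/kafka/server.log",
    "instance"      = constants.hostname, // or a value that uniquely identifies this broker
    "kafka_cluster" = "my-kafka-cluster",
  }]
}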
windows
To monitor your Kafka brokers' logs, you will use a combination of the following components:
- local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:
  - __path__: The path to the log file.
  - instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this Kafka broker instance. Make sure this label value is the same for all telemetry data collected for this instance.
  - kafka_cluster label: Kafka cluster identifier.
- loki.process defines how to process logs before sending them to Loki.
- loki.source.file sends logs to Loki.
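A minimal sketch of the local.file_match component (the Windows path is an assumption; adjust it to your installation); the loki.process and loki.source.file wiring is identical to the darwin example above:

local.file_match "kafka_logs" {
  path_targets = [{
    // Assumed install location; note the escaped backslashes.
    "__path__"      = "C:\\kafka\\logs\\server.log",
    "instance"      = constants.hostname,
    "kafka_cluster" = "my-kafka-cluster",
  }]
}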
Grafana Agent static configuration (deprecated)
The following section shows the configuration for running Grafana Agent in static mode, which is deprecated. You should use Grafana Alloy for all new deployments.
Before you begin
For the integration to work, you must configure a JMX exporter on each instance composing your Kafka cluster, including all brokers, ZooKeeper nodes, ksqlDB servers, Schema Registry instances, and Kafka Connect nodes.
Each of these instances has its own JMX exporter configuration file; use the appropriate file for each respective Kafka component. For more details on how to configure your Kafka JVM with the JMX exporter, refer to the JMX Exporter documentation.
Note that JVM metrics are only available if the JMX exporter is deployed in 'javaagent' mode, not 'http-server' mode.
We strongly recommend that you configure a separate user for the Agent and grant it only the minimum security privileges necessary for monitoring your node, as described in the documentation.
Install Kafka integration for Grafana Cloud
- In your Grafana Cloud stack, click Connections in the left-hand menu.
- Find Kafka and click its tile to open the integration.
- Review the prerequisites in the Configuration Details tab and set up Grafana Agent to send Kafka metrics and logs to your Grafana Cloud instance.
- Click Install to add this integration’s pre-built dashboards and alerts to your Grafana Cloud instance, and you can start monitoring your Kafka setup.
Post-install configuration for the Kafka integration
After enabling metrics generation, instruct Grafana Agent to scrape your Kafka nodes.
The JMX exporter exposes a /metrics endpoint. To scrape it, add the snippets below to your agent configuration file.
Make sure to change targets in the snippets according to your environment.
You also need to configure the kafka_cluster label in each snippet to be able to group your Kafka nodes within the dashboards and alerts.
If you want to monitor topic and consumer group stats as well as consumption lag, you need to enable the kafka_exporter, which is embedded in the Grafana Agent.
Enable it by adding the provided snippet to your agent configuration file.
For a full description of the configuration options, see the Grafana Agent configuration reference in the agent documentation.
For the best dashboards experience, and to see metrics and logs correlated, ensure the following:
- kafka_cluster and instance label values must match for kafka_exporter (integrations), metrics, and logs in the Agent configuration file.
- The instance label must be set to a value that uniquely identifies your Kafka broker node. It is set automatically by the config snippets.
Configuration snippets for Grafana Agent
Below integrations, insert the following lines and change the URLs according to your environment:
kafka_exporter: # one job per node
enabled: true
kafka_uris: ['kafka-node1:9091']
instance: '<your-instance-name>'
relabel_configs:
- replacement: 'integrations/kafka'
target_label: job
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
Below metrics.configs.scrape_configs, insert the following lines and change the URLs according to your environment:
- job_name: integrations/kafka # one job per node
relabel_configs:
- replacement: '<your-instance-name>'
target_label: instance
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
static_configs:
- targets: ['kafka-node:7001']
- job_name: integrations/kafka-zookeeper # one job per node
relabel_configs:
- replacement: '<your-instance-name>'
target_label: instance
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
- replacement: 'integrations/kafka'
target_label: job
static_configs:
- targets: ['zookeeper-node:7001']
- job_name: integrations/kafka-connect # one job per node
relabel_configs:
- replacement: '<your-instance-name>'
target_label: instance
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
- replacement: 'integrations/kafka'
target_label: job
static_configs:
- targets: ['kafka-connect-node:7001']
- job_name: integrations/kafka-schemaregistry # one job per node
relabel_configs:
- replacement: '<your-instance-name>'
target_label: instance
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
- replacement: 'integrations/kafka'
target_label: job
static_configs:
- targets: ['kafka-schemaregistry-node:7001']
- job_name: integrations/kafka-ksqldb # one job per node
relabel_configs:
- replacement: '<your-instance-name>'
target_label: instance
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
- replacement: 'integrations/kafka'
target_label: job
static_configs:
- targets: ['kafka-ksqldb-node:7001']
Below logs.configs.scrape_configs, insert the following lines according to your environment.
- job_name: integrations/kafka
static_configs:
- targets:
- localhost
labels:
kafka_cluster: '<your-cluster-name>'
job: integrations/kafka
instance: '<your-instance-name>'
__path__: /var/log/kafka/server.log
pipeline_stages:
- multiline:
firstline: '^\[(\d+-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\]'
# https://regex101.com/r/T4qelN/1
- regex:
# Flag (?s:.*) needs to be set for regex stage to capture full traceback log in the extracted map.
expression: '\[(?P<timestamp>\d+-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\] (?P<level>[A-Z]+) \[(?P<context>.+?)\] (?P<msg>(.+)) \((?P<logger>.+)\)(?P<exception>(?s:.*))'
- template:
source: level
template: "{{ .level | ToLower }}"
- labels:
level:
logger:
Full example configuration for Grafana Agent
Refer to the following Grafana Agent configuration for a complete example that contains all the snippets used for the Kafka integration. This example also includes metrics that are sent to monitor your Grafana Agent instance.
integrations:
prometheus_remote_write:
- basic_auth:
password: <your_prom_pass>
username: <your_prom_user>
url: <your_prom_url>
agent:
enabled: true
relabel_configs:
- action: replace
source_labels:
- agent_hostname
target_label: instance
- action: replace
target_label: job
replacement: "integrations/agent-check"
metric_relabel_configs:
- action: keep
regex: (prometheus_target_sync_length_seconds_sum|prometheus_target_scrapes_.*|prometheus_target_interval.*|prometheus_sd_discovered_targets|agent_build.*|agent_wal_samples_appended_total|process_start_time_seconds)
source_labels:
- __name__
# Add here any snippet that belongs to the `integrations` section.
# For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
kafka_exporter: # one job per node
enabled: true
kafka_uris: ['kafka-node1:9091']
instance: '<your-instance-name>'
relabel_configs:
- replacement: 'integrations/kafka'
target_label: job
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
logs:
configs:
- clients:
- basic_auth:
password: <your_loki_pass>
username: <your_loki_user>
url: <your_loki_url>
name: integrations
positions:
filename: /tmp/positions.yaml
scrape_configs:
# Add here any snippet that belongs to the `logs.configs.scrape_configs` section.
# For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
- job_name: integrations/kafka
static_configs:
- targets:
- localhost
labels:
kafka_cluster: '<your-cluster-name>'
job: integrations/kafka
instance: '<your-instance-name>'
__path__: /var/log/kafka/server.log
pipeline_stages:
- multiline:
firstline: '^\[(\d+-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\]'
# https://regex101.com/r/T4qelN/1
- regex:
# Flag (?s:.*) needs to be set for regex stage to capture full traceback log in the extracted map.
expression: '\[(?P<timestamp>\d+-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\] (?P<level>[A-Z]+) \[(?P<context>.+?)\] (?P<msg>(.+)) \((?P<logger>.+)\)(?P<exception>(?s:.*))'
- template:
source: level
template: "{{ .level | ToLower }}"
- labels:
level:
logger:
metrics:
configs:
- name: integrations
remote_write:
- basic_auth:
password: <your_prom_pass>
username: <your_prom_user>
url: <your_prom_url>
scrape_configs:
# Add here any snippet that belongs to the `metrics.configs.scrape_configs` section.
# For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
- job_name: integrations/kafka # one job per node
relabel_configs:
- replacement: '<your-instance-name>'
target_label: instance
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
static_configs:
- targets: ['kafka-node:7001']
- job_name: integrations/kafka-zookeeper # one job per node
relabel_configs:
- replacement: '<your-instance-name>'
target_label: instance
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
- replacement: 'integrations/kafka'
target_label: job
static_configs:
- targets: ['zookeeper-node:7001']
- job_name: integrations/kafka-connect # one job per node
relabel_configs:
- replacement: '<your-instance-name>'
target_label: instance
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
- replacement: 'integrations/kafka'
target_label: job
static_configs:
- targets: ['kafka-connect-node:7001']
- job_name: integrations/kafka-schemaregistry # one job per node
relabel_configs:
- replacement: '<your-instance-name>'
target_label: instance
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
- replacement: 'integrations/kafka'
target_label: job
static_configs:
- targets: ['kafka-schemaregistry-node:7001']
- job_name: integrations/kafka-ksqldb # one job per node
relabel_configs:
- replacement: '<your-instance-name>'
target_label: instance
- replacement: '<your-cluster-name>'
target_label: kafka_cluster
- replacement: 'integrations/kafka'
target_label: job
static_configs:
- targets: ['kafka-ksqldb-node:7001']
global:
scrape_interval: 60s
wal_directory: /tmp/grafana-agent-wal
Dashboards
The Kafka integration installs the following dashboards in your Grafana Cloud instance to help monitor your system.
- Kafka Connect Overview
- Kafka logs
- Kafka overview
- Kafka topic overview
- Schema Registry Overview
- ZooKeeper overview
- ksqldb Overview
Kafka overview dashboard
Kafka topics dashboard
Kafka Connect Overview dashboard
Alerts
The Kafka integration includes the following useful alerts:
- kafka-kafka-alerts
- kafka-jvm-alerts
- kafka-zookeeper-jvm-alerts
Metrics
The most important metrics provided by the Kafka integration, which are used on the pre-built dashboards and Prometheus alerts, are as follows:
- avg_latency
- java_lang_classloading_loadedclasscount
- java_lang_memory_heapmemoryusage_committed
- java_lang_memory_heapmemoryusage_max
- java_lang_memory_heapmemoryusage_used
- java_lang_memory_nonheapmemoryusage_committed
- java_lang_memory_nonheapmemoryusage_max
- java_lang_memory_nonheapmemoryusage_used
- java_lang_operatingsystem_cpuload
- java_lang_operatingsystem_processcputime
- java_lang_operatingsystem_systemloadaverage
- java_lang_runtime_starttime
- java_lang_runtime_uptime
- java_lang_threading_daemonthreadcount
- java_lang_threading_peakthreadcount
- java_lang_threading_threadcount
- jvm_buffer_pool_capacity_bytes
- jvm_buffer_pool_used_bytes
- jvm_classes_loaded
- jvm_gc_collection_seconds_count
- jvm_gc_collection_seconds_sum
- jvm_memory_bytes_committed
- jvm_memory_bytes_max
- jvm_memory_bytes_used
- jvm_memory_committed_bytes
- jvm_memory_max_bytes
- jvm_memory_pool_allocated_bytes_total
- jvm_memory_pool_bytes_committed
- jvm_memory_pool_bytes_max
- jvm_memory_pool_bytes_used
- jvm_memory_used_bytes
- jvm_threads_current
- jvm_threads_daemon
- jvm_threads_deadlocked
- jvm_threads_peak
- jvm_threads_state
- kafka_cluster_partition_underminisr
- kafka_cluster_partition_underreplicated
- kafka_connect_app_info
- kafka_connect_connect_metrics_connection_count
- kafka_connect_connect_metrics_failed_authentication_total
- kafka_connect_connect_metrics_incoming_byte_rate
- kafka_connect_connect_metrics_io_ratio
- kafka_connect_connect_metrics_network_io_rate
- kafka_connect_connect_metrics_outgoing_byte_rate
- kafka_connect_connect_metrics_request_rate
- kafka_connect_connect_metrics_response_rate
- kafka_connect_connect_metrics_successful_authentication_rate
- kafka_connect_connect_worker_metrics_connector_count
- kafka_connect_connect_worker_metrics_connector_destroyed_task_count
- kafka_connect_connect_worker_metrics_connector_failed_task_count
- kafka_connect_connect_worker_metrics_connector_paused_task_count
- kafka_connect_connect_worker_metrics_connector_running_task_count
- kafka_connect_connect_worker_metrics_connector_startup_failure_total
- kafka_connect_connect_worker_metrics_connector_startup_success_total
- kafka_connect_connect_worker_metrics_connector_total_task_count
- kafka_connect_connect_worker_metrics_connector_unassigned_task_count
- kafka_connect_connect_worker_metrics_task_count
- kafka_connect_connect_worker_metrics_task_startup_failure_total
- kafka_connect_connect_worker_metrics_task_startup_success_total
- kafka_connect_connect_worker_rebalance_metrics_rebalance_avg_time_ms
- kafka_connect_connect_worker_rebalance_metrics_time_since_last_rebalance_ms
- kafka_connect_connector_info
- kafka_connect_connector_metrics
- kafka_connect_connector_task_metrics_batch_size_avg
- kafka_connect_connector_task_metrics_batch_size_max
- kafka_connect_connector_task_metrics_offset_commit_avg_time_ms
- kafka_connect_connector_task_metrics_offset_commit_success_percentage
- kafka_connect_connector_task_metrics_pause_ratio
- kafka_connect_connector_task_metrics_running_ratio
- kafka_connect_sink_task_metrics_partition_count
- kafka_connect_sink_task_metrics_put_batch_avg_time_ms
- kafka_connect_sink_task_metrics_put_batch_max_time_ms
- kafka_connect_source_task_metrics_poll_batch_avg_time_ms
- kafka_connect_source_task_metrics_poll_batch_max_time_ms
- kafka_connect_source_task_metrics_source_record_active_count_avg
- kafka_connect_source_task_metrics_source_record_active_count_max
- kafka_connect_source_task_metrics_source_record_poll_rate
- kafka_connect_source_task_metrics_source_record_write_rate
- kafka_connect_task_error_metrics_deadletterqueue_produce_requests
- kafka_connect_task_error_metrics_total_errors_logged
- kafka_connect_task_error_metrics_total_record_errors
- kafka_connect_task_error_metrics_total_record_failures
- kafka_connect_task_error_metrics_total_records_skipped
- kafka_connect_task_error_metrics_total_retries
- kafka_consumer_lag_millis
- kafka_consumergroup_current_offset
- kafka_consumergroup_lag
- kafka_consumergroup_uncommitted_offsets
- kafka_controller_controllerstats_uncleanleaderelections_total
- kafka_controller_controllerstats_uncleanleaderelectionspersec
- kafka_controller_kafkacontroller_activecontrollercount
- kafka_controller_kafkacontroller_activecontrollercount_value
- kafka_controller_kafkacontroller_offlinepartitionscount
- kafka_controller_kafkacontroller_offlinepartitionscount_value
- kafka_controller_kafkacontroller_preferredreplicaimbalancecount
- kafka_controller_kafkacontroller_preferredreplicaimbalancecount_value
- kafka_log_log_logendoffset
- kafka_log_log_logstartoffset
- kafka_log_log_size
- kafka_network_requestmetrics_localtimems
- kafka_network_requestmetrics_localtimems_count
- kafka_network_requestmetrics_remotetimems
- kafka_network_requestmetrics_remotetimems_count
- kafka_network_requestmetrics_requestqueuetimems
- kafka_network_requestmetrics_requestqueuetimems_count
- kafka_network_requestmetrics_responsequeuetimems
- kafka_network_requestmetrics_responsequeuetimems_count
- kafka_network_requestmetrics_responsesendtimems
- kafka_network_requestmetrics_responsesendtimems_count
- kafka_schema_registry_jersey_metrics_request_latency_99
- kafka_schema_registry_jersey_metrics_request_rate
- kafka_schema_registry_jetty_metrics_connections_active
- kafka_schema_registry_registered_count
- kafka_schema_registry_schemas_created
- kafka_server_brokertopicmetrics_bytesin_total
- kafka_server_brokertopicmetrics_bytesinpersec
- kafka_server_brokertopicmetrics_bytesinpersec_count
- kafka_server_brokertopicmetrics_bytesout_total
- kafka_server_brokertopicmetrics_bytesoutpersec
- kafka_server_brokertopicmetrics_bytesoutpersec_count
- kafka_server_brokertopicmetrics_fetchmessageconversions_total
- kafka_server_brokertopicmetrics_fetchmessageconversionspersec
- kafka_server_brokertopicmetrics_fetchmessageconversionspersec_count
- kafka_server_brokertopicmetrics_messagesin_total
- kafka_server_brokertopicmetrics_messagesinpersec
- kafka_server_brokertopicmetrics_messagesinpersec_count
- kafka_server_brokertopicmetrics_producemessageconversions_total
- kafka_server_brokertopicmetrics_producemessageconversionspersec
- kafka_server_brokertopicmetrics_producemessageconversionspersec_count
- kafka_server_kafkaserver_brokerstate
- kafka_server_kafkaserver_total_brokerstate_value
- kafka_server_replicamanager_isrexpands_total
- kafka_server_replicamanager_isrexpandspersec
- kafka_server_replicamanager_isrshrinks_total
- kafka_server_replicamanager_isrshrinkspersec
- kafka_server_replicamanager_partitioncount
- kafka_server_replicamanager_total_isrexpandspersec_count
- kafka_server_replicamanager_total_isrshrinkspersec_count
- kafka_server_replicamanager_total_partitioncount_value
- kafka_server_sessionexpirelistener_zookeeperauthfailures_total
- kafka_server_sessionexpirelistener_zookeeperauthfailurespersec
- kafka_server_sessionexpirelistener_zookeeperdisconnects_total
- kafka_server_sessionexpirelistener_zookeeperdisconnectspersec
- kafka_server_sessionexpirelistener_zookeeperexpires_total
- kafka_server_sessionexpirelistener_zookeeperexpirespersec
- kafka_server_sessionexpirelistener_zookeepersyncconnects_total
- kafka_server_sessionexpirelistener_zookeepersyncconnectspersec
- kafka_server_zookeeperclientmetrics_zookeeperrequestlatencyms
- kafka_server_zookeeperclientmetrics_zookeeperrequestlatencyms_count
- kafka_streams_stream_state_metrics_delete_latency_avg
- kafka_streams_stream_state_metrics_delete_latency_max
- kafka_streams_stream_state_metrics_delete_rate
- kafka_streams_stream_state_metrics_fetch_latency_avg
- kafka_streams_stream_state_metrics_fetch_rate
- kafka_streams_stream_state_metrics_put_if_absent_latency_avg
- kafka_streams_stream_state_metrics_put_if_absent_latency_max
- kafka_streams_stream_state_metrics_put_if_absent_rate_rate
- kafka_streams_stream_state_metrics_put_latency_avg
- kafka_streams_stream_state_metrics_put_latency_max
- kafka_streams_stream_state_metrics_put_rate
- kafka_streams_stream_state_metrics_restore_latency_avg
- kafka_streams_stream_state_metrics_restore_latency_max
- kafka_streams_stream_state_metrics_restore_rate
- kafka_streams_stream_thread_metrics_commit_latency_avg
- kafka_streams_stream_thread_metrics_commit_latency_max
- kafka_streams_stream_thread_metrics_poll_latency_avg
- kafka_streams_stream_thread_metrics_poll_latency_max
- kafka_streams_stream_thread_metrics_process_latency_avg
- kafka_streams_stream_thread_metrics_process_latency_max
- kafka_streams_stream_thread_metrics_punctuate_latency_avg
- kafka_streams_stream_thread_metrics_punctuate_latency_max
- kafka_topic_partition_current_offset
- ksql_ksql_engine_query_stats_error_queries
- ksql_ksql_engine_query_stats_liveness_indicator
- ksql_ksql_engine_query_stats_messages_consumed_per_sec
- ksql_ksql_engine_query_stats_messages_produced_per_sec
- ksql_ksql_engine_query_stats_not_running_queries
- ksql_ksql_engine_query_stats_num_active_queries
- ksql_ksql_engine_query_stats_num_idle_queries
- ksql_ksql_engine_query_stats_num_persistent_queries
- ksql_ksql_engine_query_stats_pending_shutdown_queries
- ksql_ksql_engine_query_stats_rebalancing_queries
- ksql_ksql_engine_query_stats_running_queries
- ksql_ksql_metrics_ksql_queries_query_status
- max_latency
- min_latency
- num_alive_connections
- outstanding_requests
- process_cpu_seconds_total
- process_max_fds
- process_open_fds
- process_resident_memory_bytes
- process_start_time_seconds
- quorum_size
- up
- watch_count
- znode_count
- zookeeper_avgrequestlatency
- zookeeper_inmemorydatatree_nodecount
- zookeeper_inmemorydatatree_watchcount
- zookeeper_maxrequestlatency
- zookeeper_minrequestlatency
- zookeeper_numaliveconnections
- zookeeper_outstandingrequests
- zookeeper_status_quorumsize
- zookeeper_ticktime
Cost
By connecting your Kafka instance to Grafana Cloud, you might incur charges. To view information on the number of active series that your Grafana Cloud account uses for metrics included in each Cloud tier, see Active series and dpm usage and Cloud tier pricing.