InfluxDB integration for Grafana Cloud
InfluxDB is a high-performance, open-source, time-series database system designed for handling, analyzing, and visualizing time-series data in real time. InfluxDB is commonly used in various industries, including DevOps and infrastructure monitoring, IoT applications, real-time analytics, and more.
This integration supports InfluxDB OSS 2.7.1+.
This integration includes 6 useful alerts and 3 pre-built dashboards to help monitor and visualize InfluxDB metrics and logs.
Before you begin
Metrics
InfluxDB exposes a Prometheus metrics endpoint, /metrics, that is enabled by default.
To verify that this endpoint is enabled, run the following command on an InfluxDB node:
curl <your-hostname>:<your-influxdb-port>/metrics
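If the endpoint is enabled, the command returns plain-text metrics in the Prometheus exposition format. The fragment below is illustrative only; the exact HELP text, labels, and values will differ on your instance:
# HELP influxdb_uptime_seconds The uptime of the InfluxDB process
# TYPE influxdb_uptime_seconds gauge
influxdb_uptime_seconds 86400.5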
Logs
By default, InfluxDB logs to STDOUT. For Kubernetes and Docker, no additional configuration is required. To monitor InfluxDB logs on Linux, Darwin, or Windows platforms, configure logging to a file.
First, create a log file with proper permissions:
touch /path/to/influxdb.log
chown influxdb /path/to/influxdb.log
When starting InfluxDB using the influxd daemon or a script, redirect STDOUT to a file on startup:
influxd 1> /path/to/influxdb.log
When running InfluxDB using the service manager on Linux, modify the first line of the startup script at /usr/lib/influxdb/scripts/influxd-systemd-start.sh to look like this:
/usr/bin/influxd 1> /path/to/influxdb.log &
For up-to-date information on logging to a file in InfluxDB, refer to the InfluxDB documentation.
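Once InfluxDB is writing to the file, you can confirm that new entries are arriving with a quick check (using the example path from above):
tail -n 5 /path/to/influxdb.log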
Install InfluxDB integration for Grafana Cloud
- In your Grafana Cloud stack, click Connections in the left-hand menu.
- Find InfluxDB and click its tile to open the integration.
- Review the prerequisites in the Configuration Details tab and set up Grafana Alloy to send InfluxDB metrics and logs to your Grafana Cloud instance.
- Click Install to add this integration’s pre-built dashboards and alerts to your Grafana Cloud instance, and you can start monitoring your InfluxDB setup.
Configuration snippets for Grafana Alloy
Advanced mode
The following snippets provide examples to guide you through the configuration process.
To instruct Grafana Alloy to scrape your InfluxDB instances, manually copy and append the snippets to your Grafana Alloy configuration file, then follow the subsequent instructions.
Advanced metrics snippets
prometheus.scrape "metrics_integrations_integrations_influxdb" {
targets = [{
__address__ = "<your-hostname>:<port>",
influxdb_cluster = "<your-cluster-name>",
instance = constants.hostname,
}]
forward_to = [prometheus.remote_write.metrics_service.receiver]
job_name = "integrations/influxdb"
}
To monitor your InfluxDB instance, you must use a discovery.relabel component to discover your InfluxDB Prometheus endpoint and apply appropriate labels, followed by a prometheus.scrape component to scrape it.
Configure the following properties within each discovery.relabel component:
- __address__: The address of your InfluxDB Prometheus metrics endpoint.
- instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this InfluxDB instance. Make sure this label value is the same for all telemetry data collected for this instance.
- influxdb_cluster label: set this to a value that identifies your InfluxDB cluster.
If you have multiple InfluxDB servers to scrape, configure one discovery.relabel for each and scrape them by including each under targets within the prometheus.scrape component.
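For example, here is a minimal sketch for two servers. The hostnames influxdb-1.example.com and influxdb-2.example.com and the 8086 port are placeholders for your own endpoints, and array.concat assumes a recent Alloy release (older versions expose the same function as concat):
discovery.relabel "influxdb_one" {
  targets = [{ __address__ = "influxdb-1.example.com:8086" }]

  // Statically set the instance and influxdb_cluster labels described above.
  rule {
    target_label = "instance"
    replacement  = "influxdb-1.example.com"
  }
  rule {
    target_label = "influxdb_cluster"
    replacement  = "<your-cluster-name>"
  }
}

discovery.relabel "influxdb_two" {
  targets = [{ __address__ = "influxdb-2.example.com:8086" }]

  rule {
    target_label = "instance"
    replacement  = "influxdb-2.example.com"
  }
  rule {
    target_label = "influxdb_cluster"
    replacement  = "<your-cluster-name>"
  }
}

prometheus.scrape "metrics_integrations_integrations_influxdb" {
  // Scrape the relabeled targets from both servers under one job.
  targets    = array.concat(discovery.relabel.influxdb_one.output, discovery.relabel.influxdb_two.output)
  forward_to = [prometheus.remote_write.metrics_service.receiver]
  job_name   = "integrations/influxdb"
}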
Advanced logs snippets
darwin
local.file_match "logs_integrations_integrations_influxdb" {
path_targets = [{
__address__ = "<hostname>",
__path__ = "/var/log/influxdb/influxdb.log",
influxdb_cluster = "<your-cluster-name>",
instance = constants.hostname,
job = "integrations/influxdb",
}]
}
loki.process "logs_integrations_integrations_influxdb" {
forward_to = [loki.write.grafana_cloud_loki.receiver]
stage.multiline {
firstline = "ts=\\d{4}"
max_lines = 0
max_wait_time = "3s"
}
stage.regex {
expression = "ts=(\\S+) lvl=(?P<level>\\w+) msg=.* log_id=(\\S+) (service=\"{0,1}(?P<service>\\S+) ){0,1}(engine=(?P<engine>\\S*) ){0,1}.*$"
}
stage.labels {
values = {
engine = null,
level = null,
service = null,
}
}
}
loki.source.file "logs_integrations_integrations_influxdb" {
targets = local.file_match.logs_integrations_integrations_influxdb.targets
forward_to = [loki.process.logs_integrations_integrations_influxdb.receiver]
}
To monitor your InfluxDB instance logs, you will use a combination of the following components:
local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:
- __address__: The InfluxDB instance address.
- __path__: The path to the log file.
- instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this InfluxDB instance. Make sure this label value is the same for all telemetry data collected for this instance.
- influxdb_cluster label: set this to a value that identifies your InfluxDB cluster.
loki.process defines how to process logs before sending them to Loki.
loki.source.file tails the matched log files and forwards their entries to the loki.process component.
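As a concrete illustration of what this pipeline does, consider an invented log line in InfluxDB's logfmt output style:
ts=2024-11-05T12:34:56.789012Z lvl=info msg="Listening" log_id=0abcDEF0000 service=tcp-listener transport=http addr=:8086 port=8086
stage.multiline treats any line beginning with a ts= timestamp as the start of a new entry and folds continuation lines (such as stack traces) into the preceding entry; stage.regex captures the level, service, and optional engine groups; and stage.labels promotes those captures to Loki labels, so this example would be indexed with level=info and service=tcp-listener.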
linux
local.file_match "logs_integrations_integrations_influxdb" {
path_targets = [{
__address__ = "<hostname>",
__path__ = "/var/log/influxdb/influxdb.log",
influxdb_cluster = "<your-cluster-name>",
instance = constants.hostname,
job = "integrations/influxdb",
}]
}
loki.process "logs_integrations_integrations_influxdb" {
forward_to = [loki.write.grafana_cloud_loki.receiver]
stage.multiline {
firstline = "ts=\\d{4}"
max_lines = 0
max_wait_time = "3s"
}
stage.regex {
expression = "ts=(\\S+) lvl=(?P<level>\\w+) msg=.* log_id=(\\S+) (service=\"{0,1}(?P<service>\\S+) ){0,1}(engine=(?P<engine>\\S*) ){0,1}.*$"
}
stage.labels {
values = {
engine = null,
level = null,
service = null,
}
}
}
loki.source.file "logs_integrations_integrations_influxdb" {
targets = local.file_match.logs_integrations_integrations_influxdb.targets
forward_to = [loki.process.logs_integrations_integrations_influxdb.receiver]
}
To monitor your InfluxDB instance logs, you will use a combination of the following components:
local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:
- __address__: The InfluxDB instance address.
- __path__: The path to the log file.
- instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this InfluxDB instance. Make sure this label value is the same for all telemetry data collected for this instance.
- influxdb_cluster label: set this to a value that identifies your InfluxDB cluster.
loki.process defines how to process logs before sending them to Loki.
loki.source.file tails the matched log files and forwards their entries to the loki.process component.
windows
local.file_match "logs_integrations_integrations_influxdb" {
path_targets = [{
__address__ = "<hostname>",
__path__ = "C:\\path\\to\\influxdb.log",
influxdb_cluster = "<your-cluster-name>",
instance = constants.hostname,
job = "integrations/influxdb",
}]
}
loki.process "logs_integrations_integrations_influxdb" {
forward_to = [loki.write.grafana_cloud_loki.receiver]
stage.multiline {
firstline = "ts=\\d{4}"
max_lines = 0
max_wait_time = "3s"
}
stage.regex {
expression = "ts=(\\S+) lvl=(?P<level>\\w+) msg=.* log_id=(\\S+) (service=\"{0,1}(?P<service>\\S+) ){0,1}(engine=(?P<engine>\\S*) ){0,1}.*$"
}
stage.labels {
values = {
engine = null,
level = null,
service = null,
}
}
}
loki.source.file "logs_integrations_integrations_influxdb" {
targets = local.file_match.logs_integrations_integrations_influxdb.targets
forward_to = [loki.process.logs_integrations_integrations_influxdb.receiver]
}
To monitor your InfluxDB instance logs, you will use a combination of the following components:
local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:
- __address__: The InfluxDB instance address.
- __path__: The path to the log file (see the note on Windows paths after this list).
- instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this InfluxDB instance. Make sure this label value is the same for all telemetry data collected for this instance.
- influxdb_cluster label: set this to a value that identifies your InfluxDB cluster.
loki.process defines how to process logs before sending them to Loki.
loki.source.file tails the matched log files and forwards their entries to the loki.process component.
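One Windows-specific caveat: backslashes in a path must be doubled inside an Alloy string literal. For example, with a hypothetical log location (not an InfluxDB default):
__path__ = "C:\\Program Files\\InfluxData\\influxdb\\influxdb.log",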
Dashboards
The InfluxDB integration installs the following dashboards in your Grafana Cloud instance to help monitor your system.
- InfluxDB cluster overview
- InfluxDB instance overview
- InfluxDB logs overview
InfluxDB cluster overview (queries)

InfluxDB cluster overview (tasks)

InfluxDB cluster overview (Go)

Alerts
The InfluxDB integration includes the following useful alerts:
Metrics
The most important metrics provided by the InfluxDB integration, which are used on the pre-built dashboards and Prometheus alerts, are as follows (an example query follows the list):
- boltdb_reads_total
- boltdb_writes_total
- go_gc_duration_seconds_sum
- go_memstats_gc_cpu_fraction
- go_memstats_heap_alloc_bytes
- go_memstats_heap_idle_bytes
- go_memstats_last_gc_time_seconds
- go_threads
- http_api_request_duration_seconds_sum
- http_api_requests_total
- http_query_request_bytes
- http_query_request_count
- http_query_response_bytes
- http_write_request_bytes
- http_write_request_count
- http_write_response_bytes
- influxdb_buckets_total
- influxdb_dashboards_total
- influxdb_remotes_total
- influxdb_replications_total
- influxdb_scrapers_total
- influxdb_uptime_seconds
- influxdb_users_total
- influxql_service_executing_duration_seconds_sum
- influxql_service_requests_total
- qc_compiling_active
- qc_executing_active
- qc_queueing_active
- task_executor_total_runs_active
- task_executor_workers_busy
- task_scheduler_current_execution
- task_scheduler_total_execute_failure
- task_scheduler_total_execution_calls
- task_scheduler_total_schedule_calls
- task_scheduler_total_schedule_fails
- up
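As an illustration of how these metrics can be queried, the PromQL sketch below computes the per-instance HTTP API request rate. It is a hypothetical example, not a query copied from the bundled dashboards, and assumes the job label set by the scrape snippet above:
sum by (instance) (rate(http_api_requests_total{job="integrations/influxdb"}[5m]))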
Changelog
# 1.0.1 - November 2024
- Update status panel check queries
# 1.0.0 - January 2023
- Initial release
Cost
By connecting your InfluxDB instance to Grafana Cloud, you might incur charges. To view information on the number of active series that your Grafana Cloud account uses for metrics included in each Cloud tier, see Active series and dpm usage and Cloud tier pricing.



