Apache CouchDB integration for Grafana Cloud

Apache CouchDB is a NoSQL document-oriented database system known for its scalability, availability, and easy replication of data across multiple servers. This integration for Grafana Cloud allows users to collect metrics and system logs for monitoring an Apache CouchDB instance or clustered deployment. This integration also includes useful visualizations for both cluster and node metrics such as open databases, database writes/reads, request latency, request rates, response statuses, and replicator failure info.

This integration supports Apache CouchDB versions 3.2.x+.

This integration includes 10 useful alerts and 2 pre-built dashboards to help monitor and visualize Apache CouchDB metrics and logs.

Grafana Alloy configuration

Before you begin

For the integration to work properly, you must make one of two configuration changes: either grant a user metrics permissions or set up the unauthenticated Prometheus endpoint.

Granting metrics permissions to a user

If you do not plan to use an admin user and password in the metrics configuration, you must instead create a CouchDB user with the _metrics role.

Example

curl http://localhost:5984/_users/org.couchdb.user:prom_user \
  -X PUT \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{"name":"prom_user", "password":"prom_password", "roles": ["_metrics"], "type": "user"}'

Configuring the unauthenticated Prometheus endpoint

To enable the unauthenticated Prometheus endpoint for each node, CouchDB’s configuration file local.ini must be updated to include the correct Prometheus configuration.

Example

[prometheus]
additional_port = true
bind_address = 127.0.0.1
port = 17986
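
Once this configuration is in place (starting the additional port typically requires a node restart), you can check that metrics are served without credentials. This check assumes the example values above:

sh
curl http://127.0.0.1:17986/_node/_local/_prometheus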

Install Apache CouchDB integration for Grafana Cloud

  1. In your Grafana Cloud stack, click Connections in the left-hand menu.
  2. Find Apache CouchDB and click its tile to open the integration.
  3. Review the prerequisites in the Configuration Details tab and set up Grafana Alloy to send Apache CouchDB metrics and logs to your Grafana Cloud instance.
  4. Click Install to add this integration’s pre-built dashboards and alerts to your Grafana Cloud instance, and you can start monitoring your Apache CouchDB setup.

Configuration snippets for Grafana Alloy

Advanced mode

The following snippets provide examples to guide you through the configuration process.

To instruct Grafana Alloy to scrape your Apache CouchDB instances, copy and paste the snippets into your configuration file and follow the subsequent instructions.

Advanced metrics snippets

river
discovery.relabel "metrics_integrations_integrations_apache_couchdb" {
	targets = concat(
		[{
			__address__ = "<your-node-hostname1>:5984",
		}],
		[{
			__address__ = "<your-node-hostname2>:5984",
		}],
		[{
			__address__ = "<your-node-hostname3>:5984",
		}],
	)

	rule {
		target_label = "couchdb_cluster"
		replacement  = "<your-cluster-name>"
	}

	rule {
		target_label = "instance"
		replacement  = constants.hostname
	}
}

prometheus.scrape "metrics_integrations_integrations_apache_couchdb" {
	targets      = discovery.relabel.metrics_integrations_integrations_apache_couchdb.output
	forward_to   = [prometheus.remote_write.metrics_service.receiver]
	job_name     = "integrations/apache-couchdb"
	metrics_path = "/_node/_local/_prometheus"

	basic_auth {
		username = "<couchdb_user>"
		password = "<couchdb_password>"
	}
}

To monitor your Apache CouchDB instance, you must use a discovery.relabel component to discover your Apache CouchDB Prometheus endpoint and apply appropriate labels, followed by a prometheus.scrape component to scrape it.

Configure the following properties within each discovery.relabel component:

  • __address__: The address of your Apache CouchDB Prometheus metrics endpoint. Change the port for each target depending on whether you are using the authenticated endpoint (default 5984) or the unauthenticated Prometheus endpoint (default 17986).
  • instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this Apache CouchDB instance. Make sure this label value is the same for all telemetry data collected for this instance.
  • couchdb_cluster: The couchdb_cluster label groups your Apache CouchDB instances within a cluster. Set the same value for all nodes within your cluster.

If you have multiple Apache CouchDB servers to scrape, configure one discovery.relabel for each and scrape them by including each under targets within the prometheus.scrape component.

Beware that the prometheus.scrape component must hold the auth information if you are running an authenticated Prometheus endpoint. Check the component documentation for the different authentication options.
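
If you scrape the unauthenticated Prometheus endpoint instead, you can drop the basic_auth block and point the targets at the additional port. The following is a minimal sketch, assuming the default additional port 17986 and a single placeholder hostname:

river
discovery.relabel "metrics_integrations_integrations_apache_couchdb" {
	targets = [{
		__address__ = "<your-node-hostname1>:17986",
	}]

	rule {
		target_label = "couchdb_cluster"
		replacement  = "<your-cluster-name>"
	}

	rule {
		target_label = "instance"
		replacement  = constants.hostname
	}
}

prometheus.scrape "metrics_integrations_integrations_apache_couchdb" {
	targets      = discovery.relabel.metrics_integrations_integrations_apache_couchdb.output
	forward_to   = [prometheus.remote_write.metrics_service.receiver]
	job_name     = "integrations/apache-couchdb"
	metrics_path = "/_node/_local/_prometheus"
}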

Advanced logs snippets

darwin

river
local.file_match "logs_integrations_integrations_apache_couchdb" {
	path_targets = [{
		__address__     = "localhost",
		__path__        = "/var/log/couchdb/couchdb.log",
		couchdb_cluster = "<your-cluster-name>",
		instance        = constants.hostname,
		job             = "integrations/apache-couchdb",
	}]
}

loki.process "logs_integrations_integrations_apache_couchdb" {
	forward_to = [loki.write.grafana_cloud_loki.receiver]

	stage.multiline {
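		// Match lines that start with a severity and timestamp, for example: [notice] 2022-01-01T12:12:12.11111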
		firstline     = "\\[[a-z]+\\] \\d+-\\d+-\\d+T\\d+:\\d+:\\d+\\.\\d+"
		max_lines     = 0
		max_wait_time = "3s"
	}
}

loki.source.file "logs_integrations_integrations_apache_couchdb" {
	targets    = local.file_match.logs_integrations_integrations_apache_couchdb.targets
	forward_to = [loki.process.logs_integrations_integrations_apache_couchdb.receiver]
}

To monitor your Apache CouchDB instance logs, you will use a combination of the following components:

  • local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:
    • __address__: The Apache CouchDB instance address.
    • __path__: The path to the log file.
    • instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this Apache CouchDB instance. Make sure this label value is the same for all telemetry data collected for this instance.
    • couchdb_cluster: The couchdb_cluster label groups your Apache CouchDB instances within a cluster. Set the same value for all nodes within your cluster.
  • loki.process defines how to process logs before sending them to Loki.
  • loki.source.file sends logs to Loki.

On Linux, you will also need to add the alloy user to the couchdb group to get logs. Run the following command to configure the user as required:

sh
sudo usermod -a -G couchdb alloy

linux

river
local.file_match "logs_integrations_integrations_apache_couchdb" {
	path_targets = [{
		__address__     = "localhost",
		__path__        = "/var/log/couchdb/couchdb.log",
		couchdb_cluster = "<your-cluster-name>",
		instance        = constants.hostname,
		job             = "integrations/apache-couchdb",
	}]
}

loki.process "logs_integrations_integrations_apache_couchdb" {
	forward_to = [loki.write.grafana_cloud_loki.receiver]

	stage.multiline {
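		// Match lines that start with a severity and timestamp, for example: [notice] 2022-01-01T12:12:12.11111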
		firstline     = "\\[[a-z]+\\] \\d+-\\d+-\\d+T\\d+:\\d+:\\d+\\.\\d+"
		max_lines     = 0
		max_wait_time = "3s"
	}
}

loki.source.file "logs_integrations_integrations_apache_couchdb" {
	targets    = local.file_match.logs_integrations_integrations_apache_couchdb.targets
	forward_to = [loki.process.logs_integrations_integrations_apache_couchdb.receiver]
}

To monitor your Apache CouchDB instance logs, you will use a combination of the following components:

  • local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:
    • __address__: The Apache CouchDB instance address.
    • __path__: The path to the log file.
    • instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this Apache CouchDB instance. Make sure this label value is the same for all telemetry data collected for this instance.
    • couchdb_cluster: The couchdb_cluster label groups your Apache CouchDB instances within a cluster. Set the same value for all nodes within your cluster.
  • loki.process defines how to process logs before sending them to Loki.
  • loki.source.file sends logs to Loki.

On Linux, you will also need to add the alloy user to the couchdb group to get logs. Run the following command to configure the user as required:

sh
sudo usermod -a -G couchdb alloy

windows

river
local.file_match "logs_integrations_integrations_apache_couchdb" {
	path_targets = [{
		__address__     = "localhost",
		__path__        = "/Program Files/Apache Software Foundation/CouchDB/var/log/couchdb.log",
		couchdb_cluster = "<your-cluster-name>",
		instance        = constants.hostname,
		job             = "integrations/apache-couchdb",
	}]
}

loki.process "logs_integrations_integrations_apache_couchdb" {
	forward_to = [loki.write.grafana_cloud_loki.receiver]

	stage.multiline {
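		// Match lines that start with a severity and timestamp, for example: [notice] 2022-01-01T12:12:12.11111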
		firstline     = "\\[[a-z]+\\] \\d+-\\d+-\\d+T\\d+:\\d+:\\d+\\.\\d+"
		max_lines     = 0
		max_wait_time = "3s"
	}
}

loki.source.file "logs_integrations_integrations_apache_couchdb" {
	targets    = local.file_match.logs_integrations_integrations_apache_couchdb.targets
	forward_to = [loki.process.logs_integrations_integrations_apache_couchdb.receiver]
}

To monitor your Apache CouchDB instance logs, you will use a combination of the following components:

  • local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:
    • __address__: The Apache CouchDB instance address.
    • __path__: The path to the log file.
    • instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this Apache CouchDB instance. Make sure this label value is the same for all telemetry data collected for this instance.
    • couchdb_cluster: The couchdb_cluster label groups your Apache CouchDB instances within a cluster. Set the same value for all nodes within your cluster.
  • loki.process defines how to process logs before sending them to Loki.
  • loki.source.file sends logs to Loki.

Grafana Agent configuration

Before you begin

For the integration to work properly, you must make one of two configuration changes: either grant a user metrics permissions or set up the unauthenticated Prometheus endpoint.

Granting metrics permissions to a user

If you do not plan to use an admin user and password in the metrics configuration, you must instead create a CouchDB user with the _metrics role.

Example

curl http://localhost:5984/_users/org.couchdb.user:prom_user \
  -X PUT \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{"name":"prom_user", "password":"prom_password", "roles": ["_metrics"], "type": "user"}'

Configuring the unauthenticated Prometheus endpoint

To enable the unauthenticated Prometheus endpoint for each node, CouchDB’s configuration file local.ini must be updated to include the correct Prometheus configuration.

Example

[prometheus]
additional_port = true
bind_address = 127.0.0.1
port = 17986

Install Apache CouchDB integration for Grafana Cloud

  1. In your Grafana Cloud stack, click Connections in the left-hand menu.
  2. Find Apache CouchDB and click its tile to open the integration.
  3. Review the prerequisites in the Configuration Details tab and set up Grafana Agent to send Apache CouchDB metrics and logs to your Grafana Cloud instance.
  4. Click Install to add this integration’s pre-built dashboards and alerts to your Grafana Cloud instance, and you can start monitoring your Apache CouchDB setup.

Post-install configuration for the Apache CouchDB integration

After enabling metrics generation, instruct Grafana Agent to scrape your Apache CouchDB nodes.

If the unauthenticated Prometheus endpoint is being used for CouchDB, basic_auth should be removed from your metrics scrape_configs.

If using the authenticated endpoint for CouchDB, replace the username and password values with the correct CouchDB user and password.

If CouchDB requires any additional security measures to connect to the authenticated Prometheus endpoint, make sure to add the corresponding auth configuration to the metrics scrape_configs.

Make sure to change targets in the snippet according to your environment. Change the port for each target depending on whether you are using the authenticated endpoint (default 5984) or the unauthenticated Prometheus endpoint (default 17986).

You must also configure a custom label for this integration that must be attached to each of the targets:

  • couchdb_cluster, the value that identifies an Apache CouchDB cluster

You can define a cluster label by adding an extra label to the scrape_configs of the metric configuration.
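
For example, if you use the unauthenticated Prometheus endpoint, the scrape config could look like the following sketch, which drops basic_auth, targets the default additional port 17986, and adds the couchdb_cluster label (hostnames are placeholders):

yaml
    - job_name: integrations/apache-couchdb
      metrics_path: /_node/_local/_prometheus
      static_configs:
        - targets: ['<your-node-hostname1>:17986']
      relabel_configs:
        - target_label: couchdb_cluster
          replacement: '<your-cluster-name>'
        - target_label: instance
          replacement: '<your-instance-name>'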

For the logs section, ensure the file path of the CouchDB system log is correct.

If you want to show logs and metrics signals correlated in your dashboards as a single pane of glass, ensure the following:

  • job and instance label values must match for the Apache CouchDB integration and logs scrape config in your agent configuration file.
  • job must be set to integrations/apache-couchdb
  • instance label must be set to a value that uniquely identifies your Apache CouchDB node. Please replace the default hostname value according to your environment - it should be set manually. Note that if you use localhost for multiple nodes, the dashboards will not be able to filter correctly by instance.
  • couchdb_cluster must be the value that identifies the Apache CouchDB cluster this node belongs to.

On Linux, you will also need to add the grafana-agent user to the couchdb group to get logs. Run the following command to configure the user as required:

sh
sudo usermod -a -G couchdb grafana-agent
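
To verify that the permissions are sufficient, you can try reading the log file as that user. This quick check assumes the default log path and that the file is readable by the couchdb group:

sh
sudo -u grafana-agent head -n 1 /var/log/couchdb/couchdb.log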

Configuration snippets for Grafana Agent

Below metrics.configs.scrape_configs, insert the following lines and change the URLs according to your environment:

yaml
    - job_name: integrations/apache-couchdb
      metrics_path: /_node/_local/_prometheus
      basic_auth:
        username: '<couchdb_user>'
        password: '<couchdb_password>'
      static_configs:
        - targets: ['<your-node-hostname1>:5984']
        - targets: ['<your-node-hostname2>:5984']
        - targets: ['<your-node-hostname3>:5984']
      relabel_configs:
        - target_label: couchdb_cluster
          replacement: '<your-cluster-name>'
        - target_label: instance
          replacement: '<your-instance-name>'

Below logs.configs.scrape_configs, insert the following lines according to your environment.

yaml
    - job_name: integrations/apache-couchdb
      static_configs:
        - targets: [localhost]
          labels:
            job: integrations/apache-couchdb
            couchdb_cluster: '<your-cluster-name>'
            instance: '<your-instance-name>'
            # For a Windows installation this path should be changed to:
            # /Program Files/Apache Software Foundation/CouchDB/var/log/couchdb.log
            __path__: /var/log/couchdb/couchdb.log
      pipeline_stages:
        - multiline:
            # match on severity date like '[notice] 2022-01-01T12:12:12.11111'
            # but feel free to modify to match your logs
            firstline: '\[[a-z]+\] \d+-\d+-\d+T\d+:\d+:\d+\.\d+'

Full example configuration for Grafana Agent

Refer to the following Grafana Agent configuration for a complete example that contains all the snippets used for the Apache CouchDB integration. This example also includes metrics that are sent to monitor your Grafana Agent instance.

yaml
integrations:
  prometheus_remote_write:
  - basic_auth:
      password: <your_prom_pass>
      username: <your_prom_user>
    url: <your_prom_url>
  agent:
    enabled: true
    relabel_configs:
    - action: replace
      source_labels:
      - agent_hostname
      target_label: instance
    - action: replace
      target_label: job
      replacement: "integrations/agent-check"
    metric_relabel_configs:
    - action: keep
      regex: (prometheus_target_sync_length_seconds_sum|prometheus_target_scrapes_.*|prometheus_target_interval.*|prometheus_sd_discovered_targets|agent_build.*|agent_wal_samples_appended_total|process_start_time_seconds)
      source_labels:
      - __name__
  # Add here any snippet that belongs to the `integrations` section.
  # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
logs:
  configs:
  - clients:
    - basic_auth:
        password: <your_loki_pass>
        username: <your_loki_user>
      url: <your_loki_url>
    name: integrations
    positions:
      filename: /tmp/positions.yaml
    scrape_configs:
      # Add here any snippet that belongs to the `logs.configs.scrape_configs` section.
      # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
    - job_name: integrations/apache-couchdb
      static_configs:
        - targets: [localhost]
          labels:
            job: integrations/apache-couchdb
            couchdb_cluster: '<your-cluster-name>'
            instance: '<your-instance-name>'
            # For a Windows installation this path should be changed to:
            # /Program Files/Apache Software Foundation/CouchDB/var/log/couchdb.log
            __path__: /var/log/couchdb/couchdb.log
      pipeline_stages:
        - multiline:
            # match on severity date like '[notice] 2022-01-01T12:12:12.11111'
            # but feel free to modify to match your logs
            firstline: '\[[a-z]+\] \d+-\d+-\d+T\d+:\d+:\d+\.\d+'
metrics:
  configs:
  - name: integrations
    remote_write:
    - basic_auth:
        password: <your_prom_pass>
        username: <your_prom_user>
      url: <your_prom_url>
    scrape_configs:
      # Add here any snippet that belongs to the `metrics.configs.scrape_configs` section.
      # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
    - job_name: integrations/apache-couchdb
      metrics_path: /_node/_local/_prometheus
      basic_auth:
        username: '<couchdb_user>'
        password: '<couchdb_password>'
      static_configs:
        - targets: ['<your-node-hostname1>:5984']
        - targets: ['<your-node-hostname2>:5984']
        - targets: ['<your-node-hostname3>:5984']
      relabel_configs:
        - target_label: couchdb_cluster
          replacement: '<your-cluster-name>'
        - target_label: instance
          replacement: '<your-instance-name>'
  global:
    scrape_interval: 60s
  wal_directory: /tmp/grafana-agent-wal

Dashboards

The Apache CouchDB integration installs the following dashboards in your Grafana Cloud instance to help monitor your system.

  • Apache CouchDB nodes
  • Apache CouchDB overview

Apache CouchDB overview (1/2)

Apache CouchDB overview (2/2)

Apache CouchDB nodes (1/2)

Alerts

The Apache CouchDB integration includes the following useful alerts:

  • CouchDBUnhealthyCluster (critical): At least one of the nodes in a cluster is reporting the cluster as being unstable.
  • CouchDBHigh4xxResponseCodes (warning): There are a high number of 4xx responses for incoming requests to a node.
  • CouchDBHigh5xxResponseCodes (critical): There are a high number of 5xx responses for incoming requests to a node.
  • CouchDBModerateRequestLatency (warning): There is a moderate level of request latency for a node.
  • CouchDBHighRequestLatency (critical): There is a high level of request latency for a node.
  • CouchDBManyReplicatorJobsPending (warning): There is a high number of replicator jobs pending for a node.
  • CouchDBReplicatorJobsCrashing (critical): There are replicator jobs crashing for a node.
  • CouchDBReplicatorChangesQueuesDying (warning): There are replicator changes queue process deaths for a node.
  • CouchDBReplicatorConnectionOwnersCrashing (warning): There are replicator connection owner process crashes for a node.
  • CouchDBReplicatorConnectionWorkersCrashing (warning): There are replicator connection worker process crashes for a node.

Metrics

The most important metrics provided by the Apache CouchDB integration, which are used on the pre-built dashboards and Prometheus alerts, are as follows:

  • couchdb_couch_log_requests_total
  • couchdb_couch_replicator_changes_manager_deaths_total
  • couchdb_couch_replicator_changes_queue_deaths_total
  • couchdb_couch_replicator_changes_reader_deaths_total
  • couchdb_couch_replicator_cluster_is_stable
  • couchdb_couch_replicator_connection_owner_crashes_total
  • couchdb_couch_replicator_connection_worker_crashes_total
  • couchdb_couch_replicator_jobs_crashes_total
  • couchdb_couch_replicator_jobs_pending
  • couchdb_database_reads_total
  • couchdb_database_writes_total
  • couchdb_erlang_memory_bytes
  • couchdb_httpd_bulk_requests_total
  • couchdb_httpd_request_methods
  • couchdb_httpd_status_codes
  • couchdb_httpd_temporary_view_reads_total
  • couchdb_httpd_view_reads_total
  • couchdb_httpd_view_timeouts_total
  • couchdb_open_databases_total
  • couchdb_open_os_files_total
  • couchdb_request_time_seconds
  • couchdb_request_time_seconds_count
  • couchdb_request_time_seconds_sum
  • up
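
As a quick sanity check after setup, you can query one of these metrics in Grafana Explore. For example, this PromQL sketch charts the per-node database write rate, using the instance and couchdb_cluster labels configured above (replace the cluster name placeholder):

promql
sum by (instance) (rate(couchdb_database_writes_total{couchdb_cluster="<your-cluster-name>"}[5m]))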

Changelog

md
# 0.0.3 - September 2023

* New Filter Metrics option for configuring the Grafana Agent, which saves on metrics cost by dropping any metric not used by this integration. Beware that anything custom built using metrics that are not on the snippet will stop working.
* New hostname relabel option, which applies the instance name you write on the text box to the Grafana Agent configuration snippets, making it easier and less error prone to configure this mandatory label.

# 0.0.2 - August 2023

* Add regex filter for logs datasource

# 0.0.1 - April 2023

* Initial release

Cost

By connecting your Apache CouchDB instance to Grafana Cloud, you might incur charges. To view information on the number of active series that your Grafana Cloud account uses for metrics included in each Cloud tier, see Active series and dpm usage and Cloud tier pricing.