
PgBouncer integration for Grafana Cloud

PgBouncer is a lightweight connection pooler for PostgreSQL databases. It reduces processing time and resource usage by maintaining a pool of connections to the database server, allowing applications to share a limited number of connections efficiently.

This integration includes 4 useful alerts and 3 pre-built dashboards to help monitor and visualize PgBouncer metrics and logs.

Grafana Alloy configuration

Before you begin

PgBouncer

For the exporter to work with PgBouncer, append the line ignore_startup_parameters = extra_float_digits to the pgbouncer.ini config file, then restart the PgBouncer service for the change to take effect.
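
For example, pgbouncer.ini would gain this line under its [pgbouncer] section (other settings omitted):

ini
[pgbouncer]
ignore_startup_parameters = extra_float_digits

Afterwards, restart PgBouncer, for example with systemctl restart pgbouncer on systemd-based hosts.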

PgBouncer Prometheus exporter

This integration supports metrics and logs from a PgBouncer instance. It is configured to work with the PgBouncer Prometheus Exporter, which must be installed and configured separately from Grafana Alloy.

Build the exporter with make build, then run it with ./pgbouncer_exporter. To verify that the exporter is working, check the default metrics endpoint at http://localhost:9127/metrics.
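
For example, from a clone of the prometheus-community/pgbouncer_exporter repository (the connection string below is a placeholder for your own PgBouncer admin console connection):

shell
make build
./pgbouncer_exporter --pgBouncer.connectionString="postgres://postgres:@localhost:6432/pgbouncer?sslmode=disable"
# In another terminal, confirm the exporter is serving metrics:
curl -s http://localhost:9127/metrics | head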

PgBouncer logs

PgBouncer does not log to a file by default; enable this by adding a setting to the pgbouncer.ini file. An example configuration may look like this: logfile = /var/log/postgresql/pgbouncer.log.

The log file path customized in pgbouncer.ini must match the log path set in the Alloy configuration.

Install PgBouncer integration for Grafana Cloud

  1. In your Grafana Cloud stack, click Connections in the left-hand menu.
  2. Find PgBouncer and click its tile to open the integration.
  3. Review the prerequisites in the Configuration Details tab and set up Grafana Alloy to send PgBouncer metrics and logs to your Grafana Cloud instance.
  4. Click Install to add this integration’s pre-built dashboards and alerts to your Grafana Cloud instance, then start monitoring your PgBouncer setup.

Configuration snippets for Grafana Alloy

Simple mode

These snippets are configured to scrape a single PgBouncer instance running locally with default ports.

First, manually copy and append the following snippets into your alloy configuration file.

Logs snippets

darwin

river
local.file_match "logs_integrations_integrations_pgbouncer" {
	path_targets = [{
		__address__       = "localhost",
		__path__          = "<path-to-user-set-pgbouncer-log>/pgbouncer.log",
		instance          = "<your-instance-name>:9127",
		job               = "integrations/pgbouncer",
		pgbouncer_cluster = "<your-pgbouncer-cluster-name>",
	}]
}

loki.process "logs_integrations_integrations_pgbouncer" {
	forward_to = []

	stage.multiline {
		firstline     = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}"
		max_lines     = 0
		max_wait_time = "3s"
	}

	stage.regex {
		expression = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3} \\w+ \\[\\d+\\] (?P<level>LOG|ERROR|WARNING|INFO|DEBUG) .*"
	}

	stage.labels {
		values = {
			level = null,
		}
	}
}

loki.source.file "logs_integrations_integrations_pgbouncer" {
	targets    = local.file_match.logs_integrations_integrations_pgbouncer.targets
	forward_to = [loki.process.logs_integrations_integrations_pgbouncer.receiver]
}

linux

river
local.file_match "logs_integrations_integrations_pgbouncer" {
	path_targets = [{
		__address__       = "localhost",
		__path__          = "<path-to-user-set-pgbouncer-log>/pgbouncer.log",
		instance          = "<your-instance-name>:9127",
		job               = "integrations/pgbouncer",
		pgbouncer_cluster = "<your-pgbouncer-cluster-name>",
	}]
}

loki.process "logs_integrations_integrations_pgbouncer" {
	forward_to = []

	stage.multiline {
		firstline     = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}"
		max_lines     = 0
		max_wait_time = "3s"
	}

	stage.regex {
		expression = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3} \\w+ \\[\\d+\\] (?P<level>LOG|ERROR|WARNING|INFO|DEBUG) .*"
	}

	stage.labels {
		values = {
			level = null,
		}
	}
}

loki.source.file "logs_integrations_integrations_pgbouncer" {
	targets    = local.file_match.logs_integrations_integrations_pgbouncer.targets
	forward_to = [loki.process.logs_integrations_integrations_pgbouncer.receiver]
}

windows

river
local.file_match "logs_integrations_integrations_pgbouncer" {
	path_targets = [{
		__address__       = "localhost",
		__path__          = "<path-to-user-set-pgbouncer-log>/pgbouncer.log",
		instance          = "<your-instance-name>:9127",
		job               = "integrations/pgbouncer",
		pgbouncer_cluster = "<your-pgbouncer-cluster-name>",
	}]
}

loki.process "logs_integrations_integrations_pgbouncer" {
	forward_to = []

	stage.multiline {
		firstline     = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}"
		max_lines     = 0
		max_wait_time = "3s"
	}

	stage.regex {
		expression = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3} \\w+ \\[\\d+\\] (?P<level>LOG|ERROR|WARNING|INFO|DEBUG) .*"
	}

	stage.labels {
		values = {
			level = null,
		}
	}
}

loki.source.file "logs_integrations_integrations_pgbouncer" {
	targets    = local.file_match.logs_integrations_integrations_pgbouncer.targets
	forward_to = [loki.process.logs_integrations_integrations_pgbouncer.receiver]
}

Advanced mode

The following snippets provide examples to guide you through the configuration process.

To instruct Grafana Alloy to scrape your PgBouncer instances, manually copy and append the snippets to your alloy configuration file, then follow the subsequent instructions.

Advanced metrics snippets

river
prometheus.scrape "metrics_integrations_integrations_pgbouncer" {
	targets = [{
		__address__       = constants.hostname + ":9127", // 9127 is the exporter's default port
		pgbouncer_cluster = "<your-pgbouncer-cluster-name>",
	}]
	forward_to = [prometheus.remote_write.metrics_service.receiver]
	job_name   = "integrations/pgbouncer"
}

To monitor your PgBouncer instance, you must use a discovery.relabel component to discover your PgBouncer Prometheus endpoint and apply appropriate labels, followed by a prometheus.scrape component to scrape it.

Configure the following properties within each discovery.relabel component:

  • __address__: The address of your PgBouncer Prometheus metrics endpoint.
  • instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this PgBouncer instance. Make sure this label value is the same for all telemetry data collected for this instance.

If you have multiple PgBouncer servers to scrape, configure one discovery.relabel for each and scrape them by including each under targets within the prometheus.scrape component.
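
As an illustrative sketch (the component name, host address, and cluster name are placeholders, not a shipped snippet), a relabel-then-scrape pair for one PgBouncer host could look like this:

river
discovery.relabel "pgbouncer_host_1" {
	targets = [{
		__address__ = "<your-pgbouncer-host>:9127",
	}]

	// Write a stable instance label; constants.hostname is the Alloy host.
	rule {
		target_label = "instance"
		replacement  = constants.hostname
	}

	rule {
		target_label = "pgbouncer_cluster"
		replacement  = "<your-pgbouncer-cluster-name>"
	}
}

prometheus.scrape "metrics_integrations_integrations_pgbouncer" {
	targets    = discovery.relabel.pgbouncer_host_1.output
	forward_to = [prometheus.remote_write.metrics_service.receiver]
	job_name   = "integrations/pgbouncer"
}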

Advanced logs snippets

darwin

river
local.file_match "logs_integrations_integrations_pgbouncer" {
	path_targets = [{
		__address__       = "localhost",
		__path__          = "<path-to-user-set-pgbouncer-log>/pgbouncer.log",
		instance          = "<your-instance-name>:9127",
		job               = "integrations/pgbouncer",
		pgbouncer_cluster = "<your-pgbouncer-cluster-name>",
	}]
}

loki.process "logs_integrations_integrations_pgbouncer" {
	forward_to = []

	stage.multiline {
		firstline     = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}"
		max_lines     = 0
		max_wait_time = "3s"
	}

	stage.regex {
		expression = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3} \\w+ \\[\\d+\\] (?P<level>LOG|ERROR|WARNING|INFO|DEBUG) .*"
	}

	stage.labels {
		values = {
			level = null,
		}
	}
}

loki.source.file "logs_integrations_integrations_pgbouncer" {
	targets    = local.file_match.logs_integrations_integrations_pgbouncer.targets
	forward_to = [loki.process.logs_integrations_integrations_pgbouncer.receiver]
}

To monitor your PgBouncer instance logs, you will use a combination of the following components:

  • local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:

    • __address__: The PgBouncer instance address.
    • __path__: The path to the log file.
    • instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this PgBouncer instance. Make sure this label value is the same for all telemetry data collected for this instance.
  • loki.process defines how to process logs before sending them to Loki.

  • loki.source.file sends logs to Loki.
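
The regex stage above assumes PgBouncer's default log line format. As a quick sanity check outside of Alloy, the same pattern can be exercised in Python; the timestamp, timezone, and pid in the sample line are made up for illustration:

```python
import re

# Same expression as the stage.regex block above, single-escaped for Python.
PATTERN = re.compile(
    r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3} \w+ \[\d+\] "
    r"(?P<level>LOG|ERROR|WARNING|INFO|DEBUG) .*"
)

# A format-conforming sample line (values are illustrative, not real logs).
sample = "2024-03-01 12:00:00.123 UTC [4242] LOG stats: 12 xacts/s, 40 queries/s"
match = PATTERN.match(sample)
print(match.group("level"))  # LOG
```

The named group becomes the level label attached to each log entry by the stage.labels block.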

linux

river
local.file_match "logs_integrations_integrations_pgbouncer" {
	path_targets = [{
		__address__       = "localhost",
		__path__          = "<path-to-user-set-pgbouncer-log>/pgbouncer.log",
		instance          = "<your-instance-name>:9127",
		job               = "integrations/pgbouncer",
		pgbouncer_cluster = "<your-pgbouncer-cluster-name>",
	}]
}

loki.process "logs_integrations_integrations_pgbouncer" {
	forward_to = []

	stage.multiline {
		firstline     = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}"
		max_lines     = 0
		max_wait_time = "3s"
	}

	stage.regex {
		expression = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3} \\w+ \\[\\d+\\] (?P<level>LOG|ERROR|WARNING|INFO|DEBUG) .*"
	}

	stage.labels {
		values = {
			level = null,
		}
	}
}

loki.source.file "logs_integrations_integrations_pgbouncer" {
	targets    = local.file_match.logs_integrations_integrations_pgbouncer.targets
	forward_to = [loki.process.logs_integrations_integrations_pgbouncer.receiver]
}

To monitor your PgBouncer instance logs, you will use a combination of the following components:

  • local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:

    • __address__: The PgBouncer instance address.
    • __path__: The path to the log file.
    • instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this PgBouncer instance. Make sure this label value is the same for all telemetry data collected for this instance.
  • loki.process defines how to process logs before sending them to Loki.

  • loki.source.file sends logs to Loki.

windows

river
local.file_match "logs_integrations_integrations_pgbouncer" {
	path_targets = [{
		__address__       = "localhost",
		__path__          = "<path-to-user-set-pgbouncer-log>/pgbouncer.log",
		instance          = "<your-instance-name>:9127",
		job               = "integrations/pgbouncer",
		pgbouncer_cluster = "<your-pgbouncer-cluster-name>",
	}]
}

loki.process "logs_integrations_integrations_pgbouncer" {
	forward_to = []

	stage.multiline {
		firstline     = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}"
		max_lines     = 0
		max_wait_time = "3s"
	}

	stage.regex {
		expression = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3} \\w+ \\[\\d+\\] (?P<level>LOG|ERROR|WARNING|INFO|DEBUG) .*"
	}

	stage.labels {
		values = {
			level = null,
		}
	}
}

loki.source.file "logs_integrations_integrations_pgbouncer" {
	targets    = local.file_match.logs_integrations_integrations_pgbouncer.targets
	forward_to = [loki.process.logs_integrations_integrations_pgbouncer.receiver]
}

To monitor your PgBouncer instance logs, you will use a combination of the following components:

  • local.file_match defines where to find the log file to be scraped. Change the following properties according to your environment:

    • __address__: The PgBouncer instance address.
    • __path__: The path to the log file.
    • instance label: constants.hostname sets the instance label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this PgBouncer instance. Make sure this label value is the same for all telemetry data collected for this instance.
  • loki.process defines how to process logs before sending them to Loki.

  • loki.source.file sends logs to Loki.

Grafana Agent configuration

Before you begin

PgBouncer

For the exporter to work with PgBouncer, append the line ignore_startup_parameters = extra_float_digits to the pgbouncer.ini config file, then restart the PgBouncer service for the change to take effect.

PgBouncer Prometheus exporter

This integration supports metrics and logs from a PgBouncer instance. It is configured to work with the PgBouncer Prometheus Exporter, which must be installed and configured separately from the Grafana Agent.

Build the exporter with make build, then run it with ./pgbouncer_exporter. To verify that the exporter is working, check the default metrics endpoint at http://localhost:9127/metrics.

PgBouncer logs

PgBouncer does not log to a file by default; enable this by adding a setting to the pgbouncer.ini file. An example configuration may look like this: logfile = /var/log/postgresql/pgbouncer.log.

The log file path customized in pgbouncer.ini must match the log path set in the agent configuration.

Install PgBouncer integration for Grafana Cloud

  1. In your Grafana Cloud stack, click Connections in the left-hand menu.
  2. Find PgBouncer and click its tile to open the integration.
  3. Review the prerequisites in the Configuration Details tab and set up Grafana Agent to send PgBouncer metrics and logs to your Grafana Cloud instance.
  4. Click Install to add this integration’s pre-built dashboards and alerts to your Grafana Cloud instance, then start monitoring your PgBouncer setup.

Post-install configuration for the PgBouncer integration

Enable the integration by adding the provided snippets to your agent configuration file.

For the best dashboard experience, and to see metrics and logs correlated, ensure the following:

  • The job and instance label values match for the PgBouncer integration and logs in the Agent configuration file.
  • The job label must be set to integrations/pgbouncer (already configured in the snippets).
  • The instance label must be set to a value that uniquely identifies your PgBouncer instance. It is set automatically by the config snippets.

Configuration snippets for Grafana Agent

Below metrics.configs.scrape_configs, insert the following lines and change the targets according to your environment:

yaml
    - job_name: integrations/pgbouncer
      metrics_path: /metrics
      static_configs:
      - targets: [<your-instance-name>:9127]
        labels:
          pgbouncer_cluster: <your-pgbouncer-cluster-name>

Below logs.configs.scrape_configs, insert the following lines according to your environment.

yaml
    - job_name: integrations/pgbouncer
      static_configs:
        - targets: [localhost]
          labels:
            job: integrations/pgbouncer
            __path__: <path-to-user-set-pgbouncer-log>/pgbouncer.log
            pgbouncer_cluster: <your-pgbouncer-cluster-name>
            instance: <your-instance-name>:9127
      pipeline_stages:
      - multiline:
          firstline: '\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}'
      - regex:
          expression: '\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3} \w+ \[\d+\] (?P<level>LOG|ERROR|WARNING|INFO|DEBUG) .*'
      - labels:
          level:

Full example configuration for Grafana Agent

Refer to the following Grafana Agent configuration for a complete example that contains all the snippets used for the PgBouncer integration. This example also includes metrics that are sent to monitor your Grafana Agent instance.

yaml
integrations:
  prometheus_remote_write:
  - basic_auth:
      password: <your_prom_pass>
      username: <your_prom_user>
    url: <your_prom_url>
  agent:
    enabled: true
    relabel_configs:
    - action: replace
      source_labels:
      - agent_hostname
      target_label: instance
    - action: replace
      target_label: job
      replacement: "integrations/agent-check"
    metric_relabel_configs:
    - action: keep
      regex: (prometheus_target_sync_length_seconds_sum|prometheus_target_scrapes_.*|prometheus_target_interval.*|prometheus_sd_discovered_targets|agent_build.*|agent_wal_samples_appended_total|process_start_time_seconds)
      source_labels:
      - __name__
  # Add here any snippet that belongs to the `integrations` section.
  # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
logs:
  configs:
  - clients:
    - basic_auth:
        password: <your_loki_pass>
        username: <your_loki_user>
      url: <your_loki_url>
    name: integrations
    positions:
      filename: /tmp/positions.yaml
    scrape_configs:
      # Add here any snippet that belongs to the `logs.configs.scrape_configs` section.
      # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
    - job_name: integrations/pgbouncer
      static_configs:
        - targets: [localhost]
          labels:
            job: integrations/pgbouncer
            __path__: <path-to-user-set-pgbouncer-log>/pgbouncer.log
            pgbouncer_cluster: <your-pgbouncer-cluster-name>
            instance: <your-instance-name>:9127
      pipeline_stages:
      - multiline:
          firstline: '\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}'
      - regex:
          expression: '\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3} \w+ \[\d+\] (?P<level>LOG|ERROR|WARNING|INFO|DEBUG) .*'
      - labels:
          level:
metrics:
  configs:
  - name: integrations
    remote_write:
    - basic_auth:
        password: <your_prom_pass>
        username: <your_prom_user>
      url: <your_prom_url>
    scrape_configs:
      # Add here any snippet that belongs to the `metrics.configs.scrape_configs` section.
      # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
    - job_name: integrations/pgbouncer
      metrics_path: /metrics
      static_configs:
      - targets: [<your-instance-name>:9127]
        labels:
          pgbouncer_cluster: <your-pgbouncer-cluster-name>
  global:
    scrape_interval: 60s
  wal_directory: /tmp/grafana-agent-wal

Dashboards

The PgBouncer integration installs the following dashboards in your Grafana Cloud instance to help monitor your system.

  • PgBouncer cluster overview
  • PgBouncer logs
  • PgBouncer overview

PgBouncer cluster overview

PgBouncer overview

PgBouncer overview (transaction)

Alerts

The PgBouncer integration includes the following useful alerts:

  • PGBouncerHighNumberClientWaitingConnections (Warning): May indicate a bottleneck in connection pooling where too many clients are waiting for available server connections.
  • PGBouncerHighClientWaitTime (Warning): Clients are experiencing significant delays, which could indicate issues with connection pool saturation or server performance.
  • PGBouncerHighServerConnectionSaturationWarning (Warning): PGBouncer is nearing user connection capacity.
  • PGBouncerHighServerConnectionSaturationCritical (Critical): PGBouncer is nearing critical levels of user connection capacity.

Metrics

The most important metrics provided by the PgBouncer integration, which are used on the pre-built dashboards and Prometheus alerts, are as follows:

  • pgbouncer_config_max_client_connections
  • pgbouncer_config_max_user_connections
  • pgbouncer_databases_current_connections
  • pgbouncer_databases_max_connections
  • pgbouncer_pools_client_active_connections
  • pgbouncer_pools_client_maxwait_seconds
  • pgbouncer_pools_client_waiting_connections
  • pgbouncer_pools_server_active_connections
  • pgbouncer_pools_server_idle_connections
  • pgbouncer_pools_server_login_connections
  • pgbouncer_pools_server_testing_connections
  • pgbouncer_pools_server_used_connections
  • pgbouncer_stats_queries_duration_seconds_total
  • pgbouncer_stats_queries_pooled_total
  • pgbouncer_stats_received_bytes_total
  • pgbouncer_stats_sent_bytes_total
  • pgbouncer_stats_server_in_transaction_seconds_total
  • pgbouncer_stats_sql_transactions_pooled_total
  • up
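
As an illustration of how these metrics combine (this is not the exact expression shipped with the alerts), server connection saturation can be approximated by dividing active server connections by the configured user connection limit:

promql
# Fraction of max_user_connections in use, per instance (illustrative).
sum by (instance) (pgbouncer_pools_server_active_connections)
  / on (instance)
  pgbouncer_config_max_user_connections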

Changelog

md
# 1.0.0 - March 2024

- Initial release

Cost

By connecting your PgBouncer instance to Grafana Cloud, you might incur charges. To view information on the number of active series that your Grafana Cloud account uses for metrics included in each Cloud tier, see Active series and DPM usage and Cloud tier pricing.