---
title: "Set up self-managed PostgreSQL | Database Observability documentation"
description: "Set up Database Observability for PostgreSQL using Grafana Alloy and send telemetry to Grafana Cloud."
---


# Set up self-managed PostgreSQL

Set up Database Observability with Grafana Cloud to collect telemetry from PostgreSQL using Grafana Alloy. You configure your database and Alloy to forward telemetry to Grafana Cloud.

## What you’ll achieve

In this article, you:

- Configure PostgreSQL for monitoring.
- Run Alloy with the public preview Database Observability components.
- Forward telemetry to Grafana Cloud.

## Before you begin

Review these requirements:

- PostgreSQL version 14.0 or later.

> Note
> 
> Connect Alloy directly to the database host. Avoid connecting Alloy to the database through a load balancer or connection pooler such as PgBouncer, because doing so limits Alloy’s ability to collect accurate telemetry.

## Set up the PostgreSQL database

Prepare PostgreSQL for monitoring and query introspection.

### Enable `pg_stat_statements`

Add `pg_stat_statements` to `shared_preload_libraries` in `postgresql.conf`. Restart PostgreSQL to apply the change.
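For example, in `postgresql.conf` (if other libraries are already preloaded, append `pg_stat_statements` to the comma-separated list):

```none
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
```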

Create the extension in each database you monitor:


```sql
-- repeat across all logical databases
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```

Verify the extension:


```sql
-- check across all logical databases
SELECT * FROM pg_extension WHERE extname = 'pg_stat_statements';
```

### Set `compute_query_id`

Set `compute_query_id` to `on` and verify the setting:


```sql
SELECT setting FROM pg_settings WHERE name = 'compute_query_id';
```

Expected result: Value is `on`.

This value can be set in the `postgresql.conf` config file:


```none
# postgresql.conf
compute_query_id = on
```
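Alternatively, if you prefer not to edit `postgresql.conf` directly, a superuser can set the value with `ALTER SYSTEM` and reload the configuration:

```sql
-- Set compute_query_id (run as superuser)
ALTER SYSTEM SET compute_query_id = on;

-- Reload configuration
SELECT pg_reload_conf();
```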

### Set `pg_stat_statements.track`

Set `pg_stat_statements.track` to `all` and verify the setting:


```sql
SHOW pg_stat_statements.track;
```

Expected result: Value is `all`.

This value can be set in the `postgresql.conf` config file:


```none
# postgresql.conf
pg_stat_statements.track = all
```

### Increase `track_activity_query_size`

Set `track_activity_query_size` to `4096` and verify the setting:


```sql
SHOW track_activity_query_size;
```

Expected result: `4kB` (4096).
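This value can be set in the `postgresql.conf` config file; note that changing `track_activity_query_size` requires a PostgreSQL restart to take effect:

```none
# postgresql.conf
track_activity_query_size = 4096
```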

### Create a monitoring user and grant required privileges

Create the `db-o11y` user and grant base privileges:


```sql
CREATE USER "db-o11y" WITH PASSWORD '<DB_O11Y_PASSWORD>';
GRANT pg_monitor TO "db-o11y";
GRANT pg_read_all_stats TO "db-o11y";
```

Replace `<DB_O11Y_PASSWORD>` with a secure password for the `db-o11y` PostgreSQL user.

Verify that the user has the correct privileges to query `pg_stat_statements`:


```sql
-- run with the `db-o11y` user
SELECT * FROM pg_stat_statements LIMIT 1;
```

### Disable tracking of monitoring user queries

Prevent tracking of queries executed by the monitoring user itself:


```sql
ALTER ROLE "db-o11y" SET pg_stat_statements.track = 'none';
```
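You can confirm that the per-role override is in place by inspecting `pg_roles`, for example:

```sql
-- expect pg_stat_statements.track=none in rolconfig
SELECT rolname, rolconfig FROM pg_roles WHERE rolname = 'db-o11y';
```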

### Grant object privileges for detailed data

To allow collecting schema details and table information, connect to each logical database and grant access to each schema.

For example, for a `payments` database:


```sql
-- switch to the 'payments' database
\c payments

-- grant permissions in the 'public' schema
GRANT USAGE ON SCHEMA public TO "db-o11y";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "db-o11y";

-- grant permissions in the 'tests' schema
GRANT USAGE ON SCHEMA tests TO "db-o11y";
GRANT SELECT ON ALL TABLES IN SCHEMA tests TO "db-o11y";
```
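To see which schemas a database contains, you can list the non-system schemas first, for example:

```sql
-- list user-defined schemas in the current database
SELECT schema_name
FROM information_schema.schemata
WHERE schema_name NOT IN ('pg_catalog', 'information_schema');
```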

Alternatively, if you’re unsure which specific schemas need access, use the predefined `pg_read_all_data` role, which grants `USAGE` on all schemas and `SELECT` on all tables:


```sql
GRANT pg_read_all_data TO "db-o11y";
```

## Run and configure Alloy

Run Alloy and add the Database Observability configuration.

### Run the latest Alloy version

Alloy `1.16.0` or later is required for Database Observability. Find the latest stable version on [Docker Hub](https://hub.docker.com/r/grafana/alloy/tags). To update, refer to the [Alloy release notes](https://github.com/grafana/alloy/releases).

### Add the PostgreSQL configuration blocks

Add these blocks to your Alloy configuration for each PostgreSQL instance, replacing `<DB_NAME>`. Create a `local.file` containing the Data Source Name (DSN) string, for example `"postgresql://<DB_USER>:<DB_PASSWORD>@<DB_HOST>:<DB_PORT>/<DB_DATABASE>?sslmode=require"`:


```alloy
local.file "postgres_secret_<DB_NAME>" {
  filename  = "/var/lib/alloy/postgres_secret_<DB_NAME>"
  is_secret = true
}

prometheus.exporter.postgres "postgres_<DB_NAME>" {
  data_source_names  = [local.file.postgres_secret_<DB_NAME>.content]
  enabled_collectors = ["stat_statements"]

  stat_statements {
    exclude_users      = ["db-o11y"]
    exclude_databases  = ["rdsadmin"]
  }

  autodiscovery {
    enabled = true

    // If running on AWS RDS, exclude the `rdsadmin` database
    database_denylist = ["rdsadmin"]
  }
}

database_observability.postgres "postgres_<DB_NAME>" {
  data_source_name  = local.file.postgres_secret_<DB_NAME>.content
  forward_to        = [loki.relabel.database_observability_postgres_<DB_NAME>.receiver]
  targets           = prometheus.exporter.postgres.postgres_<DB_NAME>.targets
  enable_collectors = ["query_details", "query_samples", "schema_details", "explain_plans"]
  exclude_users     = ["db-o11y"]
  exclude_databases = ["rdsadmin"]

  // OPTIONAL: provide additional information specific to the cloud provider
  // that hosts the database to enable certain infrastructure observability
  // features. See documentation of `database_observability.postgres` for
  // other cloud providers.
  cloud_provider {
    aws {
      arn = "<AWS_RDS_DB_ARN>"
    }
  }
}

loki.relabel "database_observability_postgres_<DB_NAME>" {
  forward_to = [loki.write.logs_service.receiver]

  // OPTIONAL: add any additional relabeling rules; must be consistent with rules in "discovery.relabel"
  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
  rule {
    target_label = "<CUSTOM_LABEL_1>"
    replacement  = "<CUSTOM_VALUE_1>"
  }
}

discovery.relabel "database_observability_postgres_<DB_NAME>" {
  targets = database_observability.postgres.postgres_<DB_NAME>.targets

  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }

  // OPTIONAL: add any additional relabeling rules; must be consistent with rules in "loki.relabel"
  // OPTIONAL: relabel `instance` to `dsn` before overwriting `instance`;
  // the `dsn` label is used in the integration with the knowledge graph
  rule {
    source_labels = ["instance"]
    target_label  = "dsn"
  }
  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
  rule {
    target_label = "<CUSTOM_LABEL_1>"
    replacement  = "<CUSTOM_VALUE_1>"
  }
}

prometheus.scrape "database_observability_postgres_<DB_NAME>" {
  targets    = discovery.relabel.database_observability_postgres_<DB_NAME>.output
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
```

Replace the placeholders:

- `DB_NAME`: Database name Alloy uses in component identifiers (appears in component names and secret filenames).
- `AWS_RDS_DB_ARN`: Amazon RDS database ARN for cloud provider integration (optional).
- `INSTANCE_LABEL`: Value that sets the `instance` label on logs and metrics (optional).
- `CUSTOM_LABEL_1`, `CUSTOM_VALUE_1`: Optional custom label key and value you attach to logs and metrics.
- Secret file content example: `"postgresql://DB_USER:DB_PASSWORD@DB_HOST:DB_PORT/DB_DATABASE?sslmode=require"`.
  
  - `DB_USER`: Database user Alloy uses to connect (for example, `db-o11y`).
  - `DB_PASSWORD`: Password for the database user.
  - `DB_HOST`: Hostname or IP address of the database.
  - `DB_PORT`: Database port number.
  - `DB_DATABASE`: Logical database name in the DSN (recommended: use `postgres`).

Find more about the options supported by the `database_observability.postgres` component in the [reference](/docs/grafana-cloud/send-data/alloy/reference/components/database_observability/database_observability.postgres/) documentation.

### Add processing of PostgreSQL logs (optional)

Add processing of PostgreSQL logs to gather detailed metrics about query and server errors.

The `logs` collector processes PostgreSQL logs received through the `logs_receiver` entry point and exports Prometheus metrics for query and server errors.

#### Configure `log_line_prefix`

The database must be configured with the following `log_line_prefix` value:


```sql
-- Set log format (run as superuser)
ALTER SYSTEM SET log_line_prefix = '%m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a:';

-- Reload configuration
SELECT pg_reload_conf();
```

Verify that the setting has been applied correctly:


```sql
SHOW log_line_prefix;
```

Expected value: `%m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a:`

Example log line:


```text
2026-02-19 11:36:.767 GMT:172.18.0.3(35058):user@dbname:[151]:326:40001:2026-02-19 11:35:11 GMT:24/106:0:6996f56f.97:[unknown]ERROR:  could not serialize access due to concurrent update
```

#### Add logs processing configuration

Add the logs file processing configuration block:


```alloy
loki.source.file "database_observability_postgres_<DB_NAME>" {
  targets = [{
    __path__ = "/var/log/postgresql/postgresql-*.log",
    job      = "postgres_logs",
  }]

  forward_to = [database_observability.postgres.postgres_<DB_NAME>.logs_receiver]
}
```

> Note
> 
> **Persistent storage:** The data path (`--storage.path`) must be persisted across restarts to maintain the `loki.source.file` positions file.

### Add Prometheus and Loki write configuration

Add the Prometheus remote write and Loki write configuration. From Grafana Cloud, open your stack to get the URLs and generate API tokens:


```alloy
prometheus.remote_write "metrics_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_METRICS_URL")

    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_METRICS_ID")
    }
  }
}

loki.write "logs_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_LOGS_URL")

    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_LOGS_ID")
    }
  }
}
```

Replace the placeholders:

- `GCLOUD_HOSTED_METRICS_URL`: Your Grafana Cloud Prometheus remote write URL.
- `GCLOUD_HOSTED_METRICS_ID`: Your Grafana Cloud Prometheus instance ID (username).
- `GCLOUD_HOSTED_LOGS_URL`: Your Grafana Cloud Loki write URL.
- `GCLOUD_HOSTED_LOGS_ID`: Your Grafana Cloud Loki instance ID (username).
- `GCLOUD_RW_API_KEY`: Grafana Cloud API token with write permissions.

## Run and configure Alloy with the Grafana Kubernetes Monitoring Helm chart

If you use the k8s-monitoring Helm chart, extend your `values.yaml` and set `databaseObservability.enabled` to `true` within the PostgreSQL integration:


```yaml
integrations:
  postgresql:
    instances:
      - name: <DB_NAME>
        jobLabel: integrations/db-o11y
        exporter:
          dataSource:
            host: <DB_HOST>
            auth:
              usernameKey: <DB_USERNAME_SECRET_KEY>
              passwordKey: <DB_PASSWORD_SECRET_KEY>
          autoDiscovery:
            enabled: true
          collectors:
            statStatements:
              enabled: true
        databaseObservability:
          enabled: true
        secret:
          create: false
          name: <DB_NAME>
          namespace: postgresql
        logs:
          enabled: true
          labelSelectors:
            app.kubernetes.io/instance: <DB_NAME>
```

Replace the placeholders:

- `DB_NAME`: Database name Alloy uses in component identifiers (appears in component names and secrets).
- `DB_HOST`: Hostname or IP address of the database.
- `DB_USERNAME_SECRET_KEY`: Kubernetes secret key containing database user.
- `DB_PASSWORD_SECRET_KEY`: Kubernetes secret key containing database password.

To see the full set of values, refer to the k8s-monitoring Helm chart [docs](https://github.com/grafana/k8s-monitoring-helm/blob/main/charts/k8s-monitoring/charts/feature-integrations/docs/integrations/postgresql.md#database-observability) or the [example configuration](https://github.com/grafana/k8s-monitoring-helm/tree/chore/metrics-enrichment-example/charts/k8s-monitoring/docs/examples/features/database-observability/postgresqlql).

## Next steps

For an overview of key concepts, refer to [Introduction to Database Observability](/docs/grafana-cloud/monitor-applications/database-observability/introduction/).
