---
title: "Set up AWS RDS PostgreSQL | Database Observability documentation"
description: "Set up Database Observability for AWS RDS PostgreSQL using Grafana Alloy and send telemetry to Grafana Cloud."
---

# Set up AWS RDS PostgreSQL

Set up Database Observability with Grafana Cloud to collect telemetry from AWS RDS PostgreSQL instances using Grafana Alloy. You configure your RDS instance and Alloy to forward telemetry to Grafana Cloud.

## What you’ll achieve

In this article, you:

- Configure RDS PostgreSQL parameter groups for monitoring.
- Create monitoring users with required privileges.
- Configure Alloy with the Database Observability components.
- Forward telemetry to Grafana Cloud.

## Before you begin

Review these requirements:

- RDS PostgreSQL 14.0 or later.
- Access to modify RDS parameter groups.
- Grafana Alloy deployed and accessible to your RDS instance.
- Network connectivity between Alloy and your RDS instance endpoint.

For general PostgreSQL setup concepts, refer to [Set up PostgreSQL](/docs/grafana-cloud/monitor-applications/database-observability/set-up/postgres/).

> Note
> 
> Alloy should connect directly to the database host. Avoid connecting Alloy to the database through a load balancer or connection pooler such as PgBouncer, because doing so limits Alloy’s ability to collect accurate telemetry.
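
To confirm that the Alloy host can reach the RDS endpoint directly, you can attempt a connection with `psql`. The endpoint and user below are placeholders; substitute your own values:

```shell
# Placeholder endpoint and user: verify direct connectivity from the Alloy host.
psql "host=<INSTANCE_ENDPOINT> port=5432 dbname=postgres user=db-o11y sslmode=require" \
  -c "SELECT version();"
```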

## Configure the DB parameter group

Enable `pg_stat_statements` and configure query tracking by adding parameters to your RDS PostgreSQL parameter group. These parameters require an instance restart to take effect.

### Required parameters


| Parameter                   | Value                | Notes            |
|-----------------------------|----------------------|------------------|
| `shared_preload_libraries`  | `pg_stat_statements` | Requires restart |
| `pg_stat_statements.track`  | `all`                | Requires restart |
| `track_activity_query_size` | `4096`               | Requires restart |

### Use the Amazon RDS console

1. Open the **RDS Console** and navigate to **Parameter groups**.
2. Create a new parameter group, or modify an existing one, using a family that matches your engine version (for example, `postgres14`).
3. Set the parameters listed above.
4. Apply the parameter group to your RDS instance.
5. Reboot the instance to apply changes.

For detailed console instructions, refer to [Working with parameter groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html) in the AWS documentation.
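
If you prefer the AWS CLI, the same association and reboot can be sketched as follows. The instance identifier and group name are placeholders; verify the flags against your AWS CLI version:

```shell
# Placeholder identifiers: attach the parameter group, then reboot so that
# static parameters such as shared_preload_libraries take effect.
aws rds modify-db-instance \
  --db-instance-identifier <INSTANCE_NAME> \
  --db-parameter-group-name <INSTANCE_NAME>-monitoring-params \
  --apply-immediately

aws rds reboot-db-instance \
  --db-instance-identifier <INSTANCE_NAME>
```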

### Use Terraform

Using Terraform with `aws_db_parameter_group`:


```hcl
resource "aws_db_parameter_group" "rds_postgres_monitoring" {
  name   = "<INSTANCE_NAME>-monitoring-params"
  family = "postgres14"

  parameter {
    name         = "shared_preload_libraries"
    value        = "pg_stat_statements"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "pg_stat_statements.track"
    value        = "all"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "track_activity_query_size"
    value        = "4096"
    apply_method = "pending-reboot"
  }
}
```

Replace *&lt;INSTANCE\_NAME&gt;* with your RDS instance name.

> Note
> 
> If you already have a parameter group with `rds.logical_replication` enabled, for example, for replication to other services, add the `pg_stat_statements` parameters to that existing group rather than creating a new one.

After applying the parameter group to your instance and restarting, enable the extension in each database you want to monitor:


```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```

Verify the extension is installed:


```sql
SELECT * FROM pg_stat_statements LIMIT 1;
```

## Create a monitoring user and grant required privileges

Connect to your RDS PostgreSQL instance as an administrator, then create the `db-o11y` monitoring user and grant base privileges:


```sql
CREATE USER "db-o11y" WITH PASSWORD '<DB_O11Y_PASSWORD>';
GRANT pg_monitor TO "db-o11y";
GRANT pg_read_all_stats TO "db-o11y";
```

Replace *&lt;DB\_O11Y\_PASSWORD&gt;* with a secure password for the `db-o11y` PostgreSQL user.

Verify that the user has the correct privileges to query `pg_stat_statements`:


```sql
-- run with the `db-o11y` user
SELECT * FROM pg_stat_statements LIMIT 1;
```

## Disable tracking of monitoring user queries

Prevent tracking of queries executed by the monitoring user itself:


```sql
ALTER ROLE "db-o11y" SET pg_stat_statements.track = 'none';
```

## Grant object privileges for detailed data

To allow collecting schema details and table information, connect to each logical database and grant access to each schema.

For example, for a `payments` database:


```sql
-- switch to the 'payments' database
\c payments

-- grant permissions in the 'public' schema
GRANT USAGE ON SCHEMA public TO "db-o11y";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "db-o11y";

-- grant permissions in the 'tests' schema
GRANT USAGE ON SCHEMA tests TO "db-o11y";
GRANT SELECT ON ALL TABLES IN SCHEMA tests TO "db-o11y";
```

Alternatively, if you’re unsure which schemas need access, grant the predefined `pg_read_all_data` role, which provides `USAGE` on all schemas and `SELECT` on all tables:


```sql
GRANT pg_read_all_data TO "db-o11y";
```

## Verify parameter group settings

Verify that the parameter settings were applied correctly after restarting:


```sql
SHOW pg_stat_statements.track;
```

Expected result: Value is `all`.


```sql
SHOW track_activity_query_size;
```

Expected result: Value is `4096`.

## Run and configure Alloy

Run Alloy and add the Database Observability configuration for your RDS instance.

### Run the latest Alloy version

Alloy `1.16.0` or later is required for Database Observability. Find the latest stable version on [Docker Hub](https://hub.docker.com/r/grafana/alloy/tags). To update, refer to the [Alloy release notes](https://github.com/grafana/alloy/releases).
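
As one way to run a recent version, you might start Alloy in Docker. The paths and image tag below are illustrative; check Docker Hub for the current tag:

```shell
# Illustrative only: run Alloy in Docker with a persistent data path,
# mounting the configuration and storage directories from the host.
docker run -d --name alloy \
  -v /etc/alloy:/etc/alloy \
  -v /var/lib/alloy:/var/lib/alloy \
  grafana/alloy:v1.16.0 \
    run --storage.path=/var/lib/alloy /etc/alloy/config.alloy
```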

### Add the RDS PostgreSQL configuration blocks

Add these blocks to Alloy for RDS PostgreSQL. Replace *&lt;DB\_NAME&gt;*. Create a `local.file` containing the Data Source Name (DSN) string, for example, `"postgresql://<DB_USER>:<DB_PASSWORD>@<INSTANCE_ENDPOINT>:<DB_PORT>/<DB_DATABASE>?sslmode=require"`:


```alloy
local.file "postgres_secret_<DB_NAME>" {
  filename  = "/var/lib/alloy/postgres_secret_<DB_NAME>"
  is_secret = true
}

prometheus.exporter.postgres "postgres_<DB_NAME>" {
  data_source_names  = [local.file.postgres_secret_<DB_NAME>.content]
  enabled_collectors = ["stat_statements"]

  stat_statements {
    exclude_users      = ["db-o11y", "rdsadmin"]
    exclude_databases  = ["rdsadmin"]
  }

  autodiscovery {
    enabled = true

    // Exclude the rdsadmin database on RDS
    database_denylist = ["rdsadmin"]
  }
}

database_observability.postgres "postgres_<DB_NAME>" {
  data_source_name  = local.file.postgres_secret_<DB_NAME>.content
  forward_to        = [loki.relabel.database_observability_postgres_<DB_NAME>.receiver]
  targets           = prometheus.exporter.postgres.postgres_<DB_NAME>.targets
  enable_collectors = ["query_details", "query_samples", "schema_details", "explain_plans"]
  exclude_users     = ["db-o11y", "rdsadmin"]
  exclude_databases = ["rdsadmin"]

  cloud_provider {
    aws {
      arn = "<AWS_RDS_INSTANCE_ARN>"
    }
  }
}

loki.relabel "database_observability_postgres_<DB_NAME>" {
  forward_to = [loki.write.logs_service.receiver]

  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

discovery.relabel "database_observability_postgres_<DB_NAME>" {
  targets = database_observability.postgres.postgres_<DB_NAME>.targets

  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }

  // OPTIONAL: copy the `instance` label into `dsn` before overwriting `instance`;
  // the `dsn` label is used in the integration with the knowledge graph
  rule {
    source_labels = ["instance"]
    target_label  = "dsn"
  }
  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

prometheus.scrape "database_observability_postgres_<DB_NAME>" {
  targets    = discovery.relabel.database_observability_postgres_<DB_NAME>.output
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
```

Replace the placeholders:

- `DB_NAME`: Database name Alloy uses in component identifiers (appears in component names and secret filenames).
- `AWS_RDS_INSTANCE_ARN`: Amazon RDS instance ARN for cloud provider integration.
- `INSTANCE_LABEL`: Value that sets the `instance` label on logs and metrics (optional).
- Secret file content example: `"postgresql://DB_USER:DB_PASSWORD@INSTANCE_ENDPOINT:DB_PORT/DB_DATABASE?sslmode=require"`.
  
  - `DB_USER`: Database user Alloy uses to connect (for example, `db-o11y`).
  - `DB_PASSWORD`: Password for the database user.
  - `INSTANCE_ENDPOINT`: RDS instance endpoint hostname.
  - `DB_PORT`: Database port number (default: `5432`).
  - `DB_DATABASE`: Logical database name in the DSN (recommended: `postgres`).

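As a sketch with placeholder values, the secret file content can be assembled like this (all names and hosts below are illustrative; substitute your own):

```shell
# All values are illustrative placeholders.
DB_USER="db-o11y"
DB_PASSWORD="example-password"
INSTANCE_ENDPOINT="mydb.abc123xyz.us-east-1.rds.amazonaws.com"
DB_PORT=5432
DB_DATABASE="postgres"

# Assemble the DSN that Alloy reads from the local.file secret.
# Note: URL-encode the password if it contains reserved characters.
DSN="postgresql://${DB_USER}:${DB_PASSWORD}@${INSTANCE_ENDPOINT}:${DB_PORT}/${DB_DATABASE}?sslmode=require"
echo "$DSN"
```
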
For more information about the options supported by the `database_observability.postgres` component, refer to the [reference](/docs/grafana-cloud/send-data/alloy/reference/components/database_observability/database_observability.postgres/) documentation.

### Add processing of PostgreSQL logs (optional)

Add processing of PostgreSQL logs to gather detailed metrics about query and server errors.

The `logs` collector processes PostgreSQL logs received through the `logs_receiver` entry point and exports Prometheus metrics for query and server errors.

#### Configure `log_line_prefix`

Configure `log_line_prefix` via RDS parameter groups:

1. Open the **RDS Console** and navigate to **Parameter groups**.
2. Create a new parameter group or modify an existing one.
3. Set `log_line_prefix` to `%m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a:`.
4. Apply the parameter group to your RDS instance.

> Note
> 
> Ensure [CloudWatch Logs export is enabled](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Procedural.UploadtoCloudWatch.html) for the PostgreSQL log in your RDS instance settings.
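
As a quick check, assuming the AWS CLI is configured, you can inspect which log types the instance currently exports to CloudWatch Logs (the instance name is a placeholder):

```shell
# Placeholder instance name: list the log types exported to CloudWatch Logs.
aws rds describe-db-instances \
  --db-instance-identifier <RDS_INSTANCE_NAME> \
  --query 'DBInstances[0].EnabledCloudwatchLogsExports'
```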

#### AWS credentials

The `otelcol.receiver.awscloudwatch` component requires AWS credentials. Configure them in one of the following ways:

- Environment variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`
- Docker: Mount `~/.aws` credentials or pass environment variables
- Kubernetes: Use [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) or Kubernetes secrets

Required IAM permissions:


```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "logs:FilterLogEvents",
      "logs:GetLogEvents",
      "logs:DescribeLogGroups",
      "logs:DescribeLogStreams"
    ],
    "Resource": "arn:aws:logs:*:*:log-group:/aws/rds/instance/*"
  }]
}
```

Scope the `Resource` element to the ARN(s) of your RDS log groups.

#### Add logs processing configuration

Add the logs file processing configuration block with [`otelcol.receiver.awscloudwatch`](/docs/grafana-cloud/send-data/alloy/reference/components/otelcol/otelcol.receiver.awscloudwatch/). Replace *&lt;AWS\_REGION&gt;*, *&lt;RDS\_INSTANCE\_NAME&gt;*, *&lt;INSTANCE\_LABEL&gt;*, and *&lt;DB\_NAME&gt;* (matching the database name used in the main configuration above):


```alloy
// Storage for CloudWatch state persistence
otelcol.storage.file "cloudwatch" {
  directory = "/var/lib/alloy/storage"
}

// Fetch logs from CloudWatch
otelcol.receiver.awscloudwatch "rds_logs" {
  region  = "<AWS_REGION>"
  storage = otelcol.storage.file.cloudwatch.handler

  logs {
    poll_interval = "1m"
    start_from = "2026-02-01T00:00:00Z" // Set this to the earliest point from which you want to collect logs

    groups {
      named {
        // Insert your PostgreSQL RDS CloudWatch log group here
        group_name = "/aws/rds/instance/<RDS_INSTANCE_NAME>/postgresql"
      }
    }
  }

  output {
    logs = [otelcol.processor.transform.rds_logs.input]
  }
}

// Set labels so the Loki exporter promotes them correctly
otelcol.processor.transform "rds_logs" {
  log_statements {
    context = "log"
    statements = [
      `set(attributes["instance"], "<INSTANCE_LABEL>")`,
      `set(attributes["job"], "integrations/db-o11y")`,
      `set(attributes["loki.format"], "raw")`,
      `set(attributes["loki.attribute.labels"], "instance,job")`,
    ]
  }
  output {
    logs = [otelcol.exporter.loki.rds_logs.input]
  }
}

// Convert OTLP to Loki format
otelcol.exporter.loki "rds_logs" {
  forward_to = [database_observability.postgres.postgres_<DB_NAME>.logs_receiver]
}
```

> Note
> 
> `otelcol.receiver.awscloudwatch` is an experimental component. To enable and use an experimental component, you must run Alloy with `--stability.level=experimental`.

> Note
> 
> **Persistent storage:** The Alloy data path (`--storage.path`) must persist across restarts to maintain the `otelcol.storage.file` checkpoint file.

#### Historical log processing

The `logs` collector only processes logs with timestamps after the collector’s start time. This prevents re-counting historical logs when the source component replays old entries.

**Behavior:**

- On startup: Skips logs with timestamps before the collector started
- Relies on the source component features to prevent duplicate log ingestion across restarts

### Add Prometheus and Loki write configuration

Add the Prometheus remote write and Loki write configuration. From Grafana Cloud, open your stack to get the URLs and generate API tokens:


```alloy
prometheus.remote_write "metrics_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_METRICS_URL")

    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_METRICS_ID")
    }
  }
}

loki.write "logs_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_LOGS_URL")

    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_LOGS_ID")
    }
  }
}
```

Replace the placeholders:

- `GCLOUD_HOSTED_METRICS_URL`: Your Grafana Cloud Prometheus remote write URL.
- `GCLOUD_HOSTED_METRICS_ID`: Your Grafana Cloud Prometheus instance ID (username).
- `GCLOUD_HOSTED_LOGS_URL`: Your Grafana Cloud Loki write URL.
- `GCLOUD_HOSTED_LOGS_ID`: Your Grafana Cloud Loki instance ID (username).
- `GCLOUD_RW_API_KEY`: Grafana Cloud API token with write permissions.
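
The `sys.env()` calls in the configuration above read these values from Alloy's environment. As a sketch with placeholder values, you might export them before starting Alloy:

```shell
# Placeholder values: export the Grafana Cloud credentials Alloy reads via sys.env().
export GCLOUD_HOSTED_METRICS_URL="https://prometheus-prod-01.grafana.net/api/prom/push"
export GCLOUD_HOSTED_METRICS_ID="123456"
export GCLOUD_HOSTED_LOGS_URL="https://logs-prod-01.grafana.net/loki/api/v1/push"
export GCLOUD_HOSTED_LOGS_ID="654321"
export GCLOUD_RW_API_KEY="<YOUR_GRAFANA_CLOUD_TOKEN>"
```
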

## Run and configure Alloy with the Grafana Kubernetes Monitoring Helm chart

If you use the k8s-monitoring Helm chart, extend your `values.yaml` and set `databaseObservability.enabled` to `true` within the PostgreSQL integration.


```yaml
integrations:
  collector: alloy-singleton
  postgresql:
    instances:
      - name: <INSTANCE_NAME>
        exporter:
          dataSource:
            host: <INSTANCE_ENDPOINT>
            port: 5432
            database: postgres
            sslmode: require
            auth:
              usernameKey: username
              passwordKey: password
          collectors:
            statStatements: true
        databaseObservability:
          enabled: true
          extraConfig: |
            exclude_databases = ["rdsadmin"]
            cloud_provider {
              aws {
                arn = "<AWS_RDS_INSTANCE_ARN>"
              }
            }
          collectors:
            queryDetails:
              enabled: true
            querySamples:
              enabled: true
            schemaDetails:
              enabled: true
            explainPlans:
              enabled: true
        secret:
          create: false
          name: <SECRET_NAME>
          namespace: <NAMESPACE>
        logs:
          enabled: true
          labelSelectors:
            app.kubernetes.io/instance: <INSTANCE_NAME>
```

Replace the placeholders:

- `INSTANCE_NAME`: Name for this database instance in Kubernetes.
- `INSTANCE_ENDPOINT`: RDS instance endpoint hostname.
- `AWS_RDS_INSTANCE_ARN`: Amazon RDS instance ARN for cloud provider integration.
- `SECRET_NAME`: Name of the Kubernetes secret containing database credentials.
- `NAMESPACE`: Kubernetes namespace where the secret exists.
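
The chart expects the secret to contain `username` and `password` keys. If you manage the secret manually rather than through External Secrets (described below), one way to create it (placeholder names and values) is:

```shell
# Placeholder names: create the credentials secret the Helm chart references.
kubectl create secret generic <SECRET_NAME> \
  --namespace <NAMESPACE> \
  --from-literal=username=db-o11y \
  --from-literal=password=<DB_O11Y_PASSWORD>
```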

To see the full set of values, check out the k8s-monitoring Helm chart [documentation](https://github.com/grafana/k8s-monitoring-helm/blob/main/charts/k8s-monitoring/charts/feature-integrations/docs/integrations/postgresql.md#database-observability) or the [example configuration](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/k8s-monitoring/docs/examples/features/database-observability/postgresql).

## Optional: Configure AWS Secrets Manager and Kubernetes

If you use AWS Secrets Manager with External Secrets Operator to manage database credentials, configure them as follows.

### Secret path convention

Store monitoring credentials in AWS Secrets Manager at a path following this convention:


```none
/kubernetes/rds/<INSTANCE_NAME>/monitoring
```

### PostgreSQL secret format

Store the secret as JSON with the following format:


```json
{
  "username": "db-o11y",
  "password": "<DB_O11Y_PASSWORD>",
  "engine": "postgres",
  "host": "<INSTANCE_ENDPOINT>.rds.amazonaws.com",
  "port": 5432,
  "dbInstanceIdentifier": "<INSTANCE_NAME>",
  "database": "postgres"
}
```

Replace the placeholders:

- `DB_O11Y_PASSWORD`: Password for the `db-o11y` PostgreSQL user.
- `INSTANCE_ENDPOINT`: RDS instance endpoint hostname.
- `INSTANCE_NAME`: RDS instance name.

### Create the secret via AWS CLI


```bash
aws secretsmanager create-secret \
  --name "/kubernetes/rds/<INSTANCE_NAME>/monitoring" \
  --description "Alloy monitoring credentials for RDS PostgreSQL instance" \
  --secret-string '{"username":"db-o11y","password":"<DB_O11Y_PASSWORD>","engine":"postgres","host":"<INSTANCE_ENDPOINT>.rds.amazonaws.com","port":5432,"dbInstanceIdentifier":"<INSTANCE_NAME>","database":"postgres"}'
```

### Kubernetes External Secrets configuration

Use the External Secrets Operator to sync the AWS secret into Kubernetes:


```yaml
---
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: <INSTANCE_NAME>-db-monitoring-secretstore
spec:
  provider:
    aws:
      service: SecretsManager
      region: <AWS_REGION>
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: <INSTANCE_NAME>-db-monitoring-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: <INSTANCE_NAME>-db-monitoring-secretstore
  dataFrom:
    - extract:
        conversionStrategy: Default
        decodingStrategy: None
        key: /kubernetes/rds/<INSTANCE_NAME>/monitoring
        metadataPolicy: None
        version: AWSCURRENT
```

Replace the placeholders:

- `INSTANCE_NAME`: RDS instance name.
- `AWS_REGION`: AWS region where the secret is stored.

## Next steps

- [Introduction to Database Observability](/docs/grafana-cloud/monitor-applications/database-observability/introduction/)
- [Troubleshoot](/docs/grafana-cloud/monitor-applications/database-observability/troubleshoot/)
