Set up AWS Aurora PostgreSQL

Note

Database Observability is currently in public preview. Grafana Labs offers limited support, and breaking changes might occur prior to the feature being made generally available.

Set up Database Observability with Grafana Cloud to collect telemetry from AWS Aurora PostgreSQL clusters using Grafana Alloy. You configure your Aurora cluster and Alloy to forward telemetry to Grafana Cloud.

What you’ll achieve

In this article, you:

  • Configure Aurora PostgreSQL cluster parameter groups for monitoring.
  • Create monitoring users with required privileges.
  • Configure Alloy with the Database Observability components.
  • Forward telemetry to Grafana Cloud.

Before you begin

Review these requirements:

  • Aurora PostgreSQL 14.0 or later.
  • Access to modify Aurora cluster parameter groups.
  • Grafana Alloy deployed and accessible to your Aurora cluster.
  • Network connectivity between Alloy and your Aurora cluster endpoint.

For general PostgreSQL setup concepts, refer to Set up PostgreSQL.

Configure the DB cluster parameter group

Enable pg_stat_statements and configure query tracking by adding parameters to your Aurora PostgreSQL cluster parameter group. These parameters require a cluster restart to take effect.

Required parameters

Parameter                   Value               Notes
shared_preload_libraries    pg_stat_statements  Requires restart
pg_stat_statements.track    all                 Requires restart
track_activity_query_size   4096                Requires restart

Using the AWS Console

  1. Open the RDS Console and navigate to Parameter groups.
  2. Create a new cluster parameter group or modify an existing one with family aurora-postgresql14.
  3. Set the parameters listed above.
  4. Apply the parameter group to your Aurora cluster.
  5. Reboot the cluster to apply changes.

For detailed console instructions, refer to Working with parameter groups in the AWS documentation.
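
If you prefer the AWS CLI, the following sketch performs the same steps. The parameter group name matches the Terraform examples below, and <CLUSTER_NAME> and <WRITER_INSTANCE_ID> are placeholders for your cluster and writer instance identifiers.

Bash
# Create the cluster parameter group and set the monitoring parameters.
aws rds create-db-cluster-parameter-group \
  --db-cluster-parameter-group-name "<CLUSTER_NAME>-parameter-group" \
  --db-parameter-group-family aurora-postgresql14 \
  --description "Parameter group with pg_stat_statements for monitoring"

aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name "<CLUSTER_NAME>-parameter-group" \
  --parameters \
    "ParameterName=shared_preload_libraries,ParameterValue=pg_stat_statements,ApplyMethod=pending-reboot" \
    "ParameterName=pg_stat_statements.track,ParameterValue=all,ApplyMethod=pending-reboot" \
    "ParameterName=track_activity_query_size,ParameterValue=4096,ApplyMethod=pending-reboot"

# Attach the parameter group to the cluster, then reboot the writer instance
# so the pending-reboot parameters take effect.
aws rds modify-db-cluster \
  --db-cluster-identifier "<CLUSTER_NAME>" \
  --db-cluster-parameter-group-name "<CLUSTER_NAME>-parameter-group" \
  --apply-immediately

aws rds reboot-db-instance --db-instance-identifier "<WRITER_INSTANCE_ID>"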

Using Terraform

If you use the terraform-aws-modules/rds-aurora/aws module, add the following inputs:

HCL
create_db_cluster_parameter_group      = true
db_cluster_parameter_group_family      = "aurora-postgresql14"
db_cluster_parameter_group_name        = "<CLUSTER_NAME>-parameter-group"
db_cluster_parameter_group_description = "Parameter group with pg_stat_statements for monitoring"
db_cluster_parameter_group_parameters = [
  {
    name         = "shared_preload_libraries"
    value        = "pg_stat_statements"
    apply_method = "pending-reboot"
  },
  {
    name         = "pg_stat_statements.track"
    value        = "all"
    apply_method = "pending-reboot"
  },
  {
    name         = "track_activity_query_size"
    value        = "4096"
    apply_method = "pending-reboot"
  },
]

Or using a standalone aws_rds_cluster_parameter_group resource:

HCL
resource "aws_rds_cluster_parameter_group" "aurora_postgres_monitoring" {
  name        = "<CLUSTER_NAME>-parameter-group"
  family      = "aurora-postgresql14"
  description = "Parameter group with pg_stat_statements for monitoring"

  parameter {
    name         = "shared_preload_libraries"
    value        = "pg_stat_statements"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "pg_stat_statements.track"
    value        = "all"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "track_activity_query_size"
    value        = "4096"
    apply_method = "pending-reboot"
  }
}

Replace <CLUSTER_NAME> with your Aurora cluster name.

Note

If you already have a parameter group with rds.logical_replication enabled, for example, for replication to other services, add the pg_stat_statements parameters to that existing group rather than creating a new one.

After applying the parameter group to your cluster and restarting, enable the extension in each database you want to monitor:

SQL
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

Verify the extension is installed:

SQL
SELECT * FROM pg_stat_statements LIMIT 1;
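
If you have several logical databases, a small psql loop run as an admin user can enable and verify the extension in each of them. This is a sketch: the connection values are placeholders, and the rdsadmin database is skipped because it isn't accessible on Aurora.

Bash
# Connection settings for an admin user; adjust to your environment.
HOST="<CLUSTER_ENDPOINT>"
ADMIN_USER="<ADMIN_USER>"
export PGPASSWORD='<ADMIN_PASSWORD>'

# List logical databases, excluding templates and rdsadmin.
DBS=$(psql "host=$HOST port=5432 dbname=postgres user=$ADMIN_USER sslmode=require" \
  -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate AND datname <> 'rdsadmin';")

# Enable the extension and verify it responds in each database.
for db in $DBS; do
  psql "host=$HOST port=5432 dbname=$db user=$ADMIN_USER sslmode=require" \
    -c 'CREATE EXTENSION IF NOT EXISTS pg_stat_statements;' \
    -c 'SELECT count(*) FROM pg_stat_statements;'
done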

Create a monitoring user and grant required privileges

Connect to your Aurora PostgreSQL cluster as an admin user and create the monitoring user:

SQL
CREATE USER "db-o11y" WITH PASSWORD '<DB_O11Y_PASSWORD>';
GRANT pg_monitor TO "db-o11y";
GRANT pg_read_all_stats TO "db-o11y";

Replace <DB_O11Y_PASSWORD> with a secure password for the db-o11y PostgreSQL user.

Disable tracking of monitoring user queries

Prevent tracking of queries executed by the monitoring user itself:

SQL
ALTER ROLE "db-o11y" SET pg_stat_statements.track = 'none';

Grant object privileges for detailed data

To allow collecting schema details and table information, connect to each logical database and grant access to each schema.

For example, for a payments database:

SQL
-- switch to the 'payments' database
\c payments

-- grant permissions in the 'public' schema
GRANT USAGE ON SCHEMA public TO "db-o11y";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "db-o11y";

Alternatively, if you’re unsure which specific schemas need access, use the predefined role to grant USAGE and SELECT access to all objects:

SQL
GRANT pg_read_all_data TO "db-o11y";
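
As a sketch, the same grants can be applied from the command line by connecting to the target database with psql; the payments database and the admin credentials are placeholders.

Bash
# Grant read access for the monitoring user in the 'payments' database.
export PGPASSWORD='<ADMIN_PASSWORD>'
psql "host=<CLUSTER_ENDPOINT> port=5432 dbname=payments user=<ADMIN_USER> sslmode=require" \
  -c 'GRANT USAGE ON SCHEMA public TO "db-o11y";' \
  -c 'GRANT SELECT ON ALL TABLES IN SCHEMA public TO "db-o11y";'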

Verify user privileges

Verify that the user can query pg_stat_statements:

SQL
-- run with the `db-o11y` user
SELECT * FROM pg_stat_statements LIMIT 1;

If this query fails with “permission denied”, ensure the GRANT statements were executed correctly.
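
To run this check end to end, connect as the monitoring user from the host where Alloy runs; this also confirms network connectivity and the sslmode=require setting. The instance endpoint below is a placeholder.

Bash
# Connect as db-o11y and confirm pg_stat_statements is readable.
PGPASSWORD='<DB_O11Y_PASSWORD>' psql \
  "host=<INSTANCE_ENDPOINT> port=5432 dbname=postgres user=db-o11y sslmode=require" \
  -c 'SELECT count(*) FROM pg_stat_statements;'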

Verify parameter group settings

Verify that the parameter group settings were applied correctly after restarting the cluster:

SQL
SHOW pg_stat_statements.track;

Expected result: Value is all.

SQL
SHOW track_activity_query_size;

Expected result: Value is 4096.
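
You can also confirm the values stored in the parameter group itself with the AWS CLI; the group name is a placeholder matching the earlier examples.

Bash
# List the three monitoring parameters and their values from the cluster parameter group.
aws rds describe-db-cluster-parameters \
  --db-cluster-parameter-group-name "<CLUSTER_NAME>-parameter-group" \
  --query "Parameters[?ParameterName=='shared_preload_libraries' || ParameterName=='pg_stat_statements.track' || ParameterName=='track_activity_query_size'].[ParameterName,ParameterValue,ApplyMethod]" \
  --output table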

Run and configure Alloy

Run Alloy and add the Database Observability configuration for your Aurora cluster.

Run the latest Alloy version

Run Alloy version 1.12.0 or later with the --stability.level=public-preview flag for the database_observability components. Find the latest stable version on Docker Hub.
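
As an example, the following sketch runs the Alloy container with the required flag; the mounted paths, port, and image tag are illustrative and should match your own deployment.

Bash
# Run Alloy in Docker with the public-preview stability level enabled.
docker run -d --name alloy \
  -v "$(pwd)/config.alloy:/etc/alloy/config.alloy" \
  -v alloy-data:/var/lib/alloy \
  -p 12345:12345 \
  grafana/alloy:latest \
    run --stability.level=public-preview \
    --server.http.listen-addr=0.0.0.0:12345 \
    --storage.path=/var/lib/alloy/data \
    /etc/alloy/config.alloy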

Add the Aurora PostgreSQL configuration blocks

Note: If you use an Aurora primary/replica cluster setup, you must configure Grafana Alloy to connect to each instance endpoint individually, not to the cluster endpoint. This ensures that metrics and logs are correctly correlated with each node and that no data is missed during failovers or topology changes.

Add these blocks to your Alloy configuration for Aurora PostgreSQL and replace <DB_NAME>. Create a local.file containing the Data Source Name (DSN) string, for example "postgresql://<DB_USER>:<DB_PASSWORD>@<INSTANCE_ENDPOINT>:<DB_PORT>/<DB_DATABASE>?sslmode=require":

Alloy
local.file "postgres_secret_<DB_NAME>" {
  filename  = "/var/lib/alloy/postgres_secret_<DB_NAME>"
  is_secret = true
}

prometheus.exporter.postgres "postgres_<DB_NAME>" {
  data_source_names  = [local.file.postgres_secret_<DB_NAME>.content]
  enabled_collectors = ["stat_statements"]

  autodiscovery {
    enabled = true

    // Exclude the rdsadmin database on Aurora
    database_denylist = ["rdsadmin"]
  }
}

database_observability.postgres "postgres_<DB_NAME>" {
  data_source_name  = local.file.postgres_secret_<DB_NAME>.content
  forward_to        = [loki.relabel.database_observability_postgres_<DB_NAME>.receiver]
  targets           = prometheus.exporter.postgres.postgres_<DB_NAME>.targets
  enable_collectors = ["query_details", "query_samples", "schema_details", "explain_plans"]

  cloud_provider {
    aws {
      arn = "<AWS_AURORA_INSTANCE_ARN>"
    }
  }
}

loki.relabel "database_observability_postgres_<DB_NAME>" {
  forward_to = [loki.write.logs_service.receiver]

  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

discovery.relabel "database_observability_postgres_<DB_NAME>" {
  targets = database_observability.postgres.postgres_<DB_NAME>.targets

  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }

  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

prometheus.scrape "database_observability_postgres_<DB_NAME>" {
  targets    = discovery.relabel.database_observability_postgres_<DB_NAME>.output
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

Replace the placeholders:

  • DB_NAME: Database name Alloy uses in component identifiers (appears in component names and secret filenames).
  • AWS_AURORA_INSTANCE_ARN: The specific AWS Aurora instance ARN for cloud provider integration. Do not use the Cluster ARN.
  • INSTANCE_LABEL: Value that sets the instance label on logs and metrics (optional).
  • Secret file content example: "postgresql://DB_USER:DB_PASSWORD@INSTANCE_ENDPOINT:DB_PORT/DB_DATABASE?sslmode=require" (a creation sketch follows this list).
    • DB_USER: Database user Alloy uses to connect (e.g. db-o11y).
    • DB_PASSWORD: Password for the database user.
    • INSTANCE_ENDPOINT: The specific instance endpoint. Do not use the Cluster Endpoint here; doing so will break metric correlation during failovers.
    • DB_PORT: Database port number (default: 5432).
    • DB_DATABASE: Logical database name in the DSN (postgres is recommended).
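
A minimal sketch for creating the secret file read by the local.file component, assuming Alloy runs directly on the host and can read /var/lib/alloy; all values are placeholders.

Bash
# Write the DSN to the path referenced by local.file, readable only by the Alloy user.
install -d -m 700 /var/lib/alloy
umask 077
printf '%s' 'postgresql://db-o11y:<DB_O11Y_PASSWORD>@<INSTANCE_ENDPOINT>:5432/postgres?sslmode=require' \
  > /var/lib/alloy/postgres_secret_<DB_NAME>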

Add Prometheus and Loki write configuration

Add the Prometheus remote write and Loki write configuration. From Grafana Cloud, open your stack to get the URLs and generate API tokens:

Alloy
prometheus.remote_write "metrics_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_METRICS_URL")

    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_METRICS_ID")
    }
  }
}

loki.write "logs_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_LOGS_URL")

    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_LOGS_ID")
    }
  }
}

Replace the placeholders:

  • GCLOUD_HOSTED_METRICS_URL: Your Grafana Cloud Prometheus remote write URL.
  • GCLOUD_HOSTED_METRICS_ID: Your Grafana Cloud Prometheus instance ID (username).
  • GCLOUD_HOSTED_LOGS_URL: Your Grafana Cloud Loki write URL.
  • GCLOUD_HOSTED_LOGS_ID: Your Grafana Cloud Loki instance ID (username).
  • GCLOUD_RW_API_KEY: Grafana Cloud API token with write permissions.
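
The sys.env() calls above read these values from environment variables, so export them in the environment that starts Alloy, for example in a systemd unit or shell session. A sketch with placeholder values:

Bash
# Provide Grafana Cloud credentials to Alloy through the environment, then start it.
export GCLOUD_HOSTED_METRICS_URL="<GCLOUD_HOSTED_METRICS_URL>"
export GCLOUD_HOSTED_METRICS_ID="<GCLOUD_HOSTED_METRICS_ID>"
export GCLOUD_HOSTED_LOGS_URL="<GCLOUD_HOSTED_LOGS_URL>"
export GCLOUD_HOSTED_LOGS_ID="<GCLOUD_HOSTED_LOGS_ID>"
export GCLOUD_RW_API_KEY="<GCLOUD_RW_API_KEY>"

alloy run --stability.level=public-preview /etc/alloy/config.alloy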

Run and configure Alloy with the Grafana Kubernetes Monitoring Helm chart

If you use the k8s-monitoring Helm chart, extend your values.yaml and set databaseObservability.enabled to true within the PostgreSQL integration.

YAML
integrations:
  collector: alloy-singleton
  postgresql:
    instances:
      - name: <INSTANCE_NAME>
        exporter:
          dataSource:
            host: <INSTANCE_ENDPOINT> # Must be specific instance endpoint
            port: 5432
            database: postgres
            sslmode: require
            auth:
              usernameKey: username
              passwordKey: password
          collectors:
            statStatements: true
        databaseObservability:
          enabled: true
          extraConfig: |
            cloud_provider {
              aws {
                arn = "<AWS_AURORA_INSTANCE_ARN>"
              }
            }
          collectors:
            queryDetails:
              enabled: true
            querySamples:
              enabled: true
            schemaDetails:
              enabled: true
            explainPlans:
              enabled: true
        secret:
          create: false
          name: <SECRET_NAME>
          namespace: <NAMESPACE>
        logs:
          enabled: true
          labelSelectors:
            app.kubernetes.io/instance: <INSTANCE_NAME>

Replace the placeholders:

  • INSTANCE_NAME: Name for this database instance in Kubernetes.
  • INSTANCE_ENDPOINT: The specific instance endpoint. Do not use the Cluster Endpoint here; doing so will break metric correlation during failovers.
  • AWS_AURORA_INSTANCE_ARN: The specific AWS Aurora instance ARN.
  • SECRET_NAME: Name of the Kubernetes secret containing database credentials.
  • NAMESPACE: Kubernetes namespace where the secret exists.

To see the full set of values, refer to the k8s-monitoring Helm chart documentation or the example configuration.
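
As a sketch, apply the extended values file with Helm; the release name and namespace are examples.

Bash
# Install or upgrade the k8s-monitoring chart with the extended values file.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install grafana-k8s-monitoring grafana/k8s-monitoring \
  --namespace monitoring --create-namespace \
  -f values.yaml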

Optional: Configure AWS Secrets Manager and Kubernetes

If you use AWS Secrets Manager with External Secrets Operator to manage database credentials, configure them as follows.

Secret path convention

Store monitoring credentials in AWS Secrets Manager at a path following this convention:

/kubernetes/rds/<CLUSTER_NAME>/monitoring

PostgreSQL secret format

Store the secret as JSON with the following format:

JSON
{
  "username": "db-o11y",
  "password": "<DB_O11Y_PASSWORD>",
  "engine": "postgres",
  "host": "<INSTANCE_ENDPOINT>.rds.amazonaws.com",
  "port": 5432,
  "dbClusterIdentifier": "<CLUSTER_NAME>",
  "database": "postgres"
}

Replace the placeholders:

  • DB_O11Y_PASSWORD: Password for the db-o11y PostgreSQL user.
  • INSTANCE_ENDPOINT: The specific instance endpoint. Do not use the Cluster Endpoint here; doing so will break metric correlation during failovers.
  • CLUSTER_NAME: Aurora cluster name.

Create the secret via AWS CLI

Bash
aws secretsmanager create-secret \
  --name "/kubernetes/rds/<CLUSTER_NAME>/monitoring" \
  --description "Alloy monitoring credentials for Aurora PostgreSQL cluster" \
  --secret-string '{"username":"db-o11y","password":"<DB_O11Y_PASSWORD>","engine":"postgres","host":"<INSTANCE_ENDPOINT>.rds.amazonaws.com","port":5432,"dbClusterIdentifier":"<CLUSTER_NAME>","database":"postgres"}'

Kubernetes External Secrets configuration

Use the External Secrets Operator to sync the AWS secret into Kubernetes:

YAML
---
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: <CLUSTER_NAME>-db-monitoring-secretstore
spec:
  provider:
    aws:
      service: SecretsManager
      region: <AWS_REGION>
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: <CLUSTER_NAME>-db-monitoring-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: <CLUSTER_NAME>-db-monitoring-secretstore
  dataFrom:
    - extract:
        conversionStrategy: Default
        decodingStrategy: None
        key: /kubernetes/rds/<CLUSTER_NAME>/monitoring
        metadataPolicy: None
        version: AWSCURRENT

Replace the placeholders:

  • CLUSTER_NAME: Aurora cluster name.
  • AWS_REGION: AWS region where the secret is stored.
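
After you apply the manifests, the External Secrets Operator creates a Kubernetes secret named after the ExternalSecret, which you can reference as <SECRET_NAME> in the Helm values above. A quick check, with the manifest file name as an example:

Bash
# Apply the SecretStore and ExternalSecret, then confirm the synced secret exists.
kubectl apply -n <NAMESPACE> -f db-monitoring-external-secret.yaml
kubectl get externalsecret -n <NAMESPACE> <CLUSTER_NAME>-db-monitoring-secret
kubectl get secret -n <NAMESPACE> <CLUSTER_NAME>-db-monitoring-secret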

Next steps