Set up AWS Aurora PostgreSQL
Set up Database Observability with Grafana Cloud to collect telemetry from AWS Aurora PostgreSQL clusters using Grafana Alloy. You configure your Aurora cluster and Alloy to forward telemetry to Grafana Cloud.
What you’ll achieve
In this article, you:
- Configure Aurora PostgreSQL cluster parameter groups for monitoring.
- Create monitoring users with required privileges.
- Configure Alloy with the Database Observability components.
- Forward telemetry to Grafana Cloud.
Before you begin
Review these requirements:
- Aurora PostgreSQL 14.0 or later.
- Access to modify Aurora cluster parameter groups.
- Grafana Alloy deployed and accessible to your Aurora cluster.
- Network connectivity between Alloy and your Aurora cluster endpoint.
For general PostgreSQL setup concepts, refer to Set up PostgreSQL.
Configure the DB cluster parameter group
Enable pg_stat_statements and configure query tracking by adding parameters to your Aurora PostgreSQL cluster parameter group. These parameters require a cluster restart to take effect.
Required parameters

Set the following parameters. Each uses the `pending-reboot` apply method:

| Parameter | Value |
| --- | --- |
| `shared_preload_libraries` | `pg_stat_statements` |
| `pg_stat_statements.track` | `all` |
| `track_activity_query_size` | `4096` |
Use the Amazon RDS console
- Open the RDS Console and navigate to Parameter groups.
- Create a new cluster parameter group or modify an existing one with family `aurora-postgresql14`.
- Set the parameters listed above.
- Apply the parameter group to your Aurora cluster.
- Reboot the cluster to apply changes.
For detailed console instructions, refer to Working with parameter groups in the AWS documentation.
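As an alternative to the console, you can create and populate the same group with the AWS CLI. The group name below is an example; substitute your own:

```shell
# Create the cluster parameter group
aws rds create-db-cluster-parameter-group \
  --db-cluster-parameter-group-name my-cluster-parameter-group \
  --db-parameter-group-family aurora-postgresql14 \
  --description "Parameter group with pg_stat_statements for monitoring"

# Set the three monitoring parameters, applied at the next reboot
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name my-cluster-parameter-group \
  --parameters \
    "ParameterName=shared_preload_libraries,ParameterValue=pg_stat_statements,ApplyMethod=pending-reboot" \
    "ParameterName=pg_stat_statements.track,ParameterValue=all,ApplyMethod=pending-reboot" \
    "ParameterName=track_activity_query_size,ParameterValue=4096,ApplyMethod=pending-reboot"
```

You still need to attach the group to the cluster and reboot, as in the console steps.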
Use Terraform
Using Terraform with the `terraform-aws-modules/rds-aurora/aws` module:

```hcl
create_db_cluster_parameter_group      = true
db_cluster_parameter_group_family      = "aurora-postgresql14"
db_cluster_parameter_group_name        = "<CLUSTER_NAME>-parameter-group"
db_cluster_parameter_group_description = "Parameter group with pg_stat_statements for monitoring"
db_cluster_parameter_group_parameters = [
  {
    name         = "shared_preload_libraries"
    value        = "pg_stat_statements"
    apply_method = "pending-reboot"
  },
  {
    name         = "pg_stat_statements.track"
    value        = "all"
    apply_method = "pending-reboot"
  },
  {
    name         = "track_activity_query_size"
    value        = "4096"
    apply_method = "pending-reboot"
  },
]
```

Or, using a standalone `aws_rds_cluster_parameter_group` resource:
```hcl
resource "aws_rds_cluster_parameter_group" "aurora_postgres_monitoring" {
  name        = "<CLUSTER_NAME>-parameter-group"
  family      = "aurora-postgresql14"
  description = "Parameter group with pg_stat_statements for monitoring"

  parameter {
    name         = "shared_preload_libraries"
    value        = "pg_stat_statements"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "pg_stat_statements.track"
    value        = "all"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "track_activity_query_size"
    value        = "4096"
    apply_method = "pending-reboot"
  }
}
```

Replace `<CLUSTER_NAME>` with your Aurora cluster name.
Note
If you already have a parameter group with `rds.logical_replication` enabled, for example, for replication to other services, add the `pg_stat_statements` parameters to that existing group rather than creating a new one.
After applying the parameter group to your cluster and restarting, enable the extension in each database you want to monitor:
```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```

Verify the extension is installed:

```sql
SELECT * FROM pg_stat_statements LIMIT 1;
```

Create a monitoring user and grant required privileges
Connect to your Aurora PostgreSQL cluster as an administrator and create the monitoring user:
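The exact statements are not shown above; a minimal version, using the `db-o11y` username referenced later in this guide and the built-in `pg_monitor` role, might look like this:

```sql
-- Name and password are examples; choose your own credentials
CREATE USER "db-o11y" WITH PASSWORD '<DB_O11Y_PASSWORD>';

-- pg_monitor grants read access to statistics views, including pg_stat_statements
GRANT pg_monitor TO "db-o11y";
```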
Disable tracking of monitoring user queries
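One way to keep the collector's own queries out of `pg_stat_statements`, assuming the `db-o11y` user from the previous step, is a per-role setting. On Aurora this requires a role with sufficient privileges, such as `rds_superuser`:

```sql
-- Stop recording statements issued by the monitoring user itself
ALTER ROLE "db-o11y" SET pg_stat_statements.track = 'none';
```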
Grant object privileges for detailed data
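The exact grants depend on which schemas you monitor. A per-schema example for `public`, assuming the `db-o11y` user, is:

```sql
-- Repeat for each schema and database you want detailed data from
GRANT USAGE ON SCHEMA public TO "db-o11y";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "db-o11y";

-- Cover tables created in the future as well
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO "db-o11y";
```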
Verify parameter group settings
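After the reboot, you can confirm the cluster picked up the parameter group values from any session:

```sql
SHOW shared_preload_libraries;   -- expect pg_stat_statements, possibly among others
SHOW pg_stat_statements.track;   -- expect all
SHOW track_activity_query_size;  -- expect 4096 (may display as 4kB)
```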
Run and configure Alloy
Run Alloy and add the Database Observability configuration for your Aurora cluster.
Run the latest Alloy version
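Alloy can run as a system service or a container. A minimal container invocation, assuming your configuration lives at `/etc/alloy/config.alloy` (paths and flags here are an example, not the only supported layout):

```shell
docker run -d --name alloy \
  -v /etc/alloy:/etc/alloy \
  -v /var/lib/alloy:/var/lib/alloy \
  grafana/alloy:latest \
    run /etc/alloy/config.alloy --storage.path=/var/lib/alloy/data
```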
Add the Aurora PostgreSQL configuration blocks
Note
If you are using an Aurora primary/replica cluster setup, you must configure Grafana Alloy to connect to each instance endpoint individually, not the cluster endpoint. This ensures metrics and logs are correctly correlated with each node, and data is not missed during role changes or topology changes.
Add these blocks to your Alloy configuration for Aurora PostgreSQL, replacing `<DB_NAME>` with your database name. Create a `local.file` that holds the Data Source Name (DSN) string, for example `"postgresql://<DB_USER>:<DB_PASSWORD>@<INSTANCE_ENDPOINT>:<DB_PORT>/<DB_DATABASE>?sslmode=require"`:
```alloy
local.file "postgres_secret_<DB_NAME>" {
  filename  = "/var/lib/alloy/postgres_secret_<DB_NAME>"
  is_secret = true
}

prometheus.exporter.postgres "postgres_<DB_NAME>" {
  data_source_names  = [local.file.postgres_secret_<DB_NAME>.content]
  enabled_collectors = ["stat_statements"]
  autodiscovery {
    enabled = true
    // Exclude the rdsadmin database on Aurora
    database_denylist = ["rdsadmin"]
  }
}

database_observability.postgres "postgres_<DB_NAME>" {
  data_source_name  = local.file.postgres_secret_<DB_NAME>.content
  forward_to        = [loki.relabel.database_observability_postgres_<DB_NAME>.receiver]
  targets           = prometheus.exporter.postgres.postgres_<DB_NAME>.targets
  enable_collectors = ["query_details", "query_samples", "schema_details", "explain_plans"]
  exclude_databases = ["rdsadmin"]
  cloud_provider {
    aws {
      arn = "<AWS_AURORA_INSTANCE_ARN>"
    }
  }
}

loki.relabel "database_observability_postgres_<DB_NAME>" {
  forward_to = [loki.write.logs_service.receiver]
  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

discovery.relabel "database_observability_postgres_<DB_NAME>" {
  targets = database_observability.postgres.postgres_<DB_NAME>.targets
  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }
  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

prometheus.scrape "database_observability_postgres_<DB_NAME>" {
  targets    = discovery.relabel.database_observability_postgres_<DB_NAME>.output
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
```

Replace the placeholders:
- `DB_NAME`: Database name Alloy uses in component identifiers (appears in component names and secret filenames).
- `AWS_AURORA_INSTANCE_ARN`: The Amazon Resource Name of the specific Aurora instance for the cloud provider integration. Do not use the cluster Amazon Resource Name.
- `INSTANCE_LABEL`: Value that sets the `instance` label on logs and metrics (optional).
- Secret file content example: `"postgresql://DB_USER:DB_PASSWORD@INSTANCE_ENDPOINT:DB_PORT/DB_DATABASE?sslmode=require"`
  - `DB_USER`: Database user Alloy uses to connect (for example, `db-o11y`).
  - `DB_PASSWORD`: Password for the database user.
  - `INSTANCE_ENDPOINT`: The specific instance endpoint. Do not use the cluster endpoint here; doing so breaks metric correlation during role changes.
  - `DB_PORT`: Database port number (default: `5432`).
  - `DB_DATABASE`: Logical database name in the DSN (recommended: `postgres`).
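Special characters in the password, such as `@` or `:`, must be percent-encoded or the DSN fails to parse. A small helper, a sketch and not part of Alloy itself, shows the encoding:

```python
from urllib.parse import quote

def build_dsn(user: str, password: str, endpoint: str,
              port: int = 5432, database: str = "postgres") -> str:
    """Build a PostgreSQL DSN, percent-encoding the credentials."""
    return (
        f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
        f"@{endpoint}:{port}/{database}?sslmode=require"
    )

# A password containing '@' and ':' is encoded so the URL stays parseable
print(build_dsn("db-o11y", "p@ss:word", "db-1.abc123.us-east-1.rds.amazonaws.com"))
```

Write the resulting string into the secret file that the `local.file` component reads.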
Add Prometheus and Loki write configuration
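The blocks above forward to `loki.write.logs_service` and `prometheus.remote_write.metrics_service`. A typical Grafana Cloud pair looks like the following sketch, where the URLs and credentials are placeholders for your stack's values:

```alloy
prometheus.remote_write "metrics_service" {
  endpoint {
    url = "<GRAFANA_CLOUD_PROMETHEUS_URL>"
    basic_auth {
      username = "<GRAFANA_CLOUD_PROMETHEUS_USERNAME>"
      password = "<GRAFANA_CLOUD_ACCESS_POLICY_TOKEN>"
    }
  }
}

loki.write "logs_service" {
  endpoint {
    url = "<GRAFANA_CLOUD_LOKI_URL>"
    basic_auth {
      username = "<GRAFANA_CLOUD_LOKI_USERNAME>"
      password = "<GRAFANA_CLOUD_ACCESS_POLICY_TOKEN>"
    }
  }
}
```

You can find the exact endpoint URLs and credentials in your Grafana Cloud stack details.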
Run and configure Alloy with the Grafana Kubernetes Monitoring Helm chart
When you use the k8s-monitoring Helm chart, extend your `values.yaml` and set `databaseObservability.enabled` to `true` within the PostgreSQL integration.
```yaml
integrations:
  collector: alloy-singleton
  postgresql:
    instances:
      - name: <INSTANCE_NAME>
        exporter:
          dataSource:
            host: <INSTANCE_ENDPOINT> # Must be a specific instance endpoint
            port: 5432
            database: postgres
            sslmode: require
            auth:
              usernameKey: username
              passwordKey: password
          collectors:
            statStatements: true
        databaseObservability:
          enabled: true
          extraConfig: |
            exclude_databases = ["rdsadmin"]
            cloud_provider {
              aws {
                arn = "<AWS_AURORA_INSTANCE_ARN>"
              }
            }
          collectors:
            queryDetails:
              enabled: true
            querySamples:
              enabled: true
            schemaDetails:
              enabled: true
            explainPlans:
              enabled: true
        secret:
          create: false
          name: <SECRET_NAME>
          namespace: <NAMESPACE>
        logs:
          enabled: true
          labelSelectors:
            app.kubernetes.io/instance: <INSTANCE_NAME>
```

Replace the placeholders:
- `INSTANCE_NAME`: Name for this database instance in Kubernetes.
- `INSTANCE_ENDPOINT`: The specific instance endpoint. Do not use the cluster endpoint here; doing so breaks metric correlation during role changes.
- `AWS_AURORA_INSTANCE_ARN`: The Amazon Resource Name of the specific Aurora instance.
- `SECRET_NAME`: Name of the Kubernetes secret containing database credentials.
- `NAMESPACE`: Kubernetes namespace where the secret exists.
To see the full set of values, refer to the k8s-monitoring Helm chart documentation or the example configuration.
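With the values file in place, install or upgrade the chart. The release name and namespace below are examples:

```shell
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

helm upgrade --install grafana-k8s-monitoring grafana/k8s-monitoring \
  --namespace monitoring --create-namespace \
  --values values.yaml
```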
Optional: Configure AWS Secrets Manager and Kubernetes
If you use AWS Secrets Manager with External Secrets Operator to manage database credentials, configure them as follows.
Secret path convention
Store monitoring credentials in AWS Secrets Manager at a path following this convention:
```
/kubernetes/rds/<CLUSTER_NAME>/monitoring
```

PostgreSQL secret format
Store the secret as JSON with the following format:
```json
{
  "username": "db-o11y",
  "password": "<DB_O11Y_PASSWORD>",
  "engine": "postgres",
  "host": "<INSTANCE_ENDPOINT>.rds.amazonaws.com",
  "port": 5432,
  "dbClusterIdentifier": "<CLUSTER_NAME>",
  "database": "postgres"
}
```

Replace the placeholders:
- `DB_O11Y_PASSWORD`: Password for the `db-o11y` PostgreSQL user.
- `INSTANCE_ENDPOINT`: The specific instance endpoint. Do not use the cluster endpoint here; doing so breaks metric correlation during role changes.
- `CLUSTER_NAME`: Aurora cluster name.
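To avoid quoting mistakes in a hand-written `--secret-string` argument, you can generate the JSON programmatically. A sketch, with placeholder values you must substitute:

```python
import json

# Placeholder values; substitute your real cluster details
secret = {
    "username": "db-o11y",
    "password": "<DB_O11Y_PASSWORD>",
    "engine": "postgres",
    "host": "<INSTANCE_ENDPOINT>.rds.amazonaws.com",
    "port": 5432,
    "dbClusterIdentifier": "<CLUSTER_NAME>",
    "database": "postgres",
}

# Compact JSON suitable for `aws secretsmanager create-secret --secret-string`
secret_string = json.dumps(secret, separators=(",", ":"))
print(secret_string)
```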
Create the secret via AWS CLI
```shell
aws secretsmanager create-secret \
  --name "/kubernetes/rds/<CLUSTER_NAME>/monitoring" \
  --description "Alloy monitoring credentials for Aurora PostgreSQL cluster" \
  --secret-string '{"username":"db-o11y","password":"<DB_O11Y_PASSWORD>","engine":"postgres","host":"<INSTANCE_ENDPOINT>.rds.amazonaws.com","port":5432,"dbClusterIdentifier":"<CLUSTER_NAME>","database":"postgres"}'
```

Kubernetes External Secrets configuration
Use the External Secrets Operator to sync the AWS secret into Kubernetes:
```yaml
---
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: <CLUSTER_NAME>-db-monitoring-secretstore
spec:
  provider:
    aws:
      service: SecretsManager
      region: <AWS_REGION>
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: <CLUSTER_NAME>-db-monitoring-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: <CLUSTER_NAME>-db-monitoring-secretstore
  dataFrom:
    - extract:
        conversionStrategy: Default
        decodingStrategy: None
        key: /kubernetes/rds/<CLUSTER_NAME>/monitoring
        metadataPolicy: None
        version: AWSCURRENT
```

Replace the placeholders:
- `CLUSTER_NAME`: Aurora cluster name.
- `AWS_REGION`: AWS region where the secret is stored.
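To confirm the sync worked, check the ExternalSecret status and the resulting Secret. Because no `spec.target` is set above, External Secrets Operator names the synced Secret after the ExternalSecret:

```shell
# Replace the names with your cluster's values
kubectl get externalsecret <CLUSTER_NAME>-db-monitoring-secret
kubectl describe secret <CLUSTER_NAME>-db-monitoring-secret
```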



