Set up AWS RDS PostgreSQL
Note
Database Observability is currently in public preview. Grafana Labs offers limited support, and breaking changes might occur prior to the feature being made generally available.
Set up Database Observability with Grafana Cloud to collect telemetry from AWS RDS PostgreSQL instances using Grafana Alloy. You configure your RDS instance and Alloy to forward telemetry to Grafana Cloud.
What you’ll achieve
In this article, you:
- Configure RDS PostgreSQL parameter groups for monitoring.
- Create monitoring users with required privileges.
- Configure Alloy with the Database Observability components.
- Forward telemetry to Grafana Cloud.
Before you begin
Review these requirements:
- RDS PostgreSQL 14.0 or later.
- Access to modify RDS parameter groups.
- Grafana Alloy deployed and accessible to your RDS instance.
- Network connectivity between Alloy and your RDS instance endpoint (see the connectivity check below).
For general PostgreSQL setup concepts, refer to Set up PostgreSQL.
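To confirm network connectivity from the host where Alloy runs, you can use a quick reachability check. The following is a minimal sketch, assuming the PostgreSQL client tools (for pg_isready) or netcat are available on that host:
# Check that the RDS endpoint accepts connections from the Alloy host.
pg_isready -h <INSTANCE_ENDPOINT> -p 5432
# Alternatively, check TCP reachability only:
nc -vz <INSTANCE_ENDPOINT> 5432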
Configure the DB parameter group
Enable pg_stat_statements and configure query tracking by adding parameters to your RDS PostgreSQL parameter group. These parameters require an instance restart to take effect.
Required parameters
Set the following parameters:
- shared_preload_libraries: pg_stat_statements
- pg_stat_statements.track: all
- track_activity_query_size: 4096
Using the AWS Console
- Open the RDS Console and navigate to Parameter groups.
- Create a new parameter group or modify an existing one with the postgres14 family.
- Set the parameters listed above.
- Apply the parameter group to your RDS instance.
- Reboot the instance to apply changes.
For detailed console instructions, refer to Working with parameter groups in the AWS documentation.
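If you prefer the AWS CLI, the following sketch shows the equivalent steps. The parameter group name matches the Terraform example below; adjust the family if your engine version differs from PostgreSQL 14:
# Create the parameter group (skip this if you modify an existing one).
aws rds create-db-parameter-group \
  --db-parameter-group-name <INSTANCE_NAME>-monitoring-params \
  --db-parameter-group-family postgres14 \
  --description "Monitoring parameters for Database Observability"
# Set the required parameters with a pending-reboot apply method.
aws rds modify-db-parameter-group \
  --db-parameter-group-name <INSTANCE_NAME>-monitoring-params \
  --parameters "ParameterName=shared_preload_libraries,ParameterValue=pg_stat_statements,ApplyMethod=pending-reboot" \
               "ParameterName=pg_stat_statements.track,ParameterValue=all,ApplyMethod=pending-reboot" \
               "ParameterName=track_activity_query_size,ParameterValue=4096,ApplyMethod=pending-reboot"
# Attach the parameter group to the instance and reboot to apply the changes.
aws rds modify-db-instance \
  --db-instance-identifier <INSTANCE_NAME> \
  --db-parameter-group-name <INSTANCE_NAME>-monitoring-params
aws rds reboot-db-instance --db-instance-identifier <INSTANCE_NAME>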
Using Terraform
Define the parameter group with the aws_db_parameter_group resource:
resource "aws_db_parameter_group" "rds_postgres_monitoring" {
name = "<INSTANCE_NAME>-monitoring-params"
family = "postgres14"
parameter {
name = "shared_preload_libraries"
value = "pg_stat_statements"
apply_method = "pending-reboot"
}
parameter {
name = "pg_stat_statements.track"
value = "all"
apply_method = "pending-reboot"
}
parameter {
name = "track_activity_query_size"
value = "4096"
apply_method = "pending-reboot"
}
}Replace <INSTANCE_NAME> with your RDS instance name.
Note
If you already have a parameter group with rds.logical_replication enabled, for example, for replication to other services, add the pg_stat_statements parameters to that existing group rather than creating a new one.
After applying the parameter group to your instance and restarting, enable the extension in each database you want to monitor:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
Verify the extension is installed:
SELECT * FROM pg_stat_statements LIMIT 1;
Create a monitoring user and grant required privileges
Connect to your RDS PostgreSQL instance as an admin user and create the monitoring user:
CREATE USER "db-o11y" WITH PASSWORD '<DB_O11Y_PASSWORD>' CONNECTION LIMIT 5;
GRANT pg_monitor TO "db-o11y";
GRANT pg_read_all_stats TO "db-o11y";
Replace <DB_O11Y_PASSWORD> with a secure password for the db-o11y PostgreSQL user.
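If you run these statements from a shell rather than a SQL client, a minimal psql sketch looks like this; <ADMIN_USER> is a placeholder for your instance's admin role (often postgres on RDS):
# Connect as an admin user, then run the statements above.
psql "host=<INSTANCE_ENDPOINT> port=5432 dbname=postgres user=<ADMIN_USER> sslmode=require"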
Disable tracking of monitoring user queries
Prevent tracking of queries executed by the monitoring user itself:
ALTER ROLE "db-o11y" SET pg_stat_statements.track = 'none';Grant object privileges for detailed data
To allow collecting schema details and table information, connect to each logical database and grant access to each schema.
For example, for a payments database:
-- switch to the 'payments' database
\c payments
-- grant permissions in the 'public' schema
GRANT USAGE ON SCHEMA public TO "db-o11y";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "db-o11y";
Alternatively, if you're unsure which specific schemas need access, use the predefined role to grant USAGE and SELECT access to all objects:
GRANT pg_read_all_data TO "db-o11y";
Verify user privileges
Verify that the user can query pg_stat_statements:
-- run with the `db-o11y` user
SELECT * FROM pg_stat_statements LIMIT 1;
If this query fails with “permission denied”, ensure the GRANT statements were executed correctly.
Verify parameter group settings
Verify that the parameter group settings were applied correctly after restarting the instance:
SHOW pg_stat_statements.track;
Expected result: Value is all.
SHOW track_activity_query_size;
Expected result: Value is 4096.
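You can also confirm the values from the AWS side. The following sketch assumes the parameter group name used in the Terraform example:
# List the monitoring-related parameters in the parameter group.
aws rds describe-db-parameters \
  --db-parameter-group-name <INSTANCE_NAME>-monitoring-params \
  --query "Parameters[?ParameterName=='shared_preload_libraries' || ParameterName=='pg_stat_statements.track' || ParameterName=='track_activity_query_size'].[ParameterName,ParameterValue]" \
  --output table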
Run and configure Alloy
Run Alloy and add the Database Observability configuration for your RDS instance.
Run the latest Alloy version
Run Alloy version 1.12.0 or later with the --stability.level=public-preview flag for the database_observability components. Find the latest stable version on Docker Hub.
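For example, a minimal sketch of running Alloy in Docker with the required flag; the mounted paths and configuration filename are illustrative assumptions:
# Run Alloy with public-preview components enabled.
docker run -d --name alloy \
  -v /etc/alloy/config.alloy:/etc/alloy/config.alloy \
  -v /var/lib/alloy:/var/lib/alloy \
  grafana/alloy:latest \
  run --stability.level=public-preview /etc/alloy/config.alloy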
Add the RDS PostgreSQL configuration blocks
Add these blocks to Alloy for RDS PostgreSQL. Replace <DB_NAME> with your database name. Create a local.file with the Data Source Name string, for example, "postgresql://<DB_USER>:<DB_PASSWORD>@<INSTANCE_ENDPOINT>:<DB_PORT>/<DB_DATABASE>?sslmode=require":
local.file "postgres_secret_<DB_NAME>" {
filename = "/var/lib/alloy/postgres_secret_<DB_NAME>"
is_secret = true
}
prometheus.exporter.postgres "postgres_<DB_NAME>" {
data_source_names = local.file.postgres_secret_<DB_NAME>.content
enabled_collectors = ["stat_statements"]
autodiscovery {
enabled = true
// Exclude the rdsadmin database on RDS
database_denylist = ["rdsadmin"]
}
}
database_observability.postgres "postgres_<DB_NAME>" {
data_source_name = local.file.postgres_secret_<DB_NAME>.content
forward_to = [loki.relabel.database_observability_postgres_<DB_NAME>.receiver]
targets = prometheus.exporter.postgres.postgres_<DB_NAME>.targets
enable_collectors = ["query_details", "query_samples", "schema_details", "explain_plans"]
cloud_provider {
aws {
arn = "<AWS_RDS_INSTANCE_ARN>"
}
}
}
loki.relabel "database_observability_postgres_<DB_NAME>" {
forward_to = [loki.write.logs_service.receiver]
rule {
target_label = "instance"
replacement = "<INSTANCE_LABEL>"
}
}
discovery.relabel "database_observability_postgres_<DB_NAME>" {
targets = database_observability.postgres.postgres_<DB_NAME>.targets
rule {
target_label = "job"
replacement = "integrations/db-o11y"
}
rule {
target_label = "instance"
replacement = "<INSTANCE_LABEL>"
}
}
prometheus.scrape "database_observability_postgres_<DB_NAME>" {
targets = discovery.relabel.database_observability_postgres_<DB_NAME>.output
forward_to = [prometheus.remote_write.metrics_service.receiver]
}Replace the placeholders:
- DB_NAME: Database name Alloy uses in component identifiers (appears in component names and secret filenames).
- AWS_RDS_INSTANCE_ARN: AWS RDS instance ARN for cloud provider integration.
- INSTANCE_LABEL: Value that sets the instance label on logs and metrics (optional).
- Secret file content example: "postgresql://DB_USER:DB_PASSWORD@INSTANCE_ENDPOINT:DB_PORT/DB_DATABASE?sslmode=require".
  - DB_USER: Database user Alloy uses to connect, for example, db-o11y.
  - DB_PASSWORD: Password for the database user.
  - INSTANCE_ENDPOINT: RDS instance endpoint hostname.
  - DB_PORT: Database port number (default: 5432).
  - DB_DATABASE: Logical database name in the DSN (postgres is recommended).
Add Prometheus and Loki write configuration
Add the Prometheus remote write and Loki write configuration. In Grafana Cloud, open your stack details to find the endpoint URLs and instance IDs, and to generate an API token:
prometheus.remote_write "metrics_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_METRICS_URL")
    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_METRICS_ID")
    }
  }
}
loki.write "logs_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_LOGS_URL")
    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_LOGS_ID")
    }
  }
}
Replace the placeholders:
- GCLOUD_HOSTED_METRICS_URL: Your Grafana Cloud Prometheus remote write URL.
- GCLOUD_HOSTED_METRICS_ID: Your Grafana Cloud Prometheus instance ID (username).
- GCLOUD_HOSTED_LOGS_URL: Your Grafana Cloud Loki write URL.
- GCLOUD_HOSTED_LOGS_ID: Your Grafana Cloud Loki instance ID (username).
- GCLOUD_RW_API_KEY: Grafana Cloud API token with write permissions.
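The configuration reads these values from environment variables through sys.env. A sketch of exporting them before you start Alloy; all values are placeholders for your stack's details:
# Export the Grafana Cloud connection details in the environment Alloy runs in.
export GCLOUD_HOSTED_METRICS_URL="<PROMETHEUS_REMOTE_WRITE_URL>"
export GCLOUD_HOSTED_METRICS_ID="<PROMETHEUS_INSTANCE_ID>"
export GCLOUD_HOSTED_LOGS_URL="<LOKI_WRITE_URL>"
export GCLOUD_HOSTED_LOGS_ID="<LOKI_INSTANCE_ID>"
export GCLOUD_RW_API_KEY="<GRAFANA_CLOUD_API_TOKEN>"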
Run and configure Alloy with the Grafana Kubernetes Monitoring Helm chart
If you use the k8s-monitoring Helm chart, extend your values.yaml and set databaseObservability.enabled to true within the PostgreSQL integration:
integrations:
  collector: alloy-singleton
  postgresql:
    instances:
      - name: <INSTANCE_NAME>
        exporter:
          dataSource:
            host: <INSTANCE_ENDPOINT>
            port: 5432
            database: postgres
            sslmode: require
            auth:
              usernameKey: username
              passwordKey: password
          collectors:
            statStatements: true
        databaseObservability:
          enabled: true
          extraConfig: |
            cloud_provider {
              aws {
                arn = "<AWS_RDS_INSTANCE_ARN>"
              }
            }
          collectors:
            queryDetails:
              enabled: true
            querySamples:
              enabled: true
            schemaDetails:
              enabled: true
            explainPlans:
              enabled: true
        secret:
          create: false
          name: <SECRET_NAME>
          namespace: <NAMESPACE>
        logs:
          enabled: true
          labelSelectors:
            app.kubernetes.io/instance: <INSTANCE_NAME>
Replace the placeholders:
- INSTANCE_NAME: Name for this database instance in Kubernetes.
- INSTANCE_ENDPOINT: RDS instance endpoint hostname.
- AWS_RDS_INSTANCE_ARN: AWS RDS instance ARN for cloud provider integration.
- SECRET_NAME: Name of the Kubernetes secret containing database credentials.
- NAMESPACE: Kubernetes namespace where the secret exists.
To see the full set of values, refer to the k8s-monitoring Helm chart documentation or the example configuration.
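A minimal sketch of deploying the chart with this values.yaml, assuming a release named grafana-k8s-monitoring and the Grafana Helm repository:
# Add the Grafana Helm repository and deploy the chart with your values file.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install grafana-k8s-monitoring grafana/k8s-monitoring -f values.yaml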
Optional: Configure AWS Secrets Manager and Kubernetes
If you use AWS Secrets Manager with External Secrets Operator to manage database credentials, configure them as follows.
Secret path convention
Store monitoring credentials in AWS Secrets Manager at a path following this convention:
/kubernetes/rds/<INSTANCE_NAME>/monitoring
PostgreSQL secret format
Store the secret as JSON with the following format:
{
  "username": "db-o11y",
  "password": "<DB_O11Y_PASSWORD>",
  "engine": "postgres",
  "host": "<INSTANCE_ENDPOINT>.rds.amazonaws.com",
  "port": 5432,
  "dbInstanceIdentifier": "<INSTANCE_NAME>",
  "database": "postgres"
}
Replace the placeholders:
- DB_O11Y_PASSWORD: Password for the db-o11y PostgreSQL user.
- INSTANCE_ENDPOINT: RDS instance endpoint hostname.
- INSTANCE_NAME: RDS instance name.
Create the secret with the AWS CLI
aws secretsmanager create-secret \
  --name "/kubernetes/rds/<INSTANCE_NAME>/monitoring" \
  --description "Alloy monitoring credentials for RDS PostgreSQL instance" \
  --secret-string '{"username":"db-o11y","password":"<DB_O11Y_PASSWORD>","engine":"postgres","host":"<INSTANCE_ENDPOINT>.rds.amazonaws.com","port":5432,"dbInstanceIdentifier":"<INSTANCE_NAME>","database":"postgres"}'
Kubernetes External Secrets configuration
Use the External Secrets Operator to sync the AWS secret into Kubernetes:
---
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: <INSTANCE_NAME>-db-monitoring-secretstore
spec:
  provider:
    aws:
      service: SecretsManager
      region: <AWS_REGION>
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: <INSTANCE_NAME>-db-monitoring-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: <INSTANCE_NAME>-db-monitoring-secretstore
  dataFrom:
    - extract:
        conversionStrategy: Default
        decodingStrategy: None
        key: /kubernetes/rds/<INSTANCE_NAME>/monitoring
        metadataPolicy: None
        version: AWSCURRENT
Replace the placeholders:
- INSTANCE_NAME: RDS instance name.
- AWS_REGION: AWS region where the secret is stored.
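To check that the secret synced into the cluster, a sketch with kubectl; it assumes the default External Secrets behavior where the target Secret takes the ExternalSecret's name:
# Confirm the ExternalSecret is synced and inspect the resulting Secret.
kubectl get externalsecret <INSTANCE_NAME>-db-monitoring-secret -n <NAMESPACE>
kubectl get secret <INSTANCE_NAME>-db-monitoring-secret -n <NAMESPACE> \
  -o jsonpath='{.data.username}' | base64 -d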



