Set up Azure Database for PostgreSQL
Set up Database Observability with Grafana Cloud to collect telemetry from Azure Database for PostgreSQL Flexible Server using Grafana Alloy. You configure your Azure PostgreSQL server and Alloy to forward telemetry to Grafana Cloud.
What you’ll achieve
In this article, you:
- Configure Azure PostgreSQL server parameters for monitoring.
- Create monitoring users with required privileges.
- Configure Alloy with the Database Observability components.
- Forward telemetry to Grafana Cloud.
Before you begin
Review these requirements:
- Azure Database for PostgreSQL Flexible Server 14.0 or later.
- Access to modify server parameters.
- Grafana Alloy deployed and accessible to your Azure PostgreSQL server.
- Network connectivity between Alloy and your Azure PostgreSQL server endpoint.
For general PostgreSQL setup concepts, refer to Set up PostgreSQL.
Configure server parameters
Enable pg_stat_statements and configure query tracking by adding server parameters to your Azure Database for PostgreSQL Flexible Server. These parameters require a server restart to take effect.
Required server parameters
Set these parameters on your server (the same values appear in the Terraform and Azure CLI examples below):
- shared_preload_libraries: pg_stat_statements
- pg_stat_statements.track: all
- track_activity_query_size: 4096
Use the Azure portal
- Open the Azure portal and navigate to Azure Database for PostgreSQL flexible servers.
- Select your PostgreSQL flexible server.
- In the left menu under Settings, select Server parameters.
- Search for and configure each parameter listed above.
- Click Save to apply the changes.
- Some parameters require a server restart. Navigate to Overview and click Restart if prompted.
For detailed portal instructions, refer to Configure server parameters in the Azure documentation.
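If you prefer the CLI, you can also trigger the restart with az (a sketch; substitute your resource group and server name):

```shell
# Restart the server so shared_preload_libraries takes effect.
az postgres flexible-server restart \
  --resource-group <RESOURCE_GROUP> \
  --name <SERVER_NAME>
```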
Use Terraform
Configure the parameters with the azurerm_postgresql_flexible_server_configuration Terraform resource:
resource "azurerm_postgresql_flexible_server_configuration" "shared_preload_libraries" {
  name      = "shared_preload_libraries"
  server_id = azurerm_postgresql_flexible_server.main.id
  value     = "pg_stat_statements"
}

resource "azurerm_postgresql_flexible_server_configuration" "pg_stat_statements_track" {
  name      = "pg_stat_statements.track"
  server_id = azurerm_postgresql_flexible_server.main.id
  value     = "all"
}

resource "azurerm_postgresql_flexible_server_configuration" "track_activity_query_size" {
  name      = "track_activity_query_size"
  server_id = azurerm_postgresql_flexible_server.main.id
  value     = "4096"
}
Alternatively, configure the parameters using the Azure CLI:
az postgres flexible-server parameter set \
  --resource-group <RESOURCE_GROUP> \
  --server-name <SERVER_NAME> \
  --name shared_preload_libraries \
  --value pg_stat_statements

az postgres flexible-server parameter set \
  --resource-group <RESOURCE_GROUP> \
  --server-name <SERVER_NAME> \
  --name pg_stat_statements.track \
  --value all

az postgres flexible-server parameter set \
  --resource-group <RESOURCE_GROUP> \
  --server-name <SERVER_NAME> \
  --name track_activity_query_size \
  --value 4096
Replace the placeholders:
- RESOURCE_GROUP: Azure resource group name.
- SERVER_NAME: Azure PostgreSQL Flexible Server name.
Note: The shared_preload_libraries parameter requires a server restart. Restart the server after applying the change.
After the server restarts, enable the extension in each database you want to monitor:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
Verify the extension is installed:
SELECT * FROM pg_stat_statements LIMIT 1;
Create a monitoring user and grant required privileges
Connect to your Azure PostgreSQL Flexible Server as an administrator and create the monitoring user:
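A minimal sketch, assuming a monitoring user named db-o11y (the user name and password placeholder are examples; choose your own values). The built-in pg_monitor role grants read access to monitoring views such as pg_stat_statements and pg_stat_activity:

```sql
-- Create a dedicated monitoring user (name and password are examples).
CREATE USER "db-o11y" WITH PASSWORD '<DB_O11Y_PASSWORD>';

-- pg_monitor is a built-in PostgreSQL role for reading monitoring views.
GRANT pg_monitor TO "db-o11y";
```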
Disable tracking of monitoring user queries
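One way to keep the collector's own queries out of the statistics, assuming the db-o11y user from the previous step and an administrative role permitted to set this parameter, is a per-role setting:

```sql
-- Stop pg_stat_statements from recording queries issued by the monitoring user.
ALTER USER "db-o11y" SET pg_stat_statements.track = 'none';
```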
Grant object privileges for detailed data
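For collectors such as schema details and explain plans, the monitoring user needs read access to the monitored objects. A sketch, again assuming the db-o11y user; the schema name in the second option is an example:

```sql
-- Option 1: broad read access via the built-in role (PostgreSQL 14 and later).
GRANT pg_read_all_data TO "db-o11y";

-- Option 2: scope the grants to a single schema instead.
GRANT USAGE ON SCHEMA public TO "db-o11y";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "db-o11y";
```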
Verify server parameter settings
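To confirm the parameters configured earlier are active, query pg_settings from any database on the server:

```sql
-- Each row should show the value you configured in the Azure portal or CLI.
SELECT name, setting
FROM pg_settings
WHERE name IN ('shared_preload_libraries',
               'pg_stat_statements.track',
               'track_activity_query_size');
```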
Run and configure Alloy
Run Alloy and add the Database Observability configuration for your Azure PostgreSQL server.
Run the latest Alloy version
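As one option, you can run Alloy in Docker (a sketch; the mounted paths assume your configuration file is named config.alloy and your secret files live under /var/lib/alloy):

```shell
docker run \
  -v /var/lib/alloy:/var/lib/alloy \
  -v ./config.alloy:/etc/alloy/config.alloy \
  grafana/alloy:latest \
    run /etc/alloy/config.alloy
```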
Add the Azure PostgreSQL configuration blocks
Add these blocks to your Alloy configuration for Azure Database for PostgreSQL. Replace <DB_NAME> throughout. Create a file for the local.file component that contains the Data Source Name (DSN) string, for example, "postgresql://<DB_USER>:<DB_PASSWORD>@<SERVER_FQDN>:<DB_PORT>/<DB_DATABASE>?sslmode=require":
local.file "postgres_secret_<DB_NAME>" {
  filename  = "/var/lib/alloy/postgres_secret_<DB_NAME>"
  is_secret = true
}

prometheus.exporter.postgres "postgres_<DB_NAME>" {
  data_source_names  = [local.file.postgres_secret_<DB_NAME>.content]
  enabled_collectors = ["stat_statements"]
  autodiscovery {
    enabled = true
  }
}

database_observability.postgres "postgres_<DB_NAME>" {
  data_source_name  = local.file.postgres_secret_<DB_NAME>.content
  forward_to        = [loki.relabel.database_observability_postgres_<DB_NAME>.receiver]
  targets           = prometheus.exporter.postgres.postgres_<DB_NAME>.targets
  enable_collectors = ["query_details", "query_samples", "schema_details", "explain_plans"]
  exclude_databases = ["azure_sys", "azure_maintenance"]
  cloud_provider {
    azure {
      resource_group  = "<AZURE_RESOURCE_GROUP>"
      subscription_id = "<AZURE_SUBSCRIPTION_ID>"
      server_name     = "<AZURE_SERVER_NAME>"
    }
  }
}

loki.relabel "database_observability_postgres_<DB_NAME>" {
  forward_to = [loki.write.logs_service.receiver]
  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

discovery.relabel "database_observability_postgres_<DB_NAME>" {
  targets = database_observability.postgres.postgres_<DB_NAME>.targets
  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }
  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

prometheus.scrape "database_observability_postgres_<DB_NAME>" {
  targets    = discovery.relabel.database_observability_postgres_<DB_NAME>.output
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
Replace the placeholders:
- DB_NAME: Database name Alloy uses in component identifiers (appears in component names and secret filenames).
- AZURE_RESOURCE_GROUP: Azure Resource Group for your PostgreSQL Flexible Server.
- AZURE_SUBSCRIPTION_ID: Azure Subscription ID for your PostgreSQL Flexible Server.
- AZURE_SERVER_NAME: Azure Server Name for your PostgreSQL Flexible Server (optional).
- INSTANCE_LABEL: Value that sets the instance label on logs and metrics (optional).
- Secret file content example: "postgresql://DB_USER:DB_PASSWORD@SERVER_FQDN:DB_PORT/DB_DATABASE?sslmode=require".
  - DB_USER: Database user Alloy uses to connect (for example, db-o11y).
  - DB_PASSWORD: Password for the database user.
  - SERVER_FQDN: Azure PostgreSQL server fully qualified domain name (for example, <SERVER_NAME>.postgres.database.azure.com).
  - DB_PORT: Database port number (default: 5432).
  - DB_DATABASE: Logical database name in the DSN (recommended: use postgres).
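As a concrete sketch, the secret file content can be assembled from its parts in a shell; the values here are examples only:

```shell
DB_USER="db-o11y"
DB_PASSWORD="example-password"   # example only; use your real password
SERVER_FQDN="myserver.postgres.database.azure.com"
# Assemble the DSN the local.file component will read.
DSN="postgresql://${DB_USER}:${DB_PASSWORD}@${SERVER_FQDN}:5432/postgres?sslmode=require"
echo "$DSN"
```

Write the resulting string to the secret filename (for example, /var/lib/alloy/postgres_secret_<DB_NAME>) and restrict its permissions with chmod 600. If the password contains URI-reserved characters such as @ or /, percent-encode them in the DSN.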
Add Prometheus and Loki write configuration
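The blocks above forward to receivers named metrics_service and logs_service. If your configuration doesn't already define them, a sketch of the two write components follows; the endpoint URLs and credentials are placeholders you take from your Grafana Cloud stack:

```alloy
prometheus.remote_write "metrics_service" {
  endpoint {
    url = "<PROMETHEUS_REMOTE_WRITE_URL>"
    basic_auth {
      username = "<PROMETHEUS_USERNAME>"
      password = "<GRAFANA_CLOUD_ACCESS_POLICY_TOKEN>"
    }
  }
}

loki.write "logs_service" {
  endpoint {
    url = "<LOKI_PUSH_URL>"
    basic_auth {
      username = "<LOKI_USERNAME>"
      password = "<GRAFANA_CLOUD_ACCESS_POLICY_TOKEN>"
    }
  }
}
```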
Run and configure Alloy with the Grafana Kubernetes Monitoring Helm chart
If you use the k8s-monitoring Helm chart, extend your values.yaml and set databaseObservability.enabled to true within the PostgreSQL integration.
integrations:
  collector: alloy-singleton
  postgresql:
    instances:
      - name: <INSTANCE_NAME>
        exporter:
          dataSource:
            host: <SERVER_FQDN>
            port: 5432
            database: postgres
            sslmode: require
            auth:
              usernameKey: username
              passwordKey: password
          collectors:
            statStatements: true
        databaseObservability:
          enabled: true
          extraConfig: |
            exclude_databases = ["azure_sys", "azure_maintenance"]
            cloud_provider {
              azure {
                resource_group  = "<AZURE_RESOURCE_GROUP>"
                subscription_id = "<AZURE_SUBSCRIPTION_ID>"
                server_name    = "<AZURE_SERVER_NAME>"
              }
            }
          collectors:
            queryDetails:
              enabled: true
            querySamples:
              enabled: true
            schemaDetails:
              enabled: true
            explainPlans:
              enabled: true
        secret:
          create: false
          name: <SECRET_NAME>
          namespace: <NAMESPACE>
        logs:
          enabled: true
          labelSelectors:
            app.kubernetes.io/instance: <INSTANCE_NAME>
Replace the placeholders:
- INSTANCE_NAME: Name for this database instance in Kubernetes.
- SERVER_FQDN: Azure PostgreSQL server fully qualified domain name.
- AZURE_RESOURCE_GROUP: Azure Resource Group for your PostgreSQL Flexible Server.
- AZURE_SUBSCRIPTION_ID: Azure Subscription ID for your PostgreSQL Flexible Server.
- AZURE_SERVER_NAME: Azure Server Name for your PostgreSQL Flexible Server (optional).
- SECRET_NAME: Name of the Kubernetes secret containing database credentials.
- NAMESPACE: Kubernetes namespace where the secret exists.
To see the full set of values, check out the k8s-monitoring Helm chart documentation or the example configuration.
Optional: Configure Azure Key Vault and Kubernetes
If you use Azure Key Vault with External Secrets Operator to manage database credentials, configure them as follows.
Secret naming convention
Store monitoring credentials in Azure Key Vault with a name following this convention:
postgres-<SERVER_NAME>-monitoring
PostgreSQL secret format
Store the secret as JSON with the following format:
{
  "username": "db-o11y",
  "password": "<DB_O11Y_PASSWORD>",
  "host": "<SERVER_FQDN>",
  "port": 5432,
  "database": "postgres"
}
Replace the placeholders:
- DB_O11Y_PASSWORD: Password for the db-o11y PostgreSQL user.
- SERVER_FQDN: Azure PostgreSQL server fully qualified domain name.
Create the secret via Azure CLI
az keyvault secret set \
  --vault-name <KEY_VAULT_NAME> \
  --name "postgres-<SERVER_NAME>-monitoring" \
  --value '{"username":"db-o11y","password":"<DB_O11Y_PASSWORD>","host":"<SERVER_FQDN>","port":5432,"database":"postgres"}'
Kubernetes External Secrets configuration
Use the External Secrets Operator to sync the Azure secret into Kubernetes:
---
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: <SERVER_NAME>-db-monitoring-secretstore
spec:
  provider:
    azurekv:
      tenantId: <AZURE_TENANT_ID>
      vaultUrl: https://<KEY_VAULT_NAME>.vault.azure.net
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: <SERVER_NAME>-db-monitoring-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: <SERVER_NAME>-db-monitoring-secretstore
  dataFrom:
    - extract:
        key: postgres-<SERVER_NAME>-monitoring
Replace the placeholders:
- SERVER_NAME: Azure PostgreSQL server name.
- AZURE_TENANT_ID: Azure tenant ID.
- KEY_VAULT_NAME: Azure Key Vault name.
Next steps
For an overview of key concepts, refer to Introduction to Database Observability.
For troubleshooting during setup, refer to Troubleshoot.



