Grafana Cloud

Set up Azure Database for PostgreSQL

Set up Database Observability with Grafana Cloud to collect telemetry from Azure Database for PostgreSQL Flexible Server using Grafana Alloy. You configure your Azure PostgreSQL server and Alloy to forward telemetry to Grafana Cloud.

What you’ll achieve

In this article, you:

  • Configure Azure PostgreSQL server parameters for monitoring.
  • Create monitoring users with required privileges.
  • Configure Alloy with the Database Observability components.
  • Forward telemetry to Grafana Cloud.

Before you begin

Review these requirements:

  • Azure Database for PostgreSQL Flexible Server 14.0 or later.
  • Access to modify server parameters.
  • Grafana Alloy deployed and accessible to your Azure PostgreSQL server.
  • Network connectivity between Alloy and your Azure PostgreSQL server endpoint.

For general PostgreSQL setup concepts, refer to Set up PostgreSQL.

Note

Alloy should connect directly to the database host. Don't connect Alloy to the database through a load balancer or connection pooler such as PgBouncer, because doing so limits Alloy's ability to collect accurate telemetry.

Configure server parameters

Enable pg_stat_statements and configure query tracking by adding server parameters to your Azure Database for PostgreSQL Flexible Server. These parameters require a server restart to take effect.

Required server parameters

Parameter                   | Value              | Notes
shared_preload_libraries    | pg_stat_statements | Requires restart
pg_stat_statements.track    | all                | Requires restart
track_activity_query_size   | 4096               | Requires restart

Use the Azure portal

  1. Open the Azure Portal and navigate to Azure Database for PostgreSQL flexible servers.
  2. Select your PostgreSQL flexible server.
  3. In the left menu under Settings, select Server parameters.
  4. Search for and configure each parameter listed above.
  5. Click Save to apply the changes.
  6. Some parameters require a server restart. Navigate to Overview and click Restart if prompted.

For detailed portal instructions, refer to Configure server parameters in the Azure documentation.

Use Terraform or the Azure CLI

Configure the same parameters with the Terraform azurerm_postgresql_flexible_server_configuration resource:

hcl
resource "azurerm_postgresql_flexible_server_configuration" "shared_preload_libraries" {
  name      = "shared_preload_libraries"
  server_id = azurerm_postgresql_flexible_server.main.id
  value     = "pg_stat_statements"
}

resource "azurerm_postgresql_flexible_server_configuration" "pg_stat_statements_track" {
  name      = "pg_stat_statements.track"
  server_id = azurerm_postgresql_flexible_server.main.id
  value     = "all"
}

resource "azurerm_postgresql_flexible_server_configuration" "track_activity_query_size" {
  name      = "track_activity_query_size"
  server_id = azurerm_postgresql_flexible_server.main.id
  value     = "4096"
}

Alternatively, configure parameters using the Azure CLI:

Bash
az postgres flexible-server parameter set \
  --resource-group <RESOURCE_GROUP> \
  --server-name <SERVER_NAME> \
  --name shared_preload_libraries \
  --value pg_stat_statements

az postgres flexible-server parameter set \
  --resource-group <RESOURCE_GROUP> \
  --server-name <SERVER_NAME> \
  --name pg_stat_statements.track \
  --value all

az postgres flexible-server parameter set \
  --resource-group <RESOURCE_GROUP> \
  --server-name <SERVER_NAME> \
  --name track_activity_query_size \
  --value 4096

Replace the placeholders:

  • RESOURCE_GROUP: Azure resource group name.
  • SERVER_NAME: Azure PostgreSQL Flexible Server name.

Note

The shared_preload_libraries parameter requires a server restart. Restart the server after applying the change.
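Following the CLI pattern above, you can trigger the restart from the command line as well (same placeholders as in the earlier commands):

```shell
az postgres flexible-server restart \
  --resource-group <RESOURCE_GROUP> \
  --name <SERVER_NAME>
```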

After the server restarts, enable the extension in each database you want to monitor:

SQL
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

Verify the extension is installed:

SQL
SELECT * FROM pg_stat_statements LIMIT 1;

Create a monitoring user and grant required privileges

Connect to your Azure PostgreSQL Flexible Server as an administrator, then create the db-o11y user and grant the base monitoring privileges:

SQL
CREATE USER "db-o11y" WITH PASSWORD '<DB_O11Y_PASSWORD>';
GRANT pg_monitor TO "db-o11y";
GRANT pg_read_all_stats TO "db-o11y";

Replace <DB_O11Y_PASSWORD> with a secure password for the db-o11y PostgreSQL user.

Verify that the user has the correct privileges to query pg_stat_statements:

SQL
-- run with the `db-o11y` user
SELECT * FROM pg_stat_statements LIMIT 1;

Disable tracking of monitoring user queries

Prevent tracking of queries executed by the monitoring user itself:

SQL
ALTER ROLE "db-o11y" SET pg_stat_statements.track = 'none';

Grant object privileges for detailed data

To allow collecting schema details and table information, connect to each logical database and grant access to each schema.

For example, for a payments database:

SQL
-- switch to the 'payments' database
\c payments

-- grant permissions in the 'public' schema
GRANT USAGE ON SCHEMA public TO "db-o11y";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "db-o11y";

-- grant permissions in the 'tests' schema
GRANT USAGE ON SCHEMA tests TO "db-o11y";
GRANT SELECT ON ALL TABLES IN SCHEMA tests TO "db-o11y";

Alternatively, if you’re unsure which specific schemas need access, use the predefined role to grant USAGE and SELECT access to all objects:

SQL
GRANT pg_read_all_data TO "db-o11y";

Verify server parameter settings

Verify that the parameter settings were applied correctly after restarting:

SQL
SHOW pg_stat_statements.track;

Expected result: Value is all.

SQL
SHOW track_activity_query_size;

Expected result: Value is 4096.

Run and configure Alloy

Run Alloy and add the Database Observability configuration for your Azure PostgreSQL server.

Run the latest Alloy version

Alloy 1.15.0 or later is required for Database Observability. Find the latest stable version on Docker Hub. To update, refer to the Alloy release notes.
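As a quick local check, you can compare a reported Alloy version against the 1.15.0 minimum using sort -V. This is an illustrative sketch; set ALLOY_VERSION from your own `alloy --version` output, as the default value here is only an example:

```shell
# Sketch: check whether an Alloy version meets the 1.15.0 minimum
# required for Database Observability. ALLOY_VERSION is an assumed
# environment variable; populate it from `alloy --version`.
required="1.15.0"
current="${ALLOY_VERSION:-1.15.0}"

# sort -V orders version strings numerically; if the required version
# sorts first (or equal), the current version is new enough.
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n 1)" = "$required" ]; then
  echo "Alloy $current meets the $required minimum"
else
  echo "Alloy $current is older than $required; upgrade Alloy" >&2
fi
```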

Add the Azure PostgreSQL configuration blocks

Add the following blocks to your Alloy configuration for Azure Database for PostgreSQL. Create a file for the local.file component containing the Data Source Name (DSN) string, for example "postgresql://<DB_USER>:<DB_PASSWORD>@<SERVER_FQDN>:<DB_PORT>/<DB_DATABASE>?sslmode=require", and replace the placeholders listed after the configuration:

Alloy
local.file "postgres_secret_<DB_NAME>" {
  filename  = "/var/lib/alloy/postgres_secret_<DB_NAME>"
  is_secret = true
}

prometheus.exporter.postgres "postgres_<DB_NAME>" {
  data_source_names  = [local.file.postgres_secret_<DB_NAME>.content]
  enabled_collectors = ["stat_statements"]

  autodiscovery {
    enabled = true
  }
}

database_observability.postgres "postgres_<DB_NAME>" {
  data_source_name  = local.file.postgres_secret_<DB_NAME>.content
  forward_to        = [loki.relabel.database_observability_postgres_<DB_NAME>.receiver]
  targets           = prometheus.exporter.postgres.postgres_<DB_NAME>.targets
  enable_collectors = ["query_details", "query_samples", "schema_details", "explain_plans"]
  exclude_databases = ["azure_sys", "azure_maintenance"]

  cloud_provider {
    azure {
      resource_group  = "<AZURE_RESOURCE_GROUP>"
      subscription_id = "<AZURE_SUBSCRIPTION_ID>"
      server_name     = "<AZURE_SERVER_NAME>"
    }
  }
}

loki.relabel "database_observability_postgres_<DB_NAME>" {
  forward_to = [loki.write.logs_service.receiver]

  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

discovery.relabel "database_observability_postgres_<DB_NAME>" {
  targets = database_observability.postgres.postgres_<DB_NAME>.targets

  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }

  // OPTIONAL: relabel `instance` to `dsn` before overwriting `instance`;
  // the `dsn` label is used in the integration with the knowledge graph
  rule {
    source_labels = ["instance"]
    target_label  = "dsn"
  }
  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

prometheus.scrape "database_observability_postgres_<DB_NAME>" {
  targets    = discovery.relabel.database_observability_postgres_<DB_NAME>.output
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

Replace the placeholders:

  • DB_NAME: Database name Alloy uses in component identifiers (appears in component names and secret filenames).
  • AZURE_RESOURCE_GROUP: Azure Resource Group for your PostgreSQL Flexible Server.
  • AZURE_SUBSCRIPTION_ID: Azure Subscription ID for your PostgreSQL Flexible Server.
  • AZURE_SERVER_NAME: Azure Server Name for your PostgreSQL Flexible Server (optional).
  • INSTANCE_LABEL: Value that sets the instance label on logs and metrics (optional).
  • Secret file content example: "postgresql://DB_USER:DB_PASSWORD@SERVER_FQDN:DB_PORT/DB_DATABASE?sslmode=require".
    • DB_USER: Database user Alloy uses to connect (for example, db-o11y).
    • DB_PASSWORD: Password for the database user.
    • SERVER_FQDN: Azure PostgreSQL server fully qualified domain name (for example, <SERVER_NAME>.postgres.database.azure.com).
    • DB_PORT: Database port number (default: 5432).
    • DB_DATABASE: Logical database name to use in the DSN (postgres is recommended).
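As an example of assembling the secret file the local.file component reads, this sketch composes the DSN and writes it with owner-only permissions. The user, server FQDN, and output path are placeholder assumptions; in practice the output path must match the filename in your local.file block:

```shell
# Sketch: build the DSN and write it to the secret file Alloy reads.
# db-o11y, the FQDN, and the output path below are example values; in
# production, point SECRET_FILE at /var/lib/alloy/postgres_secret_<DB_NAME>.
SECRET_FILE="${SECRET_FILE:-./postgres_secret_payments}"
DB_USER="db-o11y"
DB_PASSWORD="${DB_O11Y_PASSWORD:-changeme}"   # replace with the real password
SERVER_FQDN="myserver.postgres.database.azure.com"

DSN="postgresql://${DB_USER}:${DB_PASSWORD}@${SERVER_FQDN}:5432/postgres?sslmode=require"

umask 177   # resulting file is readable by the owner only
printf '%s' "$DSN" > "$SECRET_FILE"
echo "wrote DSN for $DB_USER to $SECRET_FILE"
```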

Add processing of PostgreSQL logs (optional)

Add processing of PostgreSQL logs to gather detailed metrics about query and server errors.

The logs collector processes PostgreSQL logs received through the logs_receiver entry point and exports Prometheus metrics for query and server errors.

Configure log_line_prefix

Configure log_line_prefix via Azure server parameters:

  1. Open the Azure Portal and navigate to Azure Database for PostgreSQL flexible servers.
  2. Select your PostgreSQL flexible server.
  3. In the left menu under Settings, select Server parameters.
  4. Search for log_line_prefix and set it to: %m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a:
  5. Click Save to apply the changes.

Alternatively, use the Azure CLI:

Bash
az postgres flexible-server parameter set \
  --resource-group <RESOURCE_GROUP> \
  --server-name <SERVER_NAME> \
  --name log_line_prefix \
  --value '%m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a:'

Note

Ensure Azure Diagnostic Settings are configured to export PostgreSQL logs (PostgreSQLLogs category) to an Azure Event Hub.
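If you prefer the CLI, a diagnostic setting that exports the PostgreSQLLogs category to an Event Hub can be created roughly as follows. The setting name and the authorization-rule placeholder are assumptions; adjust them to your environment:

```shell
az monitor diagnostic-settings create \
  --name postgres-logs-to-eventhub \
  --resource "<POSTGRES_SERVER_RESOURCE_ID>" \
  --event-hub "<AZURE_EVENTHUB_NAME>" \
  --event-hub-rule "<EVENTHUB_AUTH_RULE_RESOURCE_ID>" \
  --logs '[{"category":"PostgreSQLLogs","enabled":true}]'
```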

Azure Event Hubs credentials

The loki.source.azure_event_hubs component requires an Event Hub connection string and a fully qualified namespace. Configure with:

  • Environment variables passed to the Alloy container
  • Kubernetes secrets mounted as environment variables
  • Azure Managed Identity (when running in Azure Kubernetes Service)

The Event Hub shared access policy must have at least Listen permission. You can find the connection string in the Azure Portal under your Event Hub namespace > Shared access policies.
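The connection string can also be retrieved with the Azure CLI. The policy-name placeholder here assumes a Listen-capable shared access policy already exists on the namespace:

```shell
az eventhubs namespace authorization-rule keys list \
  --resource-group <RESOURCE_GROUP> \
  --namespace-name <AZURE_EVENTHUB_NAMESPACE_NAME> \
  --name <LISTEN_POLICY_NAME> \
  --query primaryConnectionString \
  --output tsv
```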

Add logs processing configuration

Add the logs processing configuration block with loki.source.azure_event_hubs. Replace <DB_NAME> (matching the database name used in the main configuration above), <AZURE_EVENTHUB_NAMESPACE>, <AZURE_EVENTHUB_NAME>, <AZURE_EVENTHUB_CONNECTION_STRING>, and <INSTANCE_LABEL>:

Alloy
// Read logs from Azure Event Hubs
loki.source.azure_event_hubs "postgres_logs_<DB_NAME>" {
  fully_qualified_namespace = "<AZURE_EVENTHUB_NAMESPACE>"
  event_hubs                = ["<AZURE_EVENTHUB_NAME>"]

  authentication {
    mechanism         = "connection_string"
    connection_string = "<AZURE_EVENTHUB_CONNECTION_STRING>"
  }

  forward_to = [loki.process.postgres_azure_logs_<DB_NAME>.receiver]
}

// Parse the Azure JSON log envelope and extract the PostgreSQL message
loki.process "postgres_azure_logs_<DB_NAME>" {
  stage.json {
    expressions = {
      "category"   = "category",
      "pg_message" = "properties.message",
    }
  }

  stage.output {
    source = "pg_message"
  }

  stage.static_labels {
    values = {
      "instance" = "<INSTANCE_LABEL>",
      "job"      = "integrations/db-o11y",
    }
  }

  forward_to = [database_observability.postgres.postgres_<DB_NAME>.logs_receiver]
}

Replace the placeholders:

  • DB_NAME: Database name Alloy uses in component identifiers (must match the <DB_NAME> used in the main configuration above).
  • AZURE_EVENTHUB_NAMESPACE: Fully qualified Event Hub namespace (for example, my-namespace.servicebus.windows.net).
  • AZURE_EVENTHUB_NAME: Name of the Event Hub that receives PostgreSQL diagnostic logs.
  • AZURE_EVENTHUB_CONNECTION_STRING: Connection string for the Event Hub shared access policy with Listen permission.
  • INSTANCE_LABEL: Value that sets the instance label on logs (must match the <INSTANCE_LABEL> used in the main configuration above).

Historical log processing

The logs collector only processes logs with timestamps after the collector’s start time. This prevents re-counting historical logs when the source component replays old entries.

Behavior:

  • On startup: Skips logs with timestamps before the collector started
  • Relies on the source component features to prevent duplicate log ingestion across restarts

Add Prometheus and Loki write configuration

Add the Prometheus remote write and Loki write configuration. From Grafana Cloud, open your stack to get the URLs and generate API tokens:

Alloy
prometheus.remote_write "metrics_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_METRICS_URL")

    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_METRICS_ID")
    }
  }
}

loki.write "logs_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_LOGS_URL")

    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_LOGS_ID")
    }
  }
}

Replace the placeholders:

  • GCLOUD_HOSTED_METRICS_URL: Your Grafana Cloud Prometheus remote write URL.
  • GCLOUD_HOSTED_METRICS_ID: Your Grafana Cloud Prometheus instance ID (username).
  • GCLOUD_HOSTED_LOGS_URL: Your Grafana Cloud Loki write URL.
  • GCLOUD_HOSTED_LOGS_ID: Your Grafana Cloud Loki instance ID (username).
  • GCLOUD_RW_API_KEY: Grafana Cloud API token with write permissions.

Run and configure Alloy with the Grafana Kubernetes Monitoring Helm chart

If you use the k8s-monitoring Helm chart, extend your values.yaml and set databaseObservability.enabled to true within the PostgreSQL integration.

YAML
integrations:
  collector: alloy-singleton
  postgresql:
    instances:
      - name: <INSTANCE_NAME>
        exporter:
          dataSource:
            host: <SERVER_FQDN>
            port: 5432
            database: postgres
            sslmode: require
            auth:
              usernameKey: username
              passwordKey: password
          collectors:
            statStatements: true
        databaseObservability:
          enabled: true
          extraConfig: |
            exclude_databases = ["azure_sys", "azure_maintenance"]
            cloud_provider {
              azure {
                resource_group  = "<AZURE_RESOURCE_GROUP>"
                subscription_id = "<AZURE_SUBSCRIPTION_ID>"
                server_name     = "<AZURE_SERVER_NAME>"
              }
            }
          collectors:
            queryDetails:
              enabled: true
            querySamples:
              enabled: true
            schemaDetails:
              enabled: true
            explainPlans:
              enabled: true
        secret:
          create: false
          name: <SECRET_NAME>
          namespace: <NAMESPACE>
        logs:
          enabled: true
          labelSelectors:
            app.kubernetes.io/instance: <INSTANCE_NAME>

Replace the placeholders:

  • INSTANCE_NAME: Name for this database instance in Kubernetes.
  • SERVER_FQDN: Azure PostgreSQL server fully qualified domain name.
  • AZURE_RESOURCE_GROUP: Azure resource group for your PostgreSQL Flexible Server.
  • AZURE_SUBSCRIPTION_ID: Azure subscription ID for your PostgreSQL Flexible Server.
  • AZURE_SERVER_NAME: Azure server name for your PostgreSQL Flexible Server.
  • SECRET_NAME: Name of the Kubernetes secret containing database credentials.
  • NAMESPACE: Kubernetes namespace where the secret exists.

To see the full set of values, check out the k8s-monitoring Helm chart documentation or the example configuration.

Optional: Configure Azure Key Vault and Kubernetes

If you use Azure Key Vault with External Secrets Operator to manage database credentials, configure them as follows.

Secret naming convention

Store monitoring credentials in Azure Key Vault with a name following this convention:

postgres-<SERVER_NAME>-monitoring

PostgreSQL secret format

Store the secret as JSON with the following format:

JSON
{
  "username": "db-o11y",
  "password": "<DB_O11Y_PASSWORD>",
  "host": "<SERVER_FQDN>",
  "port": 5432,
  "database": "postgres"
}

Replace the placeholders:

  • DB_O11Y_PASSWORD: Password for the db-o11y PostgreSQL user.
  • SERVER_FQDN: Azure PostgreSQL server fully qualified domain name.
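Before storing the secret, you can sanity-check that it contains every required key. This is a minimal portable-shell sketch using an example secret value, not part of the official workflow:

```shell
# Sketch: confirm a monitoring secret JSON string contains all keys the
# integration expects. The secret value below is an example.
secret='{"username":"db-o11y","password":"example","host":"myserver.postgres.database.azure.com","port":5432,"database":"postgres"}'

ok=true
for key in username password host port database; do
  case "$secret" in
    *"\"$key\""*) ;;                              # key present
    *) echo "missing key: $key" >&2; ok=false ;;  # key absent
  esac
done

$ok && echo "secret format OK"
```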

Create the secret via Azure CLI

Bash
az keyvault secret set \
  --vault-name <KEY_VAULT_NAME> \
  --name "postgres-<SERVER_NAME>-monitoring" \
  --value '{"username":"db-o11y","password":"<DB_O11Y_PASSWORD>","host":"<SERVER_FQDN>","port":5432,"database":"postgres"}'

Kubernetes External Secrets configuration

Use the External Secrets Operator to sync the Azure secret into Kubernetes:

YAML
---
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: <SERVER_NAME>-db-monitoring-secretstore
spec:
  provider:
    azurekv:
      tenantId: <AZURE_TENANT_ID>
      vaultUrl: https://<KEY_VAULT_NAME>.vault.azure.net
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: <SERVER_NAME>-db-monitoring-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: <SERVER_NAME>-db-monitoring-secretstore
  dataFrom:
    - extract:
        key: postgres-<SERVER_NAME>-monitoring

Replace the placeholders:

  • SERVER_NAME: Azure PostgreSQL server name.
  • AZURE_TENANT_ID: Azure tenant ID.
  • KEY_VAULT_NAME: Azure Key Vault name.

Next steps

For an overview of key concepts, refer to Introduction to Database Observability.

For troubleshooting during setup, refer to Troubleshoot.