database_observability.postgres
Public preview: This is a public preview component. Public preview components are subject to breaking changes, and may be replaced with equivalent functionality that covers the same use case. To enable and use a public preview component, you must set the stability.level flag to public-preview or below, for example by running Alloy with --stability.level=public-preview.
Usage
database_observability.postgres "<LABEL>" {
data_source_name = <DATA_SOURCE_NAME>
forward_to = [<LOKI_RECEIVERS>]
targets = <TARGET_LIST>
}

Arguments
You can use the following arguments with database_observability.postgres:
- data_source_name: The Data Source Name for the PostgreSQL server to observe. The format must adhere to the pq library standards. Required.
- forward_to: Where to forward log entries after processing. Required.
- targets: The targets used to collect database observability metrics, typically from a prometheus.exporter.postgres component. Required.
- enable_collectors: A list of collector names to explicitly enable. Optional.

The configurable collectors include query_details, query_samples, schema_details, and explain_plans. Each has a corresponding block, described below.
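As a sketch only (the label, connection string, and referenced components are placeholders, not part of this documentation), a minimal configuration that enables a specific collector looks like this:

database_observability.postgres "example" {
  // DSN in pq format: postgres://user:password@host:port/database
  data_source_name = "postgres://user:pass@localhost:5432/mydb"
  // Forward processed log entries to a Loki component defined elsewhere
  forward_to = [loki.write.default.receiver]
  // Reuse the targets exported by a prometheus.exporter.postgres component
  targets = prometheus.exporter.postgres.example.targets
  // Explicitly enable the explain_plans collector
  enable_collectors = ["explain_plans"]
}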
Exports
The following fields are exported and can be referenced by other components:
- targets: The targets to expose for scraping, which discovery and scrape components can consume.
- logs_receiver: A receiver that log source components can forward PostgreSQL logs to for processing by the logs collector.
Blocks
You can use the following blocks with database_observability.postgres:
The > symbol indicates deeper levels of nesting.
For example, cloud_provider > aws refers to an aws block defined inside a cloud_provider block.
cloud_provider
The cloud_provider block has no attributes.
It contains zero or more aws or azure blocks.
You use the cloud_provider block to provide information related to the cloud provider that hosts the database under observation.
This information is appended as labels to the collected metrics.
The labels make it easier for you to filter and group your metrics.
aws
The aws block supplies the ARN identifier for the database being monitored.
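For example, a minimal sketch of the nested blocks (the ARN value is a placeholder; see the complete RDS example below):

cloud_provider {
  aws {
    // Amazon Resource Name of the database under observation
    arn = "arn:aws:rds:us-east-1:123456789012:db:production-db"
  }
}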
azure
The azure block supplies the identifying information for the database being monitored.
query_details
query_samples
schema_details
explain_plans
health_check
Logs Collector
The logs collector processes PostgreSQL logs received through the logs_receiver entry point and exports Prometheus metrics for query and server errors.
Exported Receiver
The component exports a logs_receiver entry point that must be fed by log source components.
For example:
- otelcol.receiver.awscloudwatch + otelcol.exporter.loki: reads CloudWatch Logs (RDS) and forwards them to the receiver
- loki.source.file: reads PostgreSQL log files and forwards them to the receiver
- otelcol.receiver.otlp + otelcol.exporter.loki: receives OTLP logs and forwards them to the receiver
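The OTLP path isn't shown in the complete examples below, so here is a minimal sketch, assuming an OTLP log source and a database_observability.postgres component labeled mydb (the labels and endpoint are illustrative):

// Receive OTLP logs over gRPC
otelcol.receiver.otlp "postgres_logs" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }
  output {
    logs = [otelcol.exporter.loki.postgres_logs.input]
  }
}

// Convert OTLP logs to Loki format and feed them to the logs collector
otelcol.exporter.loki "postgres_logs" {
  forward_to = [database_observability.postgres.mydb.logs_receiver]
}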
Metrics
The logs collector exports the following Prometheus metrics:
Required PostgreSQL Configuration
Supported Log Format
The collector expects PostgreSQL logs with the following prefix fields:

<timestamp>:<remote_host:port>:<user>@<database>:[<pid>]:<line>:<SQLSTATE>:<session_start>:<vtxid>:<txid>:<session_id>:<query><app><severity>: <message>

Example log line:

2026-02-02 21:35:40.130 UTC:10.24.155.141(34110):app_user@books_store:[32032]:2:40001:2026-02-02 21:33:19 UTC:25/112:0:693c34cb.2398::psqlERROR: canceling statement due to user request

PostgreSQL produces this format when the log_line_prefix parameter is set to %m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a.
Configure the log_line_prefix
Self-hosted PostgreSQL server
For the logs collector to work correctly, PostgreSQL must be configured with the following log_line_prefix:
-- Set log format (requires superuser)
ALTER SYSTEM SET log_line_prefix = '%m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a';
-- Reload configuration
SELECT pg_reload_conf();

AWS RDS
On AWS RDS, you cannot use ALTER SYSTEM commands. Instead, configure log_line_prefix via RDS Parameter Groups:
- Open the AWS RDS Console → Parameter Groups
- Create or modify your parameter group
- Set log_line_prefix to: %m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a
- Apply the parameter group to your RDS instance
Note
Ensure CloudWatch Logs export is enabled for Error log and General log in your RDS instance settings.
Check the configuration
Use the following SHOW statement to display and verify the prefix currently applied to the beginning of each log line.

SHOW log_line_prefix;

Historical Log Processing
The logs collector only processes logs with timestamps after the collector’s start time. This prevents re-counting historical logs when the source component replays old entries.
Behavior:
- On startup: Skips logs with timestamps before the collector started
- Relies on the source component features to prevent duplicate log ingestion across restarts
Examples
With loki.source.file
database_observability.postgres "orders_db" {
data_source_name = "postgres://user:pass@localhost:5432/dbname"
forward_to = [loki.relabel.orders_db.receiver]
targets = prometheus.exporter.postgres.orders_db.targets
enable_collectors = ["query_samples", "explain_plans"]
}
prometheus.exporter.postgres "orders_db" {
data_source_name = "postgres://user:pass@localhost:5432/dbname"
enabled_collectors = ["stat_statements"]
}
// Read PostgreSQL log files and forward to logs collector
loki.source.file "postgres_logs" {
targets = [{
__path__ = "/var/log/postgresql/postgresql-*.log",
job = "postgres-logs",
}]
forward_to = [database_observability.postgres.orders_db.logs_receiver]
}
loki.relabel "orders_db" {
forward_to = [loki.write.logs_service.receiver]
rule {
target_label = "job"
replacement = "integrations/db-o11y"
}
rule {
target_label = "instance"
replacement = "orders_db"
}
}
discovery.relabel "orders_db" {
targets = database_observability.postgres.orders_db.targets
rule {
target_label = "job"
replacement = "integrations/db-o11y"
}
rule {
target_label = "instance"
replacement = "orders_db"
}
}
prometheus.scrape "orders_db" {
targets = discovery.relabel.orders_db.targets
job_name = "integrations/db-o11y"
forward_to = [prometheus.remote_write.metrics_service.receiver]
}
prometheus.remote_write "metrics_service" {
endpoint {
url = sys.env("<GRAFANA_CLOUD_HOSTED_METRICS_URL>")
basic_auth {
username = sys.env("<GRAFANA_CLOUD_HOSTED_METRICS_ID>")
password = sys.env("<GRAFANA_CLOUD_RW_API_KEY>")
}
}
}
loki.write "logs_service" {
endpoint {
url = sys.env("<GRAFANA_CLOUD_HOSTED_LOGS_URL>")
basic_auth {
username = sys.env("<GRAFANA_CLOUD_HOSTED_LOGS_ID>")
password = sys.env("<GRAFANA_CLOUD_RW_API_KEY>")
}
}
}

Note
Persistent storage: The Alloy data path (--storage.path) must be persisted across restarts to maintain the loki.source.file positions file.
With otelcol.receiver.awscloudwatch
This example requires the --stability.level=experimental flag.
// Storage for CloudWatch state persistence
otelcol.storage.file "cloudwatch" {
directory = "/var/lib/alloy/storage"
}
// Fetch logs from CloudWatch
otelcol.receiver.awscloudwatch "rds_logs" {
region = "us-east-1"
storage = otelcol.storage.file.cloudwatch.handler
logs {
poll_interval = "1m"
start_from = "2026-02-01T00:00:00Z" // Set this as close as possible to the time you want to start collecting logs from
groups {
named {
group_name = "/aws/rds/instance/production-db/postgresql" // Insert your PostgreSQL RDS CloudWatch log group here
}
}
}
output {
logs = [otelcol.exporter.loki.rds_logs.input]
}
}
// Convert OTLP to Loki format
otelcol.exporter.loki "rds_logs" {
forward_to = [database_observability.postgres.rds_db.logs_receiver]
}
// Database observability component
database_observability.postgres "rds_db" {
data_source_name = "postgres://user:pass@rds-endpoint.region.rds.amazonaws.com:5432/postgres"
targets = prometheus.exporter.postgres.rds_db.targets
forward_to = [loki.relabel.rds_db.receiver]
cloud_provider {
aws {
arn = "arn:aws:rds:us-east-1:123456789012:db:production-db"
}
}
}
prometheus.exporter.postgres "rds_db" {
data_source_name = "postgres://user:pass@rds-endpoint.region.rds.amazonaws.com:5432/postgres"
enabled_collectors = ["stat_statements"]
}
loki.relabel "rds_db" {
forward_to = [loki.write.logs_service.receiver]
rule {
target_label = "job"
replacement = "integrations/db-o11y"
}
rule {
target_label = "instance"
replacement = "rds_db"
}
}
discovery.relabel "rds_db" {
targets = database_observability.postgres.rds_db.targets
rule {
target_label = "job"
replacement = "integrations/db-o11y"
}
rule {
target_label = "instance"
replacement = "rds_db"
}
}
prometheus.scrape "rds_db" {
targets = discovery.relabel.rds_db.targets
job_name = "integrations/db-o11y"
forward_to = [prometheus.remote_write.metrics_service.receiver]
}
prometheus.remote_write "metrics_service" {
endpoint {
url = sys.env("<GRAFANA_CLOUD_HOSTED_METRICS_URL>")
basic_auth {
username = sys.env("<GRAFANA_CLOUD_HOSTED_METRICS_ID>")
password = sys.env("<GRAFANA_CLOUD_RW_API_KEY>")
}
}
}
loki.write "logs_service" {
endpoint {
url = sys.env("<GRAFANA_CLOUD_HOSTED_LOGS_URL>")
basic_auth {
username = sys.env("<GRAFANA_CLOUD_HOSTED_LOGS_ID>")
password = sys.env("<GRAFANA_CLOUD_RW_API_KEY>")
}
}
}

AWS Credentials: The otelcol.receiver.awscloudwatch component requires AWS credentials. Configure them in one of the following ways:
- Environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION
- Docker: Mount ~/.aws/credentials or pass environment variables
- Kubernetes: Use IAM Roles for Service Accounts (IRSA) or Kubernetes secrets
Required IAM permissions:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"logs:FilterLogEvents",
"logs:GetLogEvents",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams"
],
"Resource": "arn:aws:logs:*:*:log-group:/aws/rds/instance/*" // Place your log-group arn(s) here
}]
}Persistent storage: The otelcol.storage.file directory must be persisted across restarts to maintain CloudWatch log stream positions.
Replace the following:
- <GRAFANA_CLOUD_HOSTED_METRICS_URL>: The URL for your Grafana Cloud hosted metrics.
- <GRAFANA_CLOUD_HOSTED_METRICS_ID>: The user ID for your Grafana Cloud hosted metrics.
- <GRAFANA_CLOUD_RW_API_KEY>: Your Grafana Cloud API key.
- <GRAFANA_CLOUD_HOSTED_LOGS_URL>: The URL for your Grafana Cloud hosted logs.
- <GRAFANA_CLOUD_HOSTED_LOGS_ID>: The user ID for your Grafana Cloud hosted logs.
Compatible components
database_observability.postgres can accept arguments from the following components:
- Components that export Targets
- Components that export Loki LogsReceiver
database_observability.postgres has exports that can be consumed by the following components:
- Components that consume Targets
- Components that consume Loki LogsReceiver
Note
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.



