
Set up AWS Aurora MySQL

Note

Database Observability is currently in public preview. Grafana Labs offers limited support, and breaking changes might occur prior to the feature being made generally available.

Set up Database Observability with Grafana Cloud to collect telemetry from AWS Aurora MySQL clusters using Grafana Alloy. You configure your Aurora cluster and Alloy to forward telemetry to Grafana Cloud.

What you’ll achieve

In this article, you:

  • Configure Aurora MySQL cluster parameter groups for monitoring.
  • Create monitoring users with required privileges.
  • Configure Alloy with the Database Observability components.
  • Forward telemetry to Grafana Cloud.

Before you begin

Review these requirements:

  • Aurora MySQL 8.0 or later.
  • Access to modify Aurora cluster parameter groups.
  • Grafana Alloy deployed and accessible to your Aurora cluster.
  • Network connectivity between Alloy and your Aurora cluster endpoint.

For general MySQL setup concepts, refer to Set up MySQL.
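
To confirm the network connectivity requirement before you continue, you can run a quick check from the host (or pod) where Alloy runs. This is a minimal sketch; the endpoint and admin user are placeholders for your environment.

Bash
# Check that the Aurora instance endpoint is reachable on the MySQL port (default 3306).
nc -vz -w 5 <INSTANCE_ENDPOINT> 3306

# If the MySQL client is installed, also confirm that the server answers.
mysqladmin --host=<INSTANCE_ENDPOINT> --port=3306 --user=<ADMIN_USER> -p ping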

Configure the DB parameter group

Enable the Performance Schema and related instrumentation by configuring the DB parameter group used by your Aurora MySQL instances. These parameters require a reboot to take effect.

Required parameters

| Parameter | Value | Notes |
|---|---|---|
| performance_schema | 1 | Requires restart |
| performance-schema-consumer-events-waits-current | ON | Requires restart |
| performance_schema_consumer_events_waits_history | ON | Requires restart |
| performance_schema_consumer_global_instrumentation | 1 | Requires restart |
| performance_schema_consumer_thread_instrumentation | 1 | Requires restart |
| performance_schema_max_digest_length | 4096 | Requires restart |
| performance_schema_max_sql_text_length | 4096 | Requires restart |
| max_digest_length | 4096 | Requires restart |

Using the AWS Console

  1. Open the RDS Console and navigate to Parameter groups.
  2. Create a new parameter group or modify an existing one with family aurora-mysql8.0.
  3. Set the parameters listed above.
  4. Apply the parameter group to your Aurora cluster.
  5. Reboot the cluster to apply changes.

For detailed console instructions, refer to Working with parameter groups in the AWS documentation.
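
If you prefer the AWS CLI to the console, the following sketch creates a DB parameter group and sets several of the required parameters. The group name is an example; set the remaining consumer parameters from the table in the same way.

Bash
# Create a DB parameter group for Aurora MySQL 8.0 (the group name is an example).
aws rds create-db-parameter-group \
  --db-parameter-group-name "<CLUSTER_NAME>-monitoring-params" \
  --db-parameter-group-family aurora-mysql8.0 \
  --description "Database Observability parameters"

# Set parameters from the table above; all of them take effect after a reboot.
aws rds modify-db-parameter-group \
  --db-parameter-group-name "<CLUSTER_NAME>-monitoring-params" \
  --parameters \
    "ParameterName=performance_schema,ParameterValue=1,ApplyMethod=pending-reboot" \
    "ParameterName=performance_schema_max_digest_length,ParameterValue=4096,ApplyMethod=pending-reboot" \
    "ParameterName=performance_schema_max_sql_text_length,ParameterValue=4096,ApplyMethod=pending-reboot" \
    "ParameterName=max_digest_length,ParameterValue=4096,ApplyMethod=pending-reboot"

# Attach the parameter group to each DB instance in the cluster.
aws rds modify-db-instance \
  --db-instance-identifier <INSTANCE_ID> \
  --db-parameter-group-name "<CLUSTER_NAME>-monitoring-params"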

Using Terraform

If you use the terraform-aws-modules/rds-aurora/aws module, set the following inputs:

hcl
create_db_parameter_group = true
db_parameter_group_family = "aurora-mysql8.0"
db_parameter_group_parameters = [
  {
    name         = "performance_schema"
    value        = "1"
    apply_method = "pending-reboot"
  },
  {
    name         = "performance-schema-consumer-events-waits-current"
    value        = "ON"
    apply_method = "pending-reboot"
  },
  {
    name         = "performance_schema_consumer_events_waits_history"
    value        = "ON"
    apply_method = "pending-reboot"
  },
  {
    name         = "performance_schema_consumer_global_instrumentation"
    value        = "1"
    apply_method = "pending-reboot"
  },
  {
    name         = "performance_schema_consumer_thread_instrumentation"
    value        = "1"
    apply_method = "pending-reboot"
  },
  {
    name         = "performance_schema_max_digest_length"
    value        = "4096"
    apply_method = "pending-reboot"
  },
  {
    name         = "performance_schema_max_sql_text_length"
    value        = "4096"
    apply_method = "pending-reboot"
  },
  {
    name         = "max_digest_length"
    value        = "4096"
    apply_method = "pending-reboot"
  }
]

Alternatively, use a standalone aws_db_parameter_group resource:

hcl
resource "aws_db_parameter_group" "aurora_mysql_monitoring" {
  name   = "<CLUSTER_NAME>-monitoring-params"
  family = "aurora-mysql8.0"

  parameter {
    name         = "performance_schema"
    value        = "1"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "performance-schema-consumer-events-waits-current"
    value        = "ON"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "performance_schema_consumer_events_waits_history"
    value        = "ON"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "performance_schema_consumer_global_instrumentation"
    value        = "1"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "performance_schema_consumer_thread_instrumentation"
    value        = "1"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "performance_schema_max_digest_length"
    value        = "4096"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "performance_schema_max_sql_text_length"
    value        = "4096"
    apply_method = "pending-reboot"
  }

  parameter {
    name         = "max_digest_length"
    value        = "4096"
    apply_method = "pending-reboot"
  }
}

Replace <CLUSTER_NAME> with your Aurora cluster name.

After applying the parameter group to your cluster, restart the cluster for the changes to take effect.
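
One way to do this with the AWS CLI is to reboot each DB instance in the cluster and then confirm that no parameter changes are still pending. This is a sketch; the instance identifiers are placeholders.

Bash
# Reboot each instance in the cluster so pending-reboot parameters take effect.
aws rds reboot-db-instance --db-instance-identifier <WRITER_INSTANCE_ID>
aws rds reboot-db-instance --db-instance-identifier <READER_INSTANCE_ID>

# Optionally confirm that the parameter group is in sync and no changes are pending.
aws rds describe-db-instances --db-instance-identifier <WRITER_INSTANCE_ID> \
  --query 'DBInstances[0].DBParameterGroups'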

Create a monitoring user and grant required privileges

Connect to your Aurora MySQL cluster and create the monitoring user:

SQL
CREATE USER 'db-o11y'@'%' IDENTIFIED BY '<DB_O11Y_PASSWORD>';
GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'db-o11y'@'%';
GRANT SELECT ON performance_schema.* TO 'db-o11y'@'%';

Replace <DB_O11Y_PASSWORD> with a secure password for the db-o11y MySQL user.

Disable tracking of monitoring user queries

Prevent tracking of queries executed by the monitoring user itself:

SQL
UPDATE performance_schema.setup_actors SET ENABLED = 'NO', HISTORY = 'NO' WHERE USER = 'db-o11y';
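
To check the result, you can inspect setup_actors as an administrative user. This is a quick sketch; the endpoint and admin user are placeholders.

Bash
# Inspect setup_actors to confirm how the monitoring user is handled.
mysql --host=<INSTANCE_ENDPOINT> --user=<ADMIN_USER> -p \
  -e "SELECT * FROM performance_schema.setup_actors;"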

Grant object privileges for detailed data

Grant access to specific schemas when you want detailed information:

SQL
GRANT SELECT, SHOW VIEW ON <SCHEMA_NAME>.* TO 'db-o11y'@'%';

Replace <SCHEMA_NAME> with the name of the schema you want to monitor.

Alternatively, if you’re unsure which specific schemas need access, grant broader read access to all schemas:

SQL
GRANT SELECT, SHOW VIEW ON *.* TO 'db-o11y'@'%';

Grant privileges to auto-enable consumers

Grant update privileges for Performance Schema consumers if you want Alloy to auto-enable them:

SQL
GRANT UPDATE ON performance_schema.setup_consumers TO 'db-o11y'@'%';

Then, enable the allow_update_performance_schema_settings option in Alloy, as described in the database_observability.mysql component reference documentation.

Alternatively, enable consumers manually as described in the Set up MySQL guide.
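
Whichever approach you choose, you can check the current consumer state with a query like the following. This is a sketch; the endpoint is a placeholder, and any user with SELECT on performance_schema can run it.

Bash
# List Performance Schema consumers and whether they are enabled.
mysql --host=<INSTANCE_ENDPOINT> --user=db-o11y -p \
  -e "SELECT NAME, ENABLED FROM performance_schema.setup_consumers ORDER BY NAME;"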

Verify user privileges

Verify that the user exists and has the expected privileges:

SQL
SHOW GRANTS FOR 'db-o11y'@'%';

The output should look similar to the following. The exact grants depend on which optional privileges you granted:

+---------------------------------------------------------------------------------------------------------------+
| Grants for db-o11y@%                                                                                          |
+---------------------------------------------------------------------------------------------------------------+
| GRANT PROCESS, REPLICATION CLIENT ON *.* TO `db-o11y`@`%`                                                     |
| GRANT SELECT, SHOW VIEW ON *.* TO `db-o11y`@`%`                                                               |
| GRANT SELECT ON `performance_schema`.* TO `db-o11y`@`%`                                                       |
| GRANT INSERT, UPDATE ON `performance_schema`.`setup_actors` TO `db-o11y`@`%`                                  |
+---------------------------------------------------------------------------------------------------------------+

Verify parameter group settings

Verify that the parameter group settings were applied correctly after restarting the cluster:

SQL
SHOW VARIABLES LIKE 'performance_schema';

Expected result: Value is ON.

SQL
SHOW VARIABLES LIKE 'performance_schema_max_digest_length';

Expected result: Value is 4096.

SQL
SHOW VARIABLES LIKE 'performance_schema_max_sql_text_length';

Expected result: Value is 4096.

SQL
SHOW VARIABLES LIKE 'max_digest_length';

Expected result: Value is 4096.
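
To run all of these checks in one step from a shell, a sketch like the following works; the endpoint is a placeholder, and the monitoring user only needs its existing SELECT access.

Bash
# Verify the Performance Schema settings in a single call.
mysql --host=<INSTANCE_ENDPOINT> --user=db-o11y -p -e "
  SHOW GLOBAL VARIABLES WHERE Variable_name IN (
    'performance_schema',
    'performance_schema_max_digest_length',
    'performance_schema_max_sql_text_length',
    'max_digest_length');"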

Run and configure Alloy

Run Alloy and add the Database Observability configuration for your Aurora cluster.

Run the latest Alloy version

Run Alloy version 1.12.0 or later with the --stability.level=public-preview flag for the database_observability.mysql component. Find the latest stable version on Docker Hub.
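
For example, one way to run Alloy in Docker with the required stability flag looks like the following sketch. The configuration path is an example; also mount any secret files your configuration reads.

Bash
# Run Alloy with the public-preview stability level that database_observability.mysql requires.
docker run --rm \
  -v "$(pwd)/config.alloy:/etc/alloy/config.alloy" \
  grafana/alloy:latest \
    run --stability.level=public-preview /etc/alloy/config.alloy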

Add the Aurora MySQL configuration blocks

Note: If you use an Aurora primary/replica cluster setup, you must configure Grafana Alloy to connect to each instance endpoint individually, not to the cluster endpoint. This ensures that metrics and logs are correctly correlated with each node and that no data is missed during failovers or topology changes.

Add these blocks to your Alloy configuration for Aurora MySQL, replacing the placeholders described after the example. Create a local.file whose content is the Data Source Name (DSN) string, for example <DB_USER>:<DB_PASSWORD>@tcp(<INSTANCE_ENDPOINT>:<DB_PORT>)/; a sketch for creating this file follows the placeholder list:

Alloy
local.file "mysql_secret_<DB_NAME>" {
  filename  = "/var/lib/alloy/mysql_secret_<DB_NAME>"
  is_secret = true
}

prometheus.exporter.mysql "mysql_<DB_NAME>" {
  data_source_name  = local.file.mysql_secret_<DB_NAME>.content
  enable_collectors = ["perf_schema.eventsstatements"]
}

database_observability.mysql "mysql_<DB_NAME>" {
  data_source_name  = local.file.mysql_secret_<DB_NAME>.content
  forward_to        = [loki.relabel.database_observability_mysql_<DB_NAME>.receiver]
  targets           = prometheus.exporter.mysql.mysql_<DB_NAME>.targets

  cloud_provider {
    aws {
      arn = "<AWS_AURORA_INSTANCE_ARN>"
    }
  }
}

loki.relabel "database_observability_mysql_<DB_NAME>" {
  forward_to = [loki.write.logs_service.receiver]

  // OPTIONAL: add any additional relabeling rules
  // (must be consistent with rules in "discovery.relabel")
  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
}

discovery.relabel "database_observability_mysql_<DB_NAME>" {
  targets = database_observability.mysql.mysql_<DB_NAME>.targets

  // OPTIONAL: add any additional relabeling rules
  // (must be consistent with rules in "loki.relabel")
  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }

  rule {
    target_label = "instance"
    replacement  = "<INSTANCE_LABEL>"
  }
  rule {
    target_label = "<CUSTOM_LABEL_1>"
    replacement  = "<CUSTOM_VALUE_1>"
  }
}

prometheus.scrape "database_observability_mysql_<DB_NAME>" {
  targets    = discovery.relabel.database_observability_mysql_<DB_NAME>.output
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

Replace the placeholders:

  • DB_NAME: Database name Alloy uses in component identifiers (appears in component names and secret filenames).
  • AWS_AURORA_INSTANCE_ARN: The specific AWS Aurora instance ARN for cloud provider integration. Do not use the Cluster ARN.
  • INSTANCE_LABEL: Value that sets the instance label on logs and metrics (optional).
  • Secret file content DSN example: DB_USER:DB_PASSWORD@tcp(INSTANCE_ENDPOINT:DB_PORT)/.
    • DB_USER: Database user Alloy uses to connect (e.g. db-o11y).
    • DB_PASSWORD: Password for the database user.
    • INSTANCE_ENDPOINT: The specific instance endpoint. Do not use the Cluster Endpoint here; doing so will break metric correlation during failovers.
    • DB_PORT: Database port number (default: 3306).
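
As referenced above, the secret file read by the local.file component holds the DSN. One way to create it is the following sketch; the path, password, and endpoint are placeholders, and printf avoids a trailing newline in the file.

Bash
# Write the DSN to the file read by the local.file component.
# Restrict permissions because the file contains a password.
install -d -m 700 /var/lib/alloy
printf '%s' 'db-o11y:<DB_O11Y_PASSWORD>@tcp(<INSTANCE_ENDPOINT>:3306)/' \
  > /var/lib/alloy/mysql_secret_<DB_NAME>
chmod 600 /var/lib/alloy/mysql_secret_<DB_NAME>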

For more information about the options supported by the database_observability.mysql component, refer to the reference documentation.

Add Prometheus and Loki write configuration

Add the Prometheus remote write and Loki write configuration. From Grafana Cloud, open your stack to get the URLs and generate API tokens:

Alloy
prometheus.remote_write "metrics_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_METRICS_URL")

    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_METRICS_ID")
    }
  }
}

loki.write "logs_service" {
  endpoint {
    url = sys.env("GCLOUD_HOSTED_LOGS_URL")

    basic_auth {
      password = sys.env("GCLOUD_RW_API_KEY")
      username = sys.env("GCLOUD_HOSTED_LOGS_ID")
    }
  }
}

Set the following environment variables (an example export follows the list):

  • GCLOUD_HOSTED_METRICS_URL: Your Grafana Cloud Prometheus remote write URL.
  • GCLOUD_HOSTED_METRICS_ID: Your Grafana Cloud Prometheus instance ID (username).
  • GCLOUD_HOSTED_LOGS_URL: Your Grafana Cloud Loki write URL.
  • GCLOUD_HOSTED_LOGS_ID: Your Grafana Cloud Loki instance ID (username).
  • GCLOUD_RW_API_KEY: Grafana Cloud API token with write permissions.
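
Before starting Alloy, export these variables in the environment where Alloy runs. The values below are illustrative only; substitute the URLs, instance IDs, and token from your own Grafana Cloud stack.

Bash
# Example environment for the sys.env() calls above; replace every value with your stack's details.
export GCLOUD_HOSTED_METRICS_URL="https://prometheus-prod-01-example.grafana.net/api/prom/push"
export GCLOUD_HOSTED_METRICS_ID="123456"
export GCLOUD_HOSTED_LOGS_URL="https://logs-prod-01-example.grafana.net/loki/api/v1/push"
export GCLOUD_HOSTED_LOGS_ID="654321"
export GCLOUD_RW_API_KEY="<GRAFANA_CLOUD_API_TOKEN>"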

Run and configure Alloy with the Grafana Kubernetes Monitoring Helm chart

If you use the k8s-monitoring Helm chart, extend your values.yaml and set databaseObservability.enabled to true within the MySQL integration.

YAML
integrations:
  collector: alloy-singleton
  mysql:
    instances:
      - name: <DB_NAME>
        jobLabel: integrations/db-o11y
        exporter:
          enabled: true
          collectors:
            perfSchemaEventsStatements:
              enabled: true
          dataSource:
            host: <INSTANCE_ENDPOINT> # Must be specific instance endpoint
            auth:
              usernameKey: <DB_USERNAME_SECRET_KEY>
              passwordKey: <DB_PASSWORD_SECRET_KEY>
        databaseObservability:
          enabled: true
          allowUpdatePerformanceSchemaSettings: true
          extraConfig: |
            cloud_provider {
              aws {
                arn = "<AWS_AURORA_INSTANCE_ARN>"
              }
            }
        secret:
          create: false
          name: <DB_NAME>
          namespace: mysql
        logs:
          enabled: true
          labelSelectors:
            app.kubernetes.io/instance: <DB_NAME>

Replace the placeholders:

  • DB_NAME: Database name Alloy uses in component identifiers (appears in component names and secrets).
  • INSTANCE_ENDPOINT: The specific instance endpoint. Do not use the Cluster Endpoint here; doing so will break metric correlation during failovers.
  • DB_USERNAME_SECRET_KEY: Kubernetes secret key containing database user.
  • DB_PASSWORD_SECRET_KEY: Kubernetes secret key containing database password.
  • AWS_AURORA_INSTANCE_ARN: The specific AWS Aurora instance ARN.

To see the full set of values, refer to the k8s-monitoring Helm chart documentation or the example configuration.
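
To apply the values, a typical invocation of the chart looks like the following sketch; the release name and namespace are examples.

Bash
# Install or upgrade the k8s-monitoring chart with your extended values.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install grafana-k8s-monitoring grafana/k8s-monitoring \
  --namespace monitoring --create-namespace \
  --values values.yaml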

Optional: Configure AWS Secrets Manager and Kubernetes

If you use AWS Secrets Manager with External Secrets Operator to manage database credentials, configure them as follows.

Secret path convention

Store monitoring credentials in AWS Secrets Manager at a path following this convention:

/kubernetes/rds/<CLUSTER_NAME>/monitoring

MySQL secret format

Store the secret as JSON with the following format:

JSON
{
  "username": "db-o11y",
  "password": "<DB_O11Y_PASSWORD>",
  "engine": "mysql",
  "host": "<INSTANCE_ENDPOINT>.rds.amazonaws.com",
  "port": 3306,
  "dbClusterIdentifier": "<CLUSTER_NAME>"
}

Replace the placeholders:

  • DB_O11Y_PASSWORD: Password for the db-o11y MySQL user.
  • INSTANCE_ENDPOINT: The specific instance endpoint. Do not use the Cluster Endpoint here; doing so will break metric correlation during failovers.
  • CLUSTER_NAME: Aurora cluster name.

Create the secret via AWS CLI

Bash
aws secretsmanager create-secret \
  --name "/kubernetes/rds/<CLUSTER_NAME>/monitoring" \
  --description "Alloy monitoring credentials for Aurora MySQL cluster" \
  --secret-string '{"username":"db-o11y","password":"<DB_O11Y_PASSWORD>","engine":"mysql","host":"<INSTANCE_ENDPOINT>.rds.amazonaws.com","port":3306,"dbClusterIdentifier":"<CLUSTER_NAME>"}'
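
To confirm the secret is stored and readable, you can fetch it back with the AWS CLI. This is a sketch; jq is optional and only used for pretty-printing.

Bash
# Read the secret back to confirm the path and JSON contents.
aws secretsmanager get-secret-value \
  --secret-id "/kubernetes/rds/<CLUSTER_NAME>/monitoring" \
  --query SecretString \
  --output text | jq .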

Kubernetes External Secrets configuration

Use the External Secrets Operator to sync the AWS secret into Kubernetes:

YAML
---
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: <CLUSTER_NAME>-db-monitoring-secretstore
spec:
  provider:
    aws:
      service: SecretsManager
      region: <AWS_REGION>
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: <CLUSTER_NAME>-db-monitoring-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: <CLUSTER_NAME>-db-monitoring-secretstore
  dataFrom:
    - extract:
        conversionStrategy: Default
        decodingStrategy: None
        key: /kubernetes/rds/<CLUSTER_NAME>/monitoring
        metadataPolicy: None
        version: AWSCURRENT

Replace the placeholders:

  • CLUSTER_NAME: Aurora cluster name.
  • AWS_REGION: AWS region where the secret is stored.
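
After applying the manifests, you can confirm that the ExternalSecret synced and that the target Kubernetes Secret exists. This is a sketch; the namespace is a placeholder for wherever you created the resources, and by default the target Secret takes the ExternalSecret's name.

Bash
# Check that the ExternalSecret reports Ready and the target Secret was created.
kubectl get externalsecret <CLUSTER_NAME>-db-monitoring-secret -n <NAMESPACE>
kubectl get secret <CLUSTER_NAME>-db-monitoring-secret -n <NAMESPACE>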

Next steps