Set up MySQL
Note
Database Observability is currently in public preview. Grafana Labs offers limited support, and breaking changes might occur prior to the feature being made generally available.
Set up Database Observability with Grafana Cloud to collect telemetry from MySQL using Grafana Alloy. You configure your database and Alloy to forward telemetry to Grafana Cloud.
What you’ll achieve
In this article, you:
- Configure MySQL for monitoring.
- Run Alloy with the public preview Database Observability components.
- Forward telemetry to Grafana Cloud.
Before you begin
Review these requirements:
- Use MySQL version 8.0 or later.
Set up the MySQL database
Prepare MySQL for monitoring and query introspection.
Create a monitoring user and grant required privileges
Create the db-o11y user and grant base privileges:
CREATE USER 'db-o11y'@'%' IDENTIFIED BY '<DB_O11Y_PASSWORD>';
GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'db-o11y'@'%';
GRANT SELECT ON performance_schema.* TO 'db-o11y'@'%';
Replace <DB_O11Y_PASSWORD> with the password for the db-o11y MySQL user.
Grant object privileges for detailed data
Grant access to specific schemas when you want detailed information:
GRANT SELECT, SHOW VIEW ON payments.* TO 'db-o11y'@'%';
Alternatively, grant access to all schemas:
GRANT SELECT, SHOW VIEW ON *.* TO 'db-o11y'@'%';
Verify user privileges
Verify that the user exists and has the expected privileges:
SHOW GRANTS FOR 'db-o11y'@'%';
+-------------------------------------------------------------------+
| Grants for db-o11y@% |
+-------------------------------------------------------------------+
| GRANT PROCESS, REPLICATION CLIENT ON *.* TO `db-o11y`@`%` |
| GRANT SELECT, SHOW VIEW ON *.* TO `db-o11y`@`%` |
+-------------------------------------------------------------------+
Enable Performance Schema
Enable Performance Schema and verify the setting:
SHOW VARIABLES LIKE 'performance_schema';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| performance_schema | ON |
+--------------------+-------+
Increase digest length limits
Increase max_digest_length in the server configuration (the variable can only be set at server startup), then verify the setting:
SHOW VARIABLES LIKE 'max_digest_length';
+-------------------+-------+
| Variable_name | Value |
+-------------------+-------+
| max_digest_length | 4096 |
+-------------------+-------+
Increase performance_schema_max_digest_length and verify the setting:
SHOW VARIABLES LIKE 'performance_schema_max_digest_length';
+--------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------+-------+
| performance_schema_max_digest_length | 4096 |
+--------------------------------------+-------+
Increase unredacted SQL text length limits (optional)
Increase performance_schema_max_sql_text_length to collect actual SQL text when you disable query redaction later. Verify the setting:
SHOW VARIABLES LIKE 'performance_schema_max_sql_text_length';
+----------------------------------------+-------+
| Variable_name | Value |
+----------------------------------------+-------+
| performance_schema_max_sql_text_length | 4096 |
+----------------------------------------+-------+
Enable CPU consumer for samples (optional)
Check whether the events_statements_cpu consumer is enabled:
SELECT * FROM performance_schema.setup_consumers WHERE NAME = 'events_statements_cpu';
Enable the consumer if it’s disabled:
UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME = 'events_statements_cpu';
The events_statements_cpu consumer is disabled again when the database restarts. If you want Alloy to verify and enable the consumer on your behalf, extend the user grants as follows and enable the allow_update_performance_schema_settings argument:
GRANT UPDATE ON performance_schema.setup_consumers TO 'db-o11y'@'%';
Enable wait event consumers (optional)
Enable wait event consumers when you want to collect wait events together with query samples.
Check whether the events_waits_current and events_waits_history consumers are enabled:
SELECT * FROM performance_schema.setup_consumers WHERE NAME IN ('events_waits_current', 'events_waits_history');
Enable the consumers if they’re disabled:
UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME IN ('events_waits_current', 'events_waits_history');
These consumers are disabled again when the database restarts. If you want Alloy to verify and enable the consumers on your behalf, follow the instructions in the previous “Enable CPU consumer for samples” section to configure auto_enable_setup_consumers.
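Because the Performance Schema variables above can only be set at server startup, and manually enabled consumers reset on restart, you can persist them in the server configuration. A minimal sketch, assuming a Debian-style configuration path (adjust the path for your distribution; on managed services such as Amazon RDS, set these through the provider's parameter group instead):

```shell
# Sketch: persist the settings from the sections above so they survive restarts.
# The configuration path is an assumption; adjust it for your environment.
install -d /etc/mysql/mysql.conf.d
cat >> /etc/mysql/mysql.conf.d/mysqld.cnf <<'EOF'
[mysqld]
performance_schema = ON
max_digest_length = 4096
performance_schema_max_digest_length = 4096
performance_schema_max_sql_text_length = 4096
# Optional: enable the consumers at startup instead of via UPDATE statements
performance_schema_consumer_events_statements_cpu = ON
performance_schema_consumer_events_waits_current = ON
performance_schema_consumer_events_waits_history = ON
EOF
```

Restart mysqld for the changes to take effect.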
Run and configure Alloy
Run Alloy and add the Database Observability configuration.
Run the latest Alloy version
Run Alloy version 1.11 or later with the --stability.level=experimental flag for the database_observability component. Find the latest stable version on Docker Hub.
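For example, a Docker invocation might look like this (the mount paths and version tag are illustrative, not prescriptive):

```shell
# Sketch: run Alloy with the experimental stability level required by the
# database_observability component. Mount paths are assumptions; the secrets
# directory holds the DSN files referenced by local.file components.
docker run -d --name alloy \
  -v "$(pwd)/config.alloy:/etc/alloy/config.alloy:ro" \
  -v "$(pwd)/secrets:/var/lib/alloy" \
  grafana/alloy:v1.11.0 \
    run /etc/alloy/config.alloy --stability.level=experimental
```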
Add the MySQL configuration blocks
Add these blocks to your Alloy configuration and replace <DB_NAME>. Create a local.file component whose content is the Data Source Name (DSN) string, for example "<DB_USER>:<DB_PASSWORD>@(<DB_HOST>:<DB_PORT>)/":
local.file "mysql_secret_<DB_NAME>" {
filename = "/var/lib/alloy/mysql_secret_<DB_NAME>"
is_secret = true
}
prometheus.exporter.mysql "integrations_mysqld_exporter_<DB_NAME>" {
data_source_name = local.file.mysql_secret_<DB_NAME>.content
enable_collectors = ["perf_schema.eventsstatements", "perf_schema.eventswaits"]
}
database_observability.mysql "mysql_<DB_NAME>" {
data_source_name = local.file.mysql_secret_<DB_NAME>.content
forward_to = [loki.relabel.database_observability_mysql_<DB_NAME>.receiver]
targets = prometheus.exporter.mysql.integrations_mysqld_exporter_<DB_NAME>.targets
// OPTIONAL: enable collecting samples of queries with their execution metrics. The SQL text is redacted to hide sensitive params.
enable_collectors = ["query_samples"]
// OPTIONAL: if `query_samples` collector is enabled, you can use
// the following setting to disable SQL text redaction (by default
// query samples are redacted).
query_samples {
disable_query_redaction = true
}
// OPTIONAL: provide additional information specific to the Cloud Provider
// that hosts the database to enable certain infrastructure observability features.
cloud_provider {
aws {
arn = "<AWS_RDS_DB_ARN>"
}
}
}
loki.relabel "database_observability_mysql_<DB_NAME>" {
forward_to = [loki.write.logs_service.receiver]
// OPTIONAL: add any additional relabeling rules; must be consistent with rules in "discovery.relabel"
rule {
target_label = "instance"
replacement = "<INSTANCE_LABEL>"
}
rule {
target_label = "<CUSTOM_LABEL_1>"
replacement = "<CUSTOM_VALUE_1>"
}
}
discovery.relabel "database_observability_mysql_<DB_NAME>" {
targets = database_observability.mysql.mysql_<DB_NAME>.targets
rule {
target_label = "job"
replacement = "integrations/db-o11y"
}
// OPTIONAL: add any additional relabeling rules; must be consistent with rules in "loki.relabel"
rule {
target_label = "instance"
replacement = "<INSTANCE_LABEL>"
}
rule {
target_label = "<CUSTOM_LABEL_1>"
replacement = "<CUSTOM_VALUE_1>"
}
}
prometheus.scrape "database_observability_mysql_<DB_NAME>" {
targets = discovery.relabel.database_observability_mysql_<DB_NAME>.output
job_name = "integrations/db-o11y"
forward_to = [prometheus.remote_write.metrics_service.receiver]
}
Replace the placeholders:
- DB_NAME: Logical database name Alloy uses in component identifiers (appears in component names and secret filenames).
- AWS_RDS_DB_ARN: AWS RDS database ARN for cloud provider integration (optional).
- INSTANCE_LABEL: Value that sets the instance label on logs and metrics (optional).
- CUSTOM_LABEL_1, CUSTOM_VALUE_1: Optional custom label key and value you attach to logs and metrics.
- Secret file content example: "DB_USER:DB_PASSWORD@(DB_HOST:DB_PORT)/".
  - DB_USER: Database user Alloy uses to connect.
  - DB_PASSWORD: Password for the database user.
  - DB_HOST: Hostname or IP address of the database.
  - DB_PORT: Database port number.
Add Prometheus and Loki write configuration
Add the Prometheus remote write and Loki write configuration. From Grafana Cloud, open your stack to get the URLs and generate API tokens:
prometheus.remote_write "metrics_service" {
endpoint {
url = sys.env("GCLOUD_HOSTED_METRICS_URL")
basic_auth {
password = sys.env("GCLOUD_RW_API_KEY")
username = sys.env("GCLOUD_HOSTED_METRICS_ID")
}
}
}
loki.write "logs_service" {
endpoint {
url = sys.env("GCLOUD_HOSTED_LOGS_URL")
basic_auth {
password = sys.env("GCLOUD_RW_API_KEY")
username = sys.env("GCLOUD_HOSTED_LOGS_ID")
}
}
}
Replace the placeholders:
- GCLOUD_HOSTED_METRICS_URL: Your Grafana Cloud Prometheus remote write URL.
- GCLOUD_HOSTED_METRICS_ID: Your Grafana Cloud Prometheus instance ID (username).
- GCLOUD_HOSTED_LOGS_URL: Your Grafana Cloud Loki write URL.
- GCLOUD_HOSTED_LOGS_ID: Your Grafana Cloud Loki instance ID (username).
- GCLOUD_RW_API_KEY: Grafana Cloud API token with write permissions.
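For example, you can export these variables in the environment that runs Alloy. All values below are placeholders; copy the real URLs and IDs from your stack's details page:

```shell
# Sketch: placeholder Grafana Cloud credentials; substitute your stack's values.
export GCLOUD_HOSTED_METRICS_URL='https://prometheus-prod-01-example.grafana.net/api/prom/push'
export GCLOUD_HOSTED_METRICS_ID='123456'
export GCLOUD_HOSTED_LOGS_URL='https://logs-prod-example.grafana.net/loki/api/v1/push'
export GCLOUD_HOSTED_LOGS_ID='654321'
export GCLOUD_RW_API_KEY='<API_TOKEN>'  # token with metrics and logs write permissions
```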
Configure the k8s-monitoring Helm chart
Extend your values.yaml when you use the k8s-monitoring Helm chart:
alloy:
image:
repository: 'grafana/alloy'
tag: <ALLOY_VERSION> # e.g. "v1.11.0"
alloy:
stabilityLevel: experimental
extraConfig: |
// Add the config blocks for Database Observability
prometheus.exporter.mysql "integrations_mysqld_exporter_<DB_NAME>" {
...
}
...
database_observability.mysql "mysql_<DB_NAME>" {
...
}
Replace the placeholders:
- ALLOY_VERSION: Alloy stable release version you use, for example, v1.11.0.
- DB_NAME: Logical database name Alloy uses in component identifiers.
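With the values in place, you might deploy the chart like this (the release and namespace names are illustrative):

```shell
# Sketch: install or upgrade the k8s-monitoring chart with the extended values.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install grafana-k8s-monitoring grafana/k8s-monitoring \
  --namespace monitoring --create-namespace \
  -f values.yaml
```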
Configure for multiple databases
The following complete example shows an Alloy Database Observability configuration that monitors two different databases:
prometheus.remote_write "metrics_service" {
endpoint {
url = sys.env("GCLOUD_HOSTED_METRICS_URL")
basic_auth {
password = sys.env("GCLOUD_RW_API_KEY")
username = sys.env("GCLOUD_HOSTED_METRICS_ID")
}
}
}
loki.write "logs_service" {
endpoint {
url = sys.env("GCLOUD_HOSTED_LOGS_URL")
basic_auth {
password = sys.env("GCLOUD_RW_API_KEY")
username = sys.env("GCLOUD_HOSTED_LOGS_ID")
}
}
}
local.file "mysql_secret_<DB_NAME_1>" {
filename = "/var/lib/alloy/mysql_secret_<DB_NAME_1>"
is_secret = true
}
prometheus.exporter.mysql "integrations_mysqld_exporter_<DB_NAME_1>" {
data_source_name = local.file.mysql_secret_<DB_NAME_1>.content
enable_collectors = ["perf_schema.eventsstatements", "perf_schema.eventswaits"]
}
database_observability.mysql "mysql_<DB_NAME_1>" {
data_source_name = local.file.mysql_secret_<DB_NAME_1>.content
forward_to = [loki.relabel.database_observability_mysql_<DB_NAME_1>.receiver]
targets = prometheus.exporter.mysql.integrations_mysqld_exporter_<DB_NAME_1>.targets
enable_collectors = ["query_samples"]
}
loki.relabel "database_observability_mysql_<DB_NAME_1>" {
forward_to = [loki.write.logs_service.receiver]
}
discovery.relabel "database_observability_mysql_<DB_NAME_1>" {
targets = database_observability.mysql.mysql_<DB_NAME_1>.targets
rule {
target_label = "job"
replacement = "integrations/db-o11y"
}
}
prometheus.scrape "database_observability_mysql_<DB_NAME_1>" {
targets = discovery.relabel.database_observability_mysql_<DB_NAME_1>.output
job_name = "integrations/db-o11y"
forward_to = [prometheus.remote_write.metrics_service.receiver]
}
local.file "mysql_secret_<DB_NAME_2>" {
filename = "/var/lib/alloy/mysql_secret_<DB_NAME_2>"
is_secret = true
}
prometheus.exporter.mysql "integrations_mysqld_exporter_<DB_NAME_2>" {
data_source_name = local.file.mysql_secret_<DB_NAME_2>.content
enable_collectors = ["perf_schema.eventsstatements", "perf_schema.eventswaits"]
}
database_observability.mysql "mysql_<DB_NAME_2>" {
data_source_name = local.file.mysql_secret_<DB_NAME_2>.content
forward_to = [loki.relabel.database_observability_mysql_<DB_NAME_2>.receiver]
targets = prometheus.exporter.mysql.integrations_mysqld_exporter_<DB_NAME_2>.targets
enable_collectors = ["query_samples"]
}
loki.relabel "database_observability_mysql_<DB_NAME_2>" {
forward_to = [loki.write.logs_service.receiver]
}
discovery.relabel "database_observability_mysql_<DB_NAME_2>" {
targets = database_observability.mysql.mysql_<DB_NAME_2>.targets
rule {
target_label = "job"
replacement = "integrations/db-o11y"
}
}
prometheus.scrape "database_observability_mysql_<DB_NAME_2>" {
targets = discovery.relabel.database_observability_mysql_<DB_NAME_2>.output
job_name = "integrations/db-o11y"
forward_to = [prometheus.remote_write.metrics_service.receiver]
}
Replace the placeholders:
- DB_NAME_1: Logical database name Alloy uses in component identifiers and secret filenames for the first database.
- DB_NAME_2: Logical database name Alloy uses in component identifiers and secret filenames for the second database.
- Secret file content example: "DB_USER:DB_PASSWORD@(DB_HOST:DB_PORT)/".
  - DB_USER: Database user Alloy uses to connect.
  - DB_PASSWORD: Password for the database user.
  - DB_HOST: Hostname or IP address of the database.
  - DB_PORT: Database port number.
Environment variables used above:
- GCLOUD_HOSTED_METRICS_URL: Your Grafana Cloud Prometheus remote write URL.
- GCLOUD_HOSTED_METRICS_ID: Your Grafana Cloud Prometheus instance ID (username).
- GCLOUD_HOSTED_LOGS_URL: Your Grafana Cloud Loki write URL.
- GCLOUD_HOSTED_LOGS_ID: Your Grafana Cloud Loki instance ID (username).
- GCLOUD_RW_API_KEY: Grafana Cloud API token with write permissions.
Next steps
For an overview of key concepts, refer to Introduction to Database Observability.