Build your alert query

The query is the foundation of your alert rule—it defines what data Grafana monitors. You’ll use the query builder to construct a query that fetches the metric or log data you identified earlier.

To build your alert query, complete the following steps:

  1. In Grafana Cloud, navigate to Alerts & IRM > Alerting > Alert rules from the main menu.

  2. Click + New alert rule in the upper right.

    The alert rule creation form opens with sections for defining your query, conditions, evaluation behavior, and notifications.

  3. In the Define query and alert condition section, locate the query editor.

    The default data source is pre-selected. Change it if needed to match where your data is stored (Prometheus/Mimir for metrics, Loki for logs).

  4. Select the data source for your alert:

    | If you’re alerting on  | Select                               |
    |------------------------|--------------------------------------|
    | Infrastructure metrics | Your Prometheus or Mimir data source |
    | Logs                   | Your Loki data source                |
  5. Build your query using the query builder:

    For metrics (Prometheus/Mimir):

    • Click Metric and select or search for your metric name.

      For example, select `node_cpu_seconds_total` for CPU monitoring.

    • Add label filters to narrow the scope.

      For example, add `mode!="idle"` to exclude idle CPU time.

    • Apply functions as needed.

      For example, wrap the metric in `rate()` and aggregate with `avg by (instance)` to get the average CPU usage per host.

    For logs (Loki):

    • Enter a log query using LogQL.

      For example, `{job="myapp"} |= "error"` finds log lines containing "error".

    • Use metric queries for rate-based alerts.

      For example, `rate({job="myapp"} |= "error" [5m])` returns the per-second rate of error lines over a five-minute window.
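
    Putting these pieces together, a complete alert query might look like one of the following sketches. The metric name, label values, and job name (`node_cpu_seconds_total`, `myapp`) are the examples from the steps above; substitute the names from your own environment.

    ```promql
    # Average non-idle CPU usage per host over the last 5 minutes
    avg by (instance) (
      rate(node_cpu_seconds_total{mode!="idle"}[5m])
    )
    ```

    ```logql
    # Per-second rate of log lines containing "error", per stream,
    # over the last 5 minutes
    rate({job="myapp"} |= "error" [5m])
    ```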

  6. Click Run queries to preview the data your query returns.

    Verify that the results match what you expect to monitor. You should see data points for each dimension (host, service, etc.) you want to alert on.

  7. If your query returns multiple time series (multi-dimensional), verify that each series is labeled distinctly.

    | If alerting on | Multi-dimensional example                                                        |
    |----------------|----------------------------------------------------------------------------------|
    | Metrics        | CPU across 5 hosts → 5 series, each with a unique `instance` label               |
    | Logs           | Errors across 3 services → 3 series, each with a unique `job` or `service` label |
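
    To illustrate, a multi-dimensional metrics query such as the CPU example above returns one series per host, and each series becomes its own alert instance. The hostnames and values below are hypothetical:

    ```promql
    avg by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))

    # Hypothetical result: one series per host, each evaluated
    # as a separate alert instance
    # {instance="web-1:9100"}  0.42
    # {instance="web-2:9100"}  0.87
    # {instance="db-1:9100"}   0.13
    ```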

In the next milestone, you’ll set the conditions that determine when this alert fires.

More to explore (optional)

At this point in your journey, you can explore the following paths:

Query and transform data

Grafana-managed alert rules

