Answer quick questions with chat

This guide helps you get immediate answers about system health without leaving the Assistant conversation. Start here when you want to validate a hypothesis, explore telemetry, or collect talking points for an incident update.

What you’ll achieve

  • Ask focused questions about metrics, logs, traces, profiles, or SQL data and receive contextual answers.
  • Iterate on the response to refine scope, adjust queries, or request summaries.
  • Capture the findings in a shareable format for teammates or incident notes.

Before you begin

  • Relevant data sources: Prometheus for metrics, Loki for logs, Tempo for traces, Pyroscope for profiles, or SQL data for business signals.
  • Context items: Optional dashboards or panels you want to add as context with an @ mention.

Launch a focused conversation

Open Grafana Assistant, describe the question you need answered, and provide contextual hints so it can target the right data quickly.

  1. Open Grafana and select the sparkle icon to launch Grafana Assistant.
  2. Summarize what you are trying to learn in one sentence. Be explicit about the signal, timeframe, and scope.
  3. Add @ mentions for the specific data source, dashboard, or panel that best represents the system you care about.

Speak in natural language; you do not need slash commands. The clearer the intent, the easier it is for the Assistant to choose the right tool.

Use these prompt starters when you need inspiration:

  • How much CPU are our pods using in @prometheus-datasource?
  • Show me log lines mentioning "timeout" from @application-logs over the last hour.
  • Explain the request latency pattern for the panel I just mentioned.
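
For the first prompt above, the Assistant typically turns the question into a query against the mentioned Prometheus data source. A minimal sketch of what that query might look like, assuming the standard cAdvisor metric container_cpu_usage_seconds_total and a hypothetical production namespace label:

  # CPU usage per pod, averaged over the last 5 minutes (hypothetical namespace label)
  sum by (pod) (
    rate(container_cpu_usage_seconds_total{namespace="production"}[5m])
  )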

Iterate on the response

Refine the Assistant’s answer with follow-up prompts, nudging it toward the precise slice of data or summary you need.

The Assistant keeps conversation history, so refine or redirect as needed. Start broad when you are unsure, then narrow the scope with follow-up prompts. Break big questions into smaller prompts and iterate:

  • Drill into the 5xx errors for checkout-service and show the top URLs.
  • Compare the current error rate with the same time yesterday.
  • Summarize the query results as action items.
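
For comparisons like the second follow-up, the Assistant can lean on PromQL's offset modifier. A rough sketch, assuming a hypothetical http_requests_total metric labeled by service and status:

  # Ratio of the current 5xx rate to the same 5-minute window 24 hours ago
  sum(rate(http_requests_total{service="checkout-service", status=~"5.."}[5m]))
  /
  sum(rate(http_requests_total{service="checkout-service", status=~"5.."}[5m] offset 1d))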

Provide corrective feedback when it guesses wrong. A direct clarification helps the underlying query handlers adjust:

  • The service runs on port 8080, not 9090. Update the query and try again.
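
When the wrong port only shows up inside a label value, the adjustment can be as small as one matcher. A hedged sketch, assuming a hypothetical http_requests_total metric whose instance label carries the port:

  # Before the correction: scoped to the wrong port
  sum(rate(http_requests_total{instance=~".*:9090"}[5m]))

  # After the correction: scoped to port 8080
  sum(rate(http_requests_total{instance=~".*:8080"}[5m]))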

Capture and share the outcome

Turn the discussion into shareable notes or dashboards so stakeholders can act on the findings.

  • Ask the Assistant to summarize the answer in plain language so you can share it with teammates.
  • Use the Copy button in the chat bubble to paste the explanation into incident notes or tickets.
  • If you need a persistent view, ask the Assistant to create a new panel or dashboard and follow up in the dashboard management guides.
  • Attach links, screenshots, or sample values when you share the outcome so other stakeholders recognize the scenario immediately.

Share the quick-answer outcome

You have a concise answer you can share with stakeholders, plus a chat transcript that documents the prompts and clarifications you used. Save the conversation or extract the key bullets for your incident log.

Troubleshooting

  • The response does not match your environment: confirm that the @ mention resolves to the expected data source or dashboard. Try listing data sources, for example, "List Prometheus data sources I can query." If you change topics entirely, start a fresh conversation so the Assistant drops the old context.
  • The query fails with a syntax error: ask the Assistant to validate the query, for example, "Validate this PromQL query and fix the error." A sketch of this kind of fix follows this list.
  • The conversation context becomes noisy: start a new chat and restate the current objective.
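
The syntax problems the Assistant repairs are often small ones. A hypothetical example, using made-up metric and job names, where the range selector is missing its closing bracket:

  # Broken: the range selector is never closed
  rate(http_requests_total{job="checkout-service"}[5m)

  # Fixed: valid range vector syntax
  rate(http_requests_total{job="checkout-service"}[5m])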

Next steps