Manage conversation context
Grafana Assistant keeps conversations grounded in the signals, assets, and memories you reference. This article explains how context flows through a chat so you know when to start fresh, how @ mentions work, and how infrastructure memories influence the results.
Before you begin
- Active chat: Open the Assistant sidebar in Grafana Cloud.
- Data source access: Confirm you can query the data sources or dashboards you plan to mention.
- Infrastructure memories: Optional discovery scans keep context accurate across services and environments.
How conversation context works
Grafana Assistant tracks the entire chat transcript, including prompts, responses, mentions, and memories. The underlying conversation manager injects relevant history into each model request so the Assistant understands what you asked previously, which data sources you used, and which dashboards you referenced. When the topic changes completely, start a new conversation to prevent old context from leaking into the new request.
Use mentions to steer answers
Type @ in the input whenever you want to anchor an answer to a data source, dashboard, panel, or previous chat. The mention pins that resource to your prompt, which lets the Assistant read its configuration, including queries, visualization settings, and metadata, before responding. Because the conversation manager now knows exactly which asset you mean, it limits tool calls to the mentioned resources instead of guessing or drifting into staging environments. Mentioning an investigation or memory works the same way: the Assistant loads the related context so every follow-up stays aligned with the findings you already reviewed.
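For example, a prompt that pins both a dashboard and a data source might look like the following. The resource names here are illustrative placeholders, not assets that exist in your stack:

```
@Checkout Overview @prod-prometheus Why did the p99 latency panel
spike between 14:00 and 15:00 UTC yesterday?
```

With both resources pinned, the Assistant reads their queries and settings before answering instead of searching across everything you have access to.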
Load and apply infrastructure memories
Discovery scans store service topology, dependencies, and monitoring hints as infrastructure memories inside Grafana Cloud. Mention a service that exists in memory and the Assistant injects that structured data directly into the prompt so explanations stay grounded in reality. When a conversation covers multiple services, each mention brings along its associated memory, allowing the Assistant to compare systems without losing detail. Refresh the memories after major infrastructure changes so this background knowledge does not drift.
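As an illustration, when discovery scans have stored memories for two services, one prompt can mention both so the Assistant compares them with their stored topology in context. The service names below are hypothetical:

```
Compare recent error rates for @checkout-service and @payments-service.
Do they share a downstream dependency that could explain both?
```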
Manage context during long conversations
Summarize the goal in each new prompt. Short reminders such as “Stay focused on checkout latency” help the Assistant ignore earlier, unrelated history. Start a new chat whenever you pivot to a different service or incident so the conversation manager can drop the previous context and keep token usage low. If a response drifts, restate the key details and correct the assumption directly; future answers will follow the updated guidance.
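A corrective follow-up can be as short as a restated constraint. The wording below is illustrative:

```
Stay focused on checkout latency in production.
Ignore the staging environment we discussed earlier.
```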
Next steps
- Understand Infrastructure memories to see how they enhance context.
- Explore Collaboration techniques to share context across your team.