---
title: "Guides | Grafana Cloud documentation"
description: "Practical workflows for instrumenting agents, debugging conversations, evaluating quality, and optimizing costs with AI Observability."
---

# Guides

Explore practical workflows for instrumenting agents, debugging conversations, evaluating quality, and optimizing costs with AI Observability.

- [Browse and debug conversations](/docs/grafana-cloud/machine-learning/ai-observability/guides/conversations/)  
  Search, filter, and drill into conversations to understand what your agents did, where they failed, and how they performed.
- [Instrument agents with frameworks](/docs/grafana-cloud/machine-learning/ai-observability/guides/instrument-agents/)  
  Use AI Observability framework integrations to automatically capture generations from LangChain, LangGraph, OpenAI Agents, Vercel AI SDK, and other frameworks.
- [Use the agent catalog](/docs/grafana-cloud/machine-learning/ai-observability/guides/agent-catalog/)  
  Monitor agent versions, track tool and prompt changes, and compare performance across agents in the AI Observability agent catalog.
- [Use built-in dashboards](/docs/grafana-cloud/machine-learning/ai-observability/guides/dashboards/)  
  Monitor agent activity, performance, cost, and quality using the AI Observability analytics dashboards.
- [Set up online evaluation](/docs/grafana-cloud/machine-learning/ai-observability/guides/evaluation/)  
  Create evaluators and rules to continuously score agent quality on live production traffic.
- [Optimize cost and performance](/docs/grafana-cloud/machine-learning/ai-observability/guides/cost-optimization/)  
  Use AI Observability data to reduce LLM costs, improve cache efficiency, and tune agent performance.
