Use the agent catalog
The agent catalog automatically discovers and tracks your agents. It groups generations by agent name and effective version, then shows usage patterns over time.
Browse agents
Navigate to Agents in the AI Observability plugin to see all discovered agents. Each agent card shows:
- Agent name and active model.
- Current version hash.
- Generation count and error rate.
- Last active timestamp.
Use the search bar to filter agents by name. Demo agents seeded through the onboarding wizard display a demo badge.
Understand agent versions
By default, AI Observability computes an agent version as a SHA-256 hash of the system prompt and tool definitions. SDKs can also send an effective version to keep a stable catalog identity when the visible tool surface changes between turns. When the SDK does not send one, changing a prompt or adding, removing, or modifying a tool creates a new version automatically.
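To make the default behavior concrete, here is a minimal sketch of how a version hash like this can be derived. It assumes a canonical JSON serialization of the prompt and tool definitions; the exact canonicalization AI Observability applies may differ.

```python
import hashlib
import json

def agent_version(system_prompt: str, tools: list[dict]) -> str:
    # Serialize deterministically (sorted keys, no whitespace) so the same
    # prompt and tools always hash to the same version.
    canonical = json.dumps(
        {"system_prompt": system_prompt, "tools": tools},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

v1 = agent_version("You are a support agent.", [{"name": "search_docs"}])
v2 = agent_version(
    "You are a support agent.",
    [{"name": "search_docs"}, {"name": "open_ticket"}],
)
# Adding a tool changes the hash, so the catalog records a new version.
assert v1 != v2
```

Because the hash is deterministic, an agent whose prompt and tools are unchanged keeps the same version across runs; any edit to either input produces a new catalog entry.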
The version history for each agent shows:
- When each version was first and last seen.
- Which models each version used.
- The tool and prompt footprint for each version.
Compare versions
Use the agent catalog to compare metrics across versions:
- Token usage and cost trends.
- Error rates and latency distributions.
- Evaluation score changes.
This helps you determine whether a prompt change improved or degraded quality.
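As a rough illustration of the kind of aggregation behind this comparison, the sketch below groups hypothetical generation records by version and computes per-version error rate, average token usage, and average latency. The record fields here are invented for the example and do not reflect AI Observability's data schema.

```python
from collections import defaultdict

# Hypothetical generation records; field names are illustrative only.
generations = [
    {"version": "a1b2", "tokens": 900,  "error": False, "latency_ms": 420},
    {"version": "a1b2", "tokens": 1100, "error": True,  "latency_ms": 1800},
    {"version": "c3d4", "tokens": 700,  "error": False, "latency_ms": 350},
    {"version": "c3d4", "tokens": 650,  "error": False, "latency_ms": 310},
]

def summarize(gens: list[dict]) -> dict:
    by_version = defaultdict(list)
    for g in gens:
        by_version[g["version"]].append(g)
    summary = {}
    for version, rows in by_version.items():
        n = len(rows)
        summary[version] = {
            "generations": n,
            "error_rate": sum(r["error"] for r in rows) / n,
            "avg_tokens": sum(r["tokens"] for r in rows) / n,
            "avg_latency_ms": sum(r["latency_ms"] for r in rows) / n,
        }
    return summary

report = summarize(generations)
```

Comparing `report["a1b2"]` against `report["c3d4"]` side by side is, in miniature, what the catalog view does when you contrast two versions of the same agent.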
Model cards
AI Observability maintains model cards with metadata about each model your agents use. Model cards show pricing tiers, context windows, and capabilities. The catalog syncs model card data periodically to keep this information current.
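The sketch below shows one plausible shape for this kind of metadata and how pricing fields could feed a cost estimate. The field names and prices are hypothetical examples, not Grafana's model card schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model card record; fields and prices are made up."""
    name: str
    context_window: int          # max tokens the model accepts
    input_price_per_1k: float    # example USD price per 1k input tokens
    output_price_per_1k: float   # example USD price per 1k output tokens
    capabilities: list[str] = field(default_factory=list)

    def estimate_cost(self, input_tokens: int, output_tokens: int) -> float:
        # Simple linear pricing: tokens scaled to thousands times unit price.
        return (input_tokens / 1000) * self.input_price_per_1k + (
            output_tokens / 1000
        ) * self.output_price_per_1k

card = ModelCard(
    name="example-model",
    context_window=128_000,
    input_price_per_1k=0.5,
    output_price_per_1k=1.5,
    capabilities=["tools", "vision"],
)
cost = card.estimate_cost(input_tokens=2000, output_tokens=500)
```

Keeping pricing and context-window data alongside each model in this way is what lets the catalog attribute cost to agents without re-fetching provider metadata on every request.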