---
title: "Grafana AI Observability | Grafana Cloud documentation"
description: "Monitor your AI agents in production with observability for conversations, costs, quality, and performance."
---

# Grafana AI Observability

AI observability for teams running LLM agents in production. Monitor conversations, costs, quality, and performance from a single pane of glass.

* * *

> Note
> 
> Grafana AI Observability is currently in [public preview](/docs/release-life-cycle/). Grafana Labs offers limited support, and breaking changes might occur prior to the feature being made generally available.

## Overview

Grafana AI Observability is built on OpenTelemetry. It gives you a single place to monitor agent activity, trace conversations, track costs, and evaluate response quality.

AI Observability provides thin SDKs for Go, Python, TypeScript, Java, and .NET that capture generation data with minimal code changes. Built-in framework integrations for LangChain, LangGraph, OpenAI Agents, Vercel AI SDK, and others make instrumentation automatic.
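To illustrate the shape of the telemetry the SDKs produce, the following is a minimal sketch that uses the plain OpenTelemetry Python SDK directly. The OTLP endpoint, credentials, span name, and token counts are placeholders rather than values from this documentation, and the AI Observability SDKs and framework integrations handle this wiring for you.

```python
# A minimal sketch using the vanilla OpenTelemetry Python SDK.
# Endpoint, credentials, and attribute values below are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export spans over OTLP to a Grafana Cloud endpoint (placeholder values).
exporter = OTLPSpanExporter(
    endpoint="https://otlp-gateway-<zone>.grafana.net/otlp/v1/traces",
    headers={"Authorization": "Basic <base64 of instanceID:token>"},
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-agent")

# Record one LLM generation as a span, annotated with OpenTelemetry
# gen_ai semantic-convention attributes.
with tracer.start_as_current_span("chat gpt-4o") as span:
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    # ... call your model here ...
    span.set_attribute("gen_ai.usage.input_tokens", 112)
    span.set_attribute("gen_ai.usage.output_tokens", 87)
```

Because AI Observability is built on OpenTelemetry, spans shaped like this flow through the same OTLP pipeline as the ones the SDKs and framework integrations generate automatically.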

With the Grafana plugin, you can browse conversations, drill into traces, compare agent versions, configure online evaluation rules, and use pre-built dashboards for metrics, logs, traces, and profiles.

## Explore

- [Get started](get-started/): Instrument your agents and deploy AI Observability to start monitoring AI workloads.
- [Configure](configure/): Tune SDK, deployment, evaluation, and plugin settings.
- [Guides](guides/): Explore practical workflows for conversations, evaluation, dashboards, and cost optimization.
- [Reference](reference/): API contracts, Helm chart values, and SDK configuration options.
