GenAI observability setup
GenAI Observability provides comprehensive monitoring for Large Language Model (LLM) applications, including performance metrics, token usage tracking, cost analysis, and user interaction patterns.
Install the instrumentation SDK
Install the OpenLIT SDK in your Python environment:
pip install openlit
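To confirm the SDK is importable from the environment your application will run in, you can run a quick check with the Python standard library (an optional sanity check, not part of the setup itself):
import openlit  # fails here if the SDK is not installed
from importlib.metadata import version
print("openlit", version("openlit"))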
Configure OTEL environment variables
Get your Grafana Cloud OTEL credentials
If you haven’t already obtained your OTEL credentials during the main AI Observability setup, follow these steps:
- Sign in to Grafana Cloud and go to the Grafana Cloud Portal
- Select your organization if you have access to multiple
- Click your stack from the sidebar or main stack list
- Under Manage your stack, click the Configure button in the OpenTelemetry section
- Scroll down to the Password / API Token section and click Generate now (if you don’t have a token)
- Enter a name for the token and click Create token
- Click Close - you don’t need to copy the token manually
- Scroll down and copy the OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS values from the Environment variables section
Set the environment variables
Set the OpenTelemetry environment variables using the values you copied:
export OTEL_EXPORTER_OTLP_ENDPOINT="<YOUR_GRAFANA_OTEL_GATEWAY_URL>"
export OTEL_EXPORTER_OTLP_HEADERS="<YOUR_GRAFANA_OTEL_GATEWAY_AUTH>"
Replace the placeholders as follows:
- Replace <YOUR_GRAFANA_OTEL_GATEWAY_URL> with the OTEL_EXPORTER_OTLP_ENDPOINT value, for example https://otlp-gateway-<ZONE>.grafana.net/otlp
- Replace <YOUR_GRAFANA_OTEL_GATEWAY_AUTH> with the OTEL_EXPORTER_OTLP_HEADERS value, for example Authorization=Basic%20<BASE64 ENCODED INSTANCE ID AND API TOKEN>
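Exporter misconfiguration usually surfaces as missing telemetry rather than a hard error, so you may want a fail-fast check at application startup. Here is a minimal sketch; the variable names come from this guide, and the check itself is illustrative rather than part of OpenLIT:
import os

# Fail fast if the OTLP exporter configuration is incomplete
required = ("OTEL_EXPORTER_OTLP_ENDPOINT", "OTEL_EXPORTER_OTLP_HEADERS")
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing OTEL configuration: {', '.join(missing)}")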
Instrument your application
Add the following two lines to your application code:
import openlit
openlit.init()
The OpenLIT SDK automatically uses the OTEL environment variables to send telemetry data to your Grafana Cloud instance.
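If exporting environment variables is awkward in your deployment environment, OpenLIT can also be pointed at the gateway programmatically. The following is a minimal sketch, assuming the otlp_endpoint and otlp_headers keyword arguments described in the OpenLIT documentation; check the documentation for the exact options supported by your SDK version:
import openlit

# Pass the Grafana Cloud OTLP gateway settings directly instead of via env vars
openlit.init(
    otlp_endpoint="<YOUR_GRAFANA_OTEL_GATEWAY_URL>",
    otlp_headers="<YOUR_GRAFANA_OTEL_GATEWAY_AUTH>",
)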
Examples by LLM provider
OpenAI
from openai import OpenAI
import openlit

# Initialize OpenLIT instrumentation before creating the client
openlit.init()

client = OpenAI(
    api_key="YOUR_OPENAI_KEY"
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LLM Observability?",
        }
    ],
    model="gpt-3.5-turbo",
)
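Instrumentation does not change the response object, so you read it as usual:
print(chat_completion.choices[0].message.content)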
Anthropic
import anthropic
import openlit

# Initialize OpenLIT instrumentation before creating the client
openlit.init()

client = anthropic.Anthropic(
    api_key="YOUR_ANTHROPIC_KEY"
)

message = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=1000,
    messages=[
        {"role": "user", "content": "What is AI Observability?"}
    ]
)
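As with the OpenAI example, the response is the provider's standard object; the Anthropic SDK returns content as a list of blocks:
print(message.content[0].text)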
Refer to the OpenLIT documentation for more advanced configurations and additional LLM providers.
Visualize and analyze
With LLM Observability data now being collected and sent to Grafana Cloud, the next step is to visualize and analyze it: gain insight into your LLM application’s performance and behavior, and identify areas for improvement.
Navigate to the GenAI Observability dashboard in your Grafana Cloud instance to start exploring. The dashboard provides:
- Request monitoring - Volume, success rates, and response times
- Cost analysis - Real-time spend tracking and optimization insights
- Token optimization - Usage patterns and efficiency metrics
- Performance analytics - Model comparisons and trend analysis