---
title: "GenAI Observability Configuration | Grafana Cloud documentation"
description: "Configure GenAI Observability for optimal LLM monitoring and cost optimization"
---

# Configuration overview

This documentation covers tracing configuration, metrics configuration, and cost tracking for custom models to enhance the monitoring and observability of your LLM applications.

## Tracing configuration

OpenLIT provides several configuration options to customize tracing behavior for your GenAI applications. These options help you balance data collection, privacy, and performance.

### Using an existing OpenTelemetry tracer

If you already have an OpenTelemetry tracer configured in your application, you can pass it directly to OpenLIT:

```python
from opentelemetry import trace
import openlit

# Your existing tracer setup
tracer = trace.get_tracer(__name__)

# Pass the tracer to OpenLIT
openlit.init(tracer=tracer)
```
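
If the tracer comes from a fully configured `TracerProvider`, OpenLIT spans flow through the same exporter pipeline as the rest of your application. Here is a minimal sketch, assuming an OTLP/HTTP collector; the endpoint and service name below are illustrative:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
import openlit

# Export spans over OTLP/HTTP; point the endpoint at your own collector
provider = TracerProvider(resource=Resource.create({"service.name": "my-ai-app"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

# Hand the resulting tracer to OpenLIT so its spans join the same pipeline
openlit.init(tracer=trace.get_tracer(__name__))
```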

### Custom resource attributes

Enhance telemetry data with custom resource attributes using the `OTEL_RESOURCE_ATTRIBUTES` environment variable:

```bash
export OTEL_RESOURCE_ATTRIBUTES="service.instance.id=my-service-123,k8s.pod.name=ai-app-pod,k8s.namespace.name=production,k8s.node.name=worker-node-1"
```

OpenLIT includes these default resource attributes:

- `telemetry.sdk.name: openlit`
- `service.name: YOUR_SERVICE_NAME`
- `deployment.environment: YOUR_ENVIRONMENT_NAME`

Your custom attributes are added alongside these defaults.
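
You can also set the variable from code before initialization, since the SDK reads it when the tracer provider is created. A minimal sketch with illustrative attribute values:

```python
import os
import openlit

# Set before openlit.init() so the resource is built with these attributes
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = (
    "service.instance.id=my-service-123,k8s.namespace.name=production"
)

openlit.init()
```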

### Privacy and content controls

**Disable content tracing** for privacy compliance:

```python
import openlit

# Disable logging of prompts and completions
openlit.init(capture_message_content=False)
```

This prevents sensitive prompt and completion data from being included in traces while maintaining performance and cost metrics.

### Performance optimization

**Disable batching** for local development:

```python
import openlit

# Disable batch span processing for immediate trace delivery
openlit.init(disable_batch=True)
```
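
For example, while debugging against a collector on your own machine, you might pair this with an explicit endpoint. A sketch assuming a local OTLP/HTTP collector (the `otlp_endpoint` value is illustrative):

```python
import openlit

# Deliver each span immediately to a local collector; prefer batching in
# production to reduce per-span export overhead
openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
    disable_batch=True,
)
```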

**Disable specific instrumentation** to reduce overhead:

```python
import openlit

# Disable instrumentation for specific providers/frameworks
openlit.init(disabled_instrumentors=["anthropic", "langchain"])
```

### Manual tracing

Add custom traces around your AI workflows:

```python
import openlit
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

@openlit.trace
def generate_response(user_query):
    # All LLM calls within this function are automatically grouped
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_query}]
    )
    return response

# Or use a context manager for more control
def complex_ai_workflow(user_input):
    with openlit.start_trace("AI Workflow") as trace:
        # Step 1: Generate embeddings (application-specific helper)
        embeddings = generate_embeddings(user_input)

        # Step 2: Search vector database (application-specific helper)
        similar_docs = search_vectordb(embeddings)

        # Step 3: Generate response (application-specific helper)
        response = generate_llm_response(similar_docs)

        # Set custom metadata and results
        trace.set_result(response)
        trace.set_metadata({
            "docs_found": len(similar_docs),
            "embedding_model": "text-embedding-ada-002"
        })

    return response
```
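
As a rule of thumb, the `@openlit.trace` decorator fits single functions whose inner LLM calls should be grouped under one parent span, while `openlit.start_trace` fits multi-step workflows where you want to attach results and metadata explicitly.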

## Metrics configuration

OpenLIT provides automatic instrumentation for collecting OpenTelemetry metrics from your GenAI applications. These metrics complement tracing data and are essential for creating dashboards and monitoring system performance.

### Disable metrics collection

Metrics collection is enabled by default; you can disable it if needed:

```python
import openlit

# Disable metrics collection
openlit.init(disable_metrics=True)
```

### Using an existing OpenTelemetry meter

If you already have an OpenTelemetry metrics meter configured, you can pass it directly to OpenLIT:

```python
from opentelemetry import metrics
import openlit

# Your existing meter setup
meter = metrics.get_meter(__name__)

# Pass the meter to OpenLIT
openlit.init(meter=meter)
```
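
As with tracing, a meter backed by a fully configured `MeterProvider` lets OpenLIT metrics flow through your existing export pipeline. A minimal sketch, assuming an OTLP/HTTP collector (the endpoint is illustrative):

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
import openlit

# Export metrics over OTLP/HTTP on the reader's default interval
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="http://localhost:4318/v1/metrics")
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

# Hand the resulting meter to OpenLIT so its metrics join the same pipeline
openlit.init(meter=metrics.get_meter(__name__))
```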

### Add custom resource attributes

Just like with tracing, you can enhance metrics with custom resource attributes using the `OTEL_RESOURCE_ATTRIBUTES` environment variable:

```bash
export OTEL_RESOURCE_ATTRIBUTES="service.instance.id=my-service-123,k8s.pod.name=ai-app-pod,k8s.namespace.name=production,k8s.node.name=worker-node-1"
```

OpenLIT includes these default resource attributes for metrics:

- `telemetry.sdk.name: openlit`
- `service.name: YOUR_SERVICE_NAME`
- `deployment.environment: YOUR_ENVIRONMENT_NAME`

## Track cost for custom models

OpenLIT includes built-in pricing information for standard LLM providers, but you can configure custom pricing for specialized models or enterprise pricing agreements.

### Using the default pricing

The default pricing file is recommended for general use as it’s regularly updated with the latest pricing from various LLM providers. This covers standard models from:

- OpenAI (GPT-3.5, GPT-4, etc.)
- Anthropic (Claude models)
- Google (Gemini, PaLM)
- AWS Bedrock models
- Azure OpenAI
- And other major providers

### Using custom pricing

For custom models, fine-tuned models, or enterprise pricing agreements, you can specify your own pricing structure:

**From external URL:**

```python
import openlit

# Load pricing from a URL (useful for centralized pricing management)
openlit.init(pricing_json="https://your-company.com/ai-pricing.json")
```

**From local file:**

```python
import openlit

# Load pricing from a local file
openlit.init(pricing_json="/path/to/your/local/pricing.json")
```

#### Pricing JSON structure

Your custom pricing file must follow the same structure as the [default OpenLIT pricing file](https://github.com/openlit/openlit/blob/main/src/openlit/pricing/pricing.json). The JSON should contain pricing details with model names and their associated costs in USD per 1000 tokens.
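
As a rough illustration only (the model name and prices below are hypothetical, and the default pricing file remains the authoritative schema reference), a chat-model entry looks approximately like this:

```json
{
  "chat": {
    "my-fine-tuned-model": {
      "promptPrice": 0.002,
      "completionPrice": 0.004
    }
  }
}
```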

#### Important considerations

1. **JSON Structure** - The custom pricing JSON must follow the same structure as the default pricing file. Mismatched structure prevents accurate cost calculations.
2. **Accessibility** - Ensure the pricing file path or URL is accessible during OpenLIT initialization to fetch the necessary pricing data.
3. **Updates** - When pricing changes:
   
   - **URL-based**: Update the file at the URL location
   - **Local file**: Update the file directly on your system
   - **Restart required**: Application restart may be needed to pick up pricing changes
4. **Fallback behavior** - If the custom pricing file fails to load, OpenLIT falls back to the default pricing where possible.
