## What you get
| Feature | Description |
|---|---|
| Grafana Cloud Traces | Store and query distributed traces |
| Tempo | High-scale trace backend |
| Trace Explorer | Search and visualize traces |
| Span details | See exactly what happened at each step |
| Trace-to-logs | Correlate trace spans with log entries |
| Flexible sampling | Full-fidelity or tail sampling to balance coverage and cost |
| Grafana Assistant | Investigate traces using natural language |
## Questions answered
| With distributed tracing, you can answer… |
|---|
| How did this request flow through all my microservices? |
| Where did this request spend the most time? |
| Which service call failed and caused the error? |
| What database query was slow for this specific request? |
| What did downstream services return? |
## Problems solved
| Problem | Solution |
|---|---|
| “It’s slow” but you don’t know where | Traces show the time spent at each step. |
| Errors happen but you can’t find the cause | Span details show the exact error message. |
| Sampling misses important traces | Tail sampling keeps errors and slow requests; use full-fidelity tracing when needed. |
| Can’t connect a trace to logs | Trace-to-logs correlation jumps from any span to its log lines. |
## Example: Tracing a slow request

### Script
Distributed tracing is the foundation of Level 3. It lets you follow a single request as it flows through your entire system: every microservice, every database call, every external API.
In Grafana Cloud, traces are stored in Tempo, which is designed to handle massive scale at low cost. You search and explore traces using Trace Explorer, and you can click from any trace span directly to the relevant log lines.
You control how much you capture. Need every trace for debugging critical paths? You can do full-fidelity tracing. Want to optimize costs? Use tail sampling to keep errors, slow requests, and a baseline sample. The flexibility is yours.
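As one way to implement tail sampling, the OpenTelemetry Collector's `tail_sampling` processor can keep errors, keep slow requests, and sample a baseline of everything else. This is a sketch, not a complete pipeline config; the latency threshold and sampling percentage below are illustrative values you would tune for your own traffic:

```yaml
processors:
  tail_sampling:
    decision_wait: 10s            # buffer spans before deciding per trace
    policies:
      - name: keep-errors
        type: status_code
        status_code:
          status_codes: [ERROR]   # always keep traces containing errors
      - name: keep-slow
        type: latency
        latency:
          threshold_ms: 1000      # keep traces slower than 1 s (illustrative)
      - name: baseline
        type: probabilistic
        probabilistic:
          sampling_percentage: 10 # keep 10% of everything else (illustrative)
```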
Imagine a request that takes 1200 milliseconds. The trace shows you that 800 of those milliseconds were spent in a single database query. Without tracing, you’d be guessing. With tracing, you know exactly where to optimize.
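The arithmetic behind that example can be sketched in plain Python. This is an illustrative data structure, not a real tracing SDK; the span names and timings are made up to mirror the 1200 ms request above:

```python
# Illustrative trace: a parent span with one child span per step.
# (Plain Python data, not the Tempo API; names and timings are hypothetical.)
trace = {
    "name": "GET /checkout",
    "duration_ms": 1200,
    "children": [
        {"name": "auth-service.verify", "duration_ms": 50},
        {"name": "db.query SELECT orders", "duration_ms": 800},
        {"name": "payment-api.charge", "duration_ms": 250},
    ],
}

# Find where the request spent the most time.
hotspot = max(trace["children"], key=lambda span: span["duration_ms"])
share = hotspot["duration_ms"] / trace["duration_ms"]
print(f"{hotspot['name']}: {hotspot['duration_ms']} ms ({share:.0%} of the request)")
# → db.query SELECT orders: 800 ms (67% of the request)
```

That single `max` over child spans is, in essence, what you do visually when you scan a trace waterfall for the widest bar.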
And if you don’t want to write TraceQL queries yourself, Grafana Assistant can help. Ask in natural language: “Find the service causing the highest latency” or “Show me traces with errors in the checkout service.” The assistant generates the queries for you.
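For reference, the TraceQL the Assistant generates for a request like “show me traces with errors in the checkout service” looks roughly like the first query below; the second keeps only slow traces. The service name `checkout` and the 2-second threshold are illustrative:

```traceql
{ resource.service.name = "checkout" && status = error }

{ duration > 2s }
```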

