TraceQL workflows
Follow this guide to locate and analyze Tempo traces using natural language. Use it to debug latency spikes, dependency issues, or cascading failures.
What you’ll achieve
- Frame trace searches with the right services, attributes, and duration thresholds.
- Analyze slow spans and identify which services contribute most to latency.
- Share trace summaries or navigation links with teammates for deeper inspection.
Before you begin
- Tempo data source: Mention it with @ in your prompt.
- Trace attributes: Know which services, operation names, or tags you want to search.
Identify the trace target
Define the latency problem and the services involved so the Assistant can zero in on the spans that matter.
- Describe the latency issue and the relevant services. Provide example span names or attributes if you know them.
Query @tempo-traces for checkout-service traces longer than 2 seconds over the last 30 minutes.
- Ask for span breakdowns to see where time is spent.
Show the slowest spans and include their service names and durations.
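Behind a prompt like the one above, the Assistant typically builds a TraceQL query along these lines. This is a sketch using the service name and threshold from the example prompt; substitute your own:

```traceql
{ resource.service.name = "checkout-service" && duration > 2s }
```

The `resource.service.name` attribute and the `duration` intrinsic are standard TraceQL; the time range (last 30 minutes) is set in the query options rather than in the query itself.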
Refine and analyze
Iterate on the TraceQL filters and ask for insights so you understand where time is spent.
- Request attribute filters to narrow results, for example, Filter spans where http.status_code >= 500.
- Ask for bottleneck summaries, for example, Summarize which downstream services contribute to the latency.
- Use follow-up prompts to pivot into logs or metrics for the same trace IDs.
- Correct inaccuracies politely, remind the Assistant of the service name or threshold you care about, and iterate.
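The refinements above map to TraceQL filters you can also write yourself. The sketches below are illustrative: the first adds an error-status filter to the earlier search, and the second uses the descendant operator (>>) to surface slow downstream spans under checkout-service (the service name and the OpenTelemetry-style span.http.status_code attribute are assumptions, so match them to your own instrumentation):

```traceql
{ resource.service.name = "checkout-service" && span.http.status_code >= 500 && duration > 2s }

{ resource.service.name = "checkout-service" } >> { duration > 1s }
```

The descendant form is a common way to answer "which downstream services contribute to the latency": it returns spans slower than the threshold anywhere beneath the named service's spans.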
Share findings
Capture the key spans and narratives, then distribute links so teammates can inspect the traces themselves.
- Ask the Assistant to provide a narrative you can paste into an incident update.
- Request navigation links to the trace viewer for deeper inspection.
Share the TraceQL outcome
You now have TraceQL queries, supporting summaries, and direct links to the trace viewer. Use them to brief responders or feed investigations and dashboards.
Troubleshooting
- Missing traces: confirm the service emits spans to Tempo and that you selected the right data source.
- Query too broad: specify service name, operation, or duration thresholds.
- Need correlation: ask to cross-reference logs or metrics for the same trace IDs.
- Switching investigations: open a new conversation when you move from one incident to another so the trace context resets.
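For the "query too broad" case, narrowing usually means adding a service name, an operation (span) name, or both to a bare duration filter. A hedged before-and-after sketch, with an illustrative operation name:

```traceql
{ duration > 2s }

{ resource.service.name = "checkout-service" && name = "POST /checkout" && duration > 2s }
```

The first query scans every slow span in the data source; the second uses the `name` intrinsic alongside the service attribute to return only the operation you care about.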
Next steps
- Learn SQL workflows to query business data alongside observability signals.
- Correlate traces with other data using Data analysis techniques.