Configure signal correlation
Signal correlation enables you to navigate seamlessly between metrics, logs, traces, and profiles. This topic shows you how to configure the connections that make correlation possible.
Before you begin
- You are ingesting data from at least two telemetry signals.
- You have access to your application instrumentation code.
- You can modify your Alloy or OpenTelemetry Collector configuration.
Overview
Correlation works by connecting signals through:
- Shared labels and attributes: Common identifiers across all signals (service name, environment, and more). Metrics and logs use labels; traces use attributes.
- Trace context: Trace IDs embedded in logs and metrics.
- Data source configuration: Grafana settings that enable cross-signal navigation.
This guide covers the essential configuration for each correlation type.
The following diagram shows the configuration flow:
Configure shared labels and attributes
Shared labels and attributes are the foundation of correlation. They allow Grafana to match data across different signal types.
Choose your labels and attributes
Use consistent names and values across all signals. Metrics and logs use labels; traces use attributes. Essential identifiers:
- `service`, `service.name`, or `service_name` - The service or application name. For more information, refer to Loki labels.
- `environment` or `env` - Deployment environment (`prod`, `staging`, `dev`).
- `namespace` - Kubernetes namespace (if applicable).
- `cluster` - Cluster name (if applicable).
Caution
Names must match exactly (case-sensitive) across all data sources. For traces, use resource attributes (for example, `resource.service.name`).
Configure correlation in Alloy
Add external labels to your remote write and export configurations:
Note
Replace `<PROMETHEUS_USERNAME>`, `<LOKI_USERNAME>`, `<OTLP_USERNAME>`, and `<GRAFANA_CLOUD_API_KEY>` with your actual Grafana Cloud credentials. Find these in your Grafana Cloud stack's connection details.
```alloy
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus-xxx.grafana.net/api/prom/push"

    basic_auth {
      username = "<PROMETHEUS_USERNAME>"
      password = "<GRAFANA_CLOUD_API_KEY>"
    }
  }

  external_labels = {
    cluster     = "production",
    environment = "prod",
  }
}

loki.write "default" {
  endpoint {
    url = "https://logs-xxx.grafana.net/loki/api/v1/push"

    basic_auth {
      username = "<LOKI_USERNAME>"
      password = "<GRAFANA_CLOUD_API_KEY>"
    }
  }

  external_labels = {
    cluster     = "production",
    environment = "prod",
  }
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "https://otlp-gateway-xxx.grafana.net/otlp"
    auth     = otelcol.auth.basic.credentials.handler
  }
}

otelcol.auth.basic "credentials" {
  username = "<OTLP_USERNAME>"
  password = "<GRAFANA_CLOUD_API_KEY>"
}
```
Configure correlation in OpenTelemetry Collector
Add resource attributes for traces and labels for metrics and logs. Use the OTLP HTTP exporter to send logs to the Loki native OTLP endpoint.
Note
Replace `<PROMETHEUS_USERNAME>`, `<LOKI_USERNAME>`, and `<GRAFANA_CLOUD_API_KEY>` with your Grafana Cloud credentials. Find these in your Grafana Cloud stack's connection details.
```yaml
processors:
  resource:
    attributes:
      - key: environment
        value: prod
        action: upsert
      - key: cluster
        value: production
        action: upsert
      - key: service.name
        value: my-service
        action: upsert

exporters:
  prometheusremotewrite:
    endpoint: https://prometheus-xxx.grafana.net/api/prom/push
    headers:
      Authorization: Basic <BASE64_ENCODED_CREDENTIALS>
    external_labels:
      cluster: production
      environment: prod
  otlphttp/logs:
    endpoint: https://logs-xxx.grafana.net/otlp
    headers:
      Authorization: Basic <BASE64_ENCODED_CREDENTIALS>
  otlp/traces:
    endpoint: https://otlp-gateway-xxx.grafana.net:443
    headers:
      Authorization: Basic <BASE64_ENCODED_CREDENTIALS>

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [resource]
      exporters: [otlphttp/logs]
    metrics:
      receivers: [otlp]
      processors: [resource]
      exporters: [prometheusremotewrite]
    traces:
      receivers: [otlp]
      processors: [resource]
      exporters: [otlp/traces]
```
The Loki native OTLP endpoint automatically converts OpenTelemetry resource attributes to Loki labels. The `service.name`, `environment`, and `cluster` resource attributes become queryable labels in Loki.
Configure profiles labels
For profiles to correlate with other signals, ensure your Pyroscope instrumentation uses consistent labels:
Example for Go with Pyroscope:
```go
pyroscope.Start(pyroscope.Config{
	ApplicationName: "my-service",
	ServerAddress:   "https://profiles-xxx.grafana.net",
	Tags: map[string]string{
		"service_name": "my-service", // Must match trace service.name
		"environment":  "prod",
		"cluster":      "production",
	},
})
```
Important
Pyroscope uses `service_name` (with underscore) while traces use `service.name` (with dot). Configure the Tempo data source to map between these naming conventions.
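The mapping between the two conventions is a simple rename. This sketch (illustrative only, not part of any Grafana API) shows the correspondence:

```python
# Illustrative mapping between trace resource attributes (dot notation)
# and Pyroscope profile labels (underscore notation)
ATTRIBUTE_TO_PROFILE_LABEL = {
    "service.name": "service_name",
}

def to_profile_label(attribute: str) -> str:
    """Translate a span attribute name to its Pyroscope label name."""
    # Fall back to replacing dots with underscores for unmapped attributes
    return ATTRIBUTE_TO_PROFILE_LABEL.get(attribute, attribute.replace(".", "_"))

print(to_profile_label("service.name"))  # service_name
```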
Verify shared labels and attributes
Check that labels and attributes appear consistently across all signals:
- Query Prometheus: `up{cluster="production"}`
- Query Loki: `{cluster="production"}`
- Query Tempo: `{resource.cluster="production"}`
- Query Pyroscope: `{cluster="production"}`
All queries should return data from the same service.
Validate label configuration against limits
Before deploying, verify your label configuration stays within platform limits:
Tip
Design your shared labels to work within the most restrictive limit. Since logs allow only 15 labels per stream, limit your correlation labels to 10-12 to leave room for signal-specific labels.
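A quick sanity check before deploying is to count your planned correlation labels against that budget. This is a minimal sketch, assuming the 15-labels-per-stream Loki limit mentioned above; the label set and the recommended headroom are illustrative:

```python
# Planned shared correlation labels (illustrative values)
correlation_labels = {
    "service_name": "my-service",
    "environment": "prod",
    "cluster": "production",
    "namespace": "default",
}

LOKI_MAX_LABELS = 15   # Loki's per-stream label limit
RECOMMENDED_MAX = 12   # leave headroom for signal-specific labels

def check_label_budget(labels, limit=RECOMMENDED_MAX):
    """Return (ok, remaining) for a planned shared label set."""
    remaining = limit - len(labels)
    return remaining >= 0, remaining

ok, remaining = check_label_budget(correlation_labels)
print(ok, remaining)  # True, with 8 slots left for signal-specific labels
```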
Configure trace context propagation
Trace context propagation embeds trace IDs in logs, enabling log-to-trace navigation.
Automatic instrumentation (recommended)
Most OpenTelemetry SDKs automatically inject trace context into logs:
For Go:
```go
import (
	"log/slog"
	"os"
)

// Use a structured JSON logger; pair it with the OpenTelemetry slog bridge
// (go.opentelemetry.io/contrib/bridges/otelslog) to inject trace context
logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
```
For Java:
```xml
<!-- Add to logback.xml or log4j2.xml -->
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} trace_id=%X{trace_id} span_id=%X{span_id} - %msg%n</pattern>
```
For Python:
```python
from opentelemetry.instrumentation.logging import LoggingInstrumentor

# Enable automatic trace context injection
LoggingInstrumentor().instrument()
```
Manual instrumentation
If automatic injection isn’t available, manually add trace context to logs:
For Python:
```python
import logging

from opentelemetry import trace

logger = logging.getLogger(__name__)

def process_request():
    span = trace.get_current_span()
    span_context = span.get_span_context()
    logger.info(
        "Processing request",
        extra={
            "trace_id": format(span_context.trace_id, "032x"),
            "span_id": format(span_context.span_id, "016x"),
        },
    )
```
Verify trace IDs in logs
Query your logs to confirm trace IDs appear:
```
{service="api"} | json | trace_id != ""
```
You see log entries with `trace_id` fields.
Configure derived fields in Loki
Derived fields make trace IDs clickable in log entries, enabling one-click navigation to traces.
Configure the Loki data source
- In Grafana Cloud, go to Connections > Data sources.
- Select your Loki data source.
- Scroll to Derived fields.
- Click Add.
- Configure the field to match your log format:
For JSON logs:
- **Name**: `traceId`
- **Regex**: `"trace_id":\s*"(\w+)"`
- **URL Label**: (leave empty for internal link)
- **Internal link**: Toggle on
- **Data source**: Select your Tempo data source
- **Query**: `${__value.raw}`
For structured text logs:
- **Name**: `traceId`
- **Regex**: `trace_id=(\w+)`
- **Internal link**: Toggle on
- **Data source**: Select your Tempo data source
- **Query**: `${__value.raw}`
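Before saving the derived field, you can verify the regular expression against a sample of your own logs. This is a small sketch using the JSON-log regex from above; the log line is illustrative:

```python
import re

# Derived-field regex for JSON logs, as configured above
JSON_TRACE_RE = r'"trace_id":\s*"(\w+)"'

# Illustrative JSON log line
log_line = '{"level":"info","msg":"Processing request","trace_id":"0af7651916cd43dd8448eb211c80319c"}'

match = re.search(JSON_TRACE_RE, log_line)
if match:
    print(match.group(1))  # the captured trace ID
```

If the regex fails to match a real log line from your service, adjust it before configuring the data source.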
Test derived fields
- Query logs with trace IDs: `{service="api"} | json | trace_id != ""`
- Look for underlined trace IDs in log entries.
- Click a trace ID to navigate to the trace.
If trace IDs aren’t clickable, verify your regular expression matches the log format exactly.
Configure exemplars
Exemplars link metric data points to specific traces, enabling metrics-to-trace navigation.
Before you begin
- Your application generates exemplars with trace IDs.
- Your application exports OpenMetrics format (required for Grafana Cloud).
- Metrics use histogram or counter types (exemplars attach to histogram buckets and counter totals; gauges and summaries don't support exemplars).
Note
Exemplar limits: Grafana Cloud allows up to 100,000 exemplars per user by default. Exemplars are stored separately from metrics and have their own retention. For details, refer to the usage limits documentation.
Configure your application
Use Prometheus client libraries to emit exemplars:
For a Go application:
```go
import (
	"github.com/prometheus/client_golang/prometheus"
)

histogram := prometheus.NewHistogram(prometheus.HistogramOpts{
	Name: "http_request_duration_seconds",
	Help: "HTTP request duration",
})

// Record observation with exemplar
histogram.(prometheus.ExemplarObserver).ObserveWithExemplar(
	duration,
	prometheus.Labels{"trace_id": traceID},
)
```
For a Python application:
```python
from prometheus_client import Histogram

histogram = Histogram('http_request_duration_seconds', 'HTTP request duration')

# Record observation with exemplar
histogram.observe(duration, {'trace_id': trace_id})
```
Verify exemplar generation
Check that your application exports exemplars:
```bash
curl -H "Accept: application/openmetrics-text" http://your-app:9090/metrics | grep -i "trace_id"
```
You see exemplar annotations with trace IDs.
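If you want to inspect that output programmatically, an OpenMetrics exemplar follows the sample value as a `#`-prefixed label set. This sketch (with an illustrative metric line) extracts the trace ID:

```python
import re

# Illustrative OpenMetrics line with an exemplar annotation
line = ('http_request_duration_seconds_bucket{le="0.5"} 42 '
        '# {trace_id="0af7651916cd43dd8448eb211c80319c"} 0.23')

# The exemplar follows "#": a label set, a value, and an optional timestamp
match = re.search(r'#\s*\{trace_id="([0-9a-f]{32})"\}', line)
if match:
    print(match.group(1))  # the exemplar's trace ID
```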
Configure Grafana Alloy
Enable exemplar forwarding in your Prometheus remote_write:
```alloy
prometheus.remote_write "default" {
  endpoint {
    url            = "https://prometheus-xxx.grafana.net/api/prom/push"
    send_exemplars = true

    basic_auth {
      username = "<PROMETHEUS_USERNAME>"
      password = "<GRAFANA_CLOUD_API_KEY>"
    }
  }
}
```
Enable exemplars in dashboards
- Edit a dashboard panel using metrics with exemplars.
- Ensure the panel type is Time series (not Graph).
- In the panel, toggle Exemplars on in the legend.
Exemplars appear as small diamond points on the graph.
Troubleshoot exemplars
Configure traces to profiles
Link traces directly to profiles to see code-level performance for specific spans.
Before you begin
- Your application is instrumented for both traces and profiles
- Spans include profile metadata (for span profiles) or use time-based correlation
Configure span profiles (recommended)
Span profiles attach profiling data directly to trace spans.
Go with Pyroscope:
```go
import (
	"github.com/grafana/pyroscope-go"
)

// Enable span profiles
pyroscope.Start(pyroscope.Config{
	ApplicationName: "my-service",
	ServerAddress:   "https://profiles-xxx.grafana.net",
	ProfileTypes: []pyroscope.ProfileType{
		pyroscope.ProfileCPU,
		pyroscope.ProfileInuseObjects,
		pyroscope.ProfileAllocObjects,
	},
	// Enable span profiling
	EnableSpanProfiling: true,
})
```
Refer to Span profiles documentation for other languages.
Configure the Tempo data source
Enable traces-to-profiles linking:
- In Grafana Cloud, go to Connections > Data sources.
- Select your Tempo data source.
- Scroll to Traces to profiles.
- Toggle Enable.
- Select your Pyroscope data source.
- Configure profile type (for example, `process_cpu:cpu:nanoseconds:cpu:nanoseconds`).
- Map span attributes to profile labels:
  - Span attribute: `service.name`
  - Profile label: `service_name`
Test traces to profiles
- Open a trace in Explore or Traces Drilldown.
- Look for a Profiles link on spans.
- Click the link to view the profile for that span.
If the link doesn’t appear, verify span profiles are being sent or check time-based correlation settings.
Verify correlation is working
Use this checklist to confirm everything is configured.
Shared labels and attributes
- Same label/attribute names across Prometheus, Loki, Tempo, and Pyroscope
- Label and attribute values match exactly (case-sensitive)
- Query each data source to verify labels and attributes exist
Trace context
- Trace IDs appear in log entries
- Trace ID format is consistent (32-character hex)
- Logs from active traces contain valid trace IDs
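To check the 32-character hex format programmatically, a minimal validator (the sample IDs are illustrative):

```python
import re

# W3C-style trace IDs are 32 lowercase hex characters
TRACE_ID_RE = re.compile(r"^[0-9a-f]{32}$")

def is_valid_trace_id(trace_id: str) -> bool:
    """True if trace_id is a 32-character lowercase hex string."""
    return bool(TRACE_ID_RE.match(trace_id))

print(is_valid_trace_id("0af7651916cd43dd8448eb211c80319c"))  # True
print(is_valid_trace_id("not-a-trace-id"))                    # False
```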
Derived fields
- Loki data source has derived field configured
- Regex pattern matches your log format
- Trace IDs are clickable in log view
- Clicking opens the correct trace in Tempo
Exemplars
- Application exports exemplars with trace IDs
- Alloy/collector forwards exemplars
- Dashboard panels show exemplar points
- Clicking exemplars opens traces
Traces to profiles
- Tempo data source configured for traces to profiles
- Label mappings are correct
- Profiles link appears on trace spans
- Clicking opens profile for correct time range
Next steps
- Why correlation matters - Understand the value of correlation
- Navigate between signals - Master Grafana UI navigation
- Troubleshoot signal correlation - Solve common issues
- Exemplars configuration - Detailed exemplars setup
- Traces to profiles - Detailed profiles linking setup