---
title: "Configure signal correlation | Grafana Cloud documentation"
description: "Configure shared labels and attributes, trace context, and linking between telemetry signals."
---


# Configure signal correlation

Signal correlation enables you to navigate seamlessly between metrics, logs, traces, and profiles. This topic shows you how to configure the connections that make correlation possible.

## Before you begin

- You are ingesting data from at least two telemetry signals.
- You have access to your application instrumentation code.
- You can modify your Alloy or OpenTelemetry Collector configuration.

## Overview

Correlation works by connecting signals through:

- Shared labels and attributes: Common identifiers across all signals (service name, environment, and more). Metrics and logs use **labels**; traces use **attributes**.
- Trace context: Trace IDs embedded in logs and metrics.
- Data source configuration: Grafana settings that enable cross-signal navigation.

This guide covers the essential configuration for each correlation type.


## Configure shared labels and attributes

Shared labels and attributes are the foundation of correlation. They allow Grafana to match data across different signal types.

### Choose your labels and attributes

Use consistent names and values across all signals. Metrics and logs use labels; traces use attributes. Essential identifiers:

- **service**, **service.name**, or **service\_name** - The service or application name. For more information, refer to [Loki labels](/docs/loki/latest/get-started/labels/#default-labels-for-all-users).
- **environment** or **env** - Deployment environment (`prod`, `staging`, `dev`).
- **namespace** - Kubernetes namespace (if applicable).
- **cluster** - Cluster name (if applicable).

> Caution
> 
> Names must match exactly (case-sensitive) across all data sources. For traces, use [resource attributes](/docs/grafana-cloud/monitor-applications/application-observability/setup/resource-attributes/) (for example, `resource.service.name`).

### Configure correlation in Alloy

Add external labels to your remote write and export configurations:

> Note
> 
> Replace `<PROMETHEUS_USERNAME>`, `<LOKI_USERNAME>`, `<OTLP_USERNAME>`, and `<GRAFANA_CLOUD_API_KEY>` with your actual Grafana Cloud credentials. Find these in your Grafana Cloud stack’s connection details.


```alloy
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus-xxx.grafana.net/api/prom/push"

    basic_auth {
      username = "<PROMETHEUS_USERNAME>"
      password = "<GRAFANA_CLOUD_API_KEY>"
    }
  }

  external_labels = {
    cluster     = "production",
    environment = "prod",
  }
}

loki.write "default" {
  endpoint {
    url = "https://logs-xxx.grafana.net/loki/api/v1/push"

    basic_auth {
      username = "<LOKI_USERNAME>"
      password = "<GRAFANA_CLOUD_API_KEY>"
    }
  }

  external_labels = {
    cluster     = "production",
    environment = "prod",
  }
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "https://otlp-gateway-xxx.grafana.net/otlp"

    auth = otelcol.auth.basic.credentials.handler
  }
}

otelcol.auth.basic "credentials" {
  username = "<OTLP_USERNAME>"
  password = "<GRAFANA_CLOUD_API_KEY>"
}
```

### Configure correlation in OpenTelemetry Collector

Add resource attributes for traces and labels for metrics and logs. Use the OTLP HTTP exporter to send logs to the Loki native OTLP endpoint.

> Note
> 
> Replace `<PROMETHEUS_USERNAME>`, `<LOKI_USERNAME>`, and `<GRAFANA_CLOUD_API_KEY>` with your Grafana Cloud credentials. Find these in your Grafana Cloud stack’s connection details.


```yaml
processors:
  resource:
    attributes:
      - key: environment
        value: prod
        action: upsert
      - key: cluster
        value: production
        action: upsert
      - key: service.name
        value: my-service
        action: upsert

exporters:
  prometheusremotewrite:
    endpoint: https://prometheus-xxx.grafana.net/api/prom/push
    headers:
      Authorization: Basic <BASE64_ENCODED_CREDENTIALS>
    external_labels:
      cluster: production
      environment: prod

  otlphttp/logs:
    endpoint: https://logs-xxx.grafana.net/otlp
    headers:
      Authorization: Basic <BASE64_ENCODED_CREDENTIALS>

  otlphttp/traces:
    endpoint: https://otlp-gateway-xxx.grafana.net/otlp
    headers:
      Authorization: Basic <BASE64_ENCODED_CREDENTIALS>

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [resource]
      exporters: [otlphttp/logs]
    metrics:
      receivers: [otlp]
      processors: [resource]
      exporters: [prometheusremotewrite]
    traces:
      receivers: [otlp]
      processors: [resource]
      exporters: [otlphttp/traces]
```

The Loki native OTLP endpoint automatically converts OpenTelemetry resource attributes to Loki labels. The `service.name`, `environment`, and `cluster` resource attributes become queryable labels in Loki.
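
The `Authorization` headers above take a base64 encoding of `<username>:<api key>`. A quick way to produce the value (the credentials shown here are placeholders):

```python
import base64

# Placeholder credentials; substitute your stack's instance ID and API key
username = "123456"
api_key = "glc_example_key"

token = base64.b64encode(f"{username}:{api_key}".encode()).decode()
authorization_header = f"Basic {token}"
```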

### Configure profiles labels

For profiles to correlate with other signals, ensure your Pyroscope instrumentation uses consistent labels:

Example for Go with Pyroscope:


```go
pyroscope.Start(pyroscope.Config{
    ApplicationName: "my-service",
    ServerAddress:   "https://profiles-xxx.grafana.net",
    Tags: map[string]string{
        "service_name": "my-service",  // Must match trace service.name
        "environment":  "prod",
        "cluster":      "production",
    },
})
```

> Important
> 
> Pyroscope uses `service_name` (with underscore) while traces use `service.name` (with dot). Configure the Tempo data source to map between these naming conventions.
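
The renaming between conventions is mechanical. A sketch of the mapping (an illustrative helper, not a Grafana API):

```python
def attr_to_pyroscope_label(name: str) -> str:
    """Convert an OTel dot-delimited attribute name to Pyroscope's
    underscore-delimited label convention."""
    return name.replace(".", "_")
```

This is the same mapping you configure in the Tempo data source's traces-to-profiles settings.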

### Verify shared labels and attributes

Check that labels and attributes appear consistently across all signals:

1. Query Prometheus: `up{cluster="production"}`
2. Query Loki: `{cluster="production"}`
3. Query Tempo: `{resource.cluster="production"}`
4. Query Pyroscope: `{cluster="production"}`

All queries should return data from the same service.

### Validate label configuration against limits

Before deploying, verify your label configuration stays within platform limits:


| Check                    | Metrics limit           | Logs limit       | Action if exceeded                          |
|--------------------------|-------------------------|------------------|---------------------------------------------|
| Labels per series/stream | 40 max (30 recommended) | 15 max           | Reduce shared labels or move to log content |
| Label name length        | 1,024 characters        | 1,024 characters | Shorten label names                         |
| Label value length       | 2,048 characters        | 2,048 characters | Truncate or hash long values                |

> Tip
> 
> Design your shared labels to work within the most restrictive limit. Since logs allow only 15 labels per stream, limit your correlation labels to 10-12 to leave room for signal-specific labels.
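
A pre-deployment check against the table above might look like the following sketch. The limit values are copied from the table; treat them as defaults that can vary per stack:

```python
LOKI_MAX_LABELS = 15      # most restrictive per-stream limit
MAX_NAME_LENGTH = 1024    # characters
MAX_VALUE_LENGTH = 2048   # characters

def validate_labels(labels: dict) -> list:
    """Return a list of limit violations for a proposed shared label set."""
    problems = []
    if len(labels) > LOKI_MAX_LABELS:
        problems.append(f"{len(labels)} labels exceeds the Loki limit of {LOKI_MAX_LABELS}")
    for name, value in labels.items():
        if len(name) > MAX_NAME_LENGTH:
            problems.append(f"label name {name!r} exceeds {MAX_NAME_LENGTH} characters")
        if len(value) > MAX_VALUE_LENGTH:
            problems.append(f"value of {name!r} exceeds {MAX_VALUE_LENGTH} characters")
    return problems
```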

## Configure trace context propagation

Trace context propagation embeds trace IDs in logs, enabling log-to-trace navigation.

### Automatic instrumentation (recommended)

Most OpenTelemetry SDKs automatically inject trace context into logs:

For Go:


```go
import (
    "go.opentelemetry.io/contrib/bridges/otelslog"
)

// The otelslog bridge emits records through the OpenTelemetry SDK,
// which attaches the active trace and span IDs to each log record
logger := otelslog.NewLogger("my-service")
```

For Java:


```xml
<!-- Add to the appender pattern in logback.xml or log4j2.xml -->
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} trace_id=%X{trace_id} span_id=%X{span_id} - %msg%n</pattern>
```

For Python:


```python
from opentelemetry import trace
from opentelemetry.instrumentation.logging import LoggingInstrumentor

# Enable automatic trace context injection
LoggingInstrumentor().instrument()
```

### Manual instrumentation

If automatic injection isn’t available, manually add trace context to logs:

For Python:


```python
import logging
from opentelemetry import trace

logger = logging.getLogger(__name__)

def process_request():
    span = trace.get_current_span()
    span_context = span.get_span_context()

    logger.info(
        "Processing request",
        extra={
            "trace_id": format(span_context.trace_id, "032x"),
            "span_id": format(span_context.span_id, "016x"),
        }
    )
```
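
The `"032x"` and `"016x"` format specifications render the integer IDs in their canonical zero-padded lowercase hex forms (32 and 16 characters respectively):

```python
# Example IDs taken from the W3C Trace Context specification
trace_id_int = 0x4BF92F3577B34DA6A3CE929D0E0E4736  # 128-bit trace ID
span_id_int = 0x00F067AA0BA902B7                   # 64-bit span ID

print(format(trace_id_int, "032x"))  # 4bf92f3577b34da6a3ce929d0e0e4736
print(format(span_id_int, "016x"))   # 00f067aa0ba902b7
```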

### Verify trace IDs in logs

Query your logs to confirm trace IDs appear:


```logql
{service="api"} | json | trace_id != ""
```

You see log entries with `trace_id` fields.

## Configure derived fields in Loki

Derived fields make trace IDs clickable in log entries, enabling one-click navigation to traces.

### Configure the Loki data source

1. In Grafana Cloud, go to **Connections** > **Data sources**.
2. Select your Loki data source.
3. Scroll to **Derived fields**.
4. Click **Add**.
5. Configure the field to match your log format:

   For JSON logs:

   - **Name**: `traceId`
   - **Regex**: `"trace_id":\s*"(\w+)"`
   - **URL Label**: (leave empty for internal link)
   - **Internal link**: Toggle on
   - **Data source**: Select your Tempo data source
   - **Query**: `${__value.raw}`

   For structured text logs:

   - **Name**: `traceId`
   - **Regex**: `trace_id=(\w+)`
   - **Internal link**: Toggle on
   - **Data source**: Select your Tempo data source
   - **Query**: `${__value.raw}`
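
You can sanity-check the regular expressions against sample log lines before saving the data source. The log lines here are invented for illustration:

```python
import re

# The two derived-field patterns from the configuration above
json_pattern = re.compile(r'"trace_id":\s*"(\w+)"')
text_pattern = re.compile(r"trace_id=(\w+)")

json_line = '{"msg": "Processing request", "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736"}'
text_line = "INFO Processing request trace_id=4bf92f3577b34da6a3ce929d0e0e4736"

print(json_pattern.search(json_line).group(1))
print(text_pattern.search(text_line).group(1))
```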

### Test derived fields

1. Query logs with trace IDs: `{service="api"} | json | trace_id != ""`
2. Look for underlined trace IDs in log entries.
3. Click a trace ID to navigate to the trace.

If trace IDs aren’t clickable, verify your regular expression matches the log format exactly.

## Configure exemplars

Exemplars link metric data points to specific traces, enabling metrics-to-trace navigation.

### Before you begin

- Your application generates exemplars with trace IDs.
- Your application exports OpenMetrics format (required for Grafana Cloud).
- Metrics use histogram or counter types (OpenMetrics doesn't define exemplars for gauges or summaries).

> Note
> 
> **Exemplar limits**: Grafana Cloud allows up to 100,000 exemplars per user by default. Exemplars are stored separately from metrics and have their own retention. For details, refer to the [usage limits documentation](/docs/grafana-cloud/cost-management-and-billing/manage-invoices/understand-your-invoice/usage-limits/).

### Configure your application

Use Prometheus client libraries to emit exemplars:

For a Go application:


```go
import (
    "github.com/prometheus/client_golang/prometheus"
)

histogram := prometheus.NewHistogram(prometheus.HistogramOpts{
    Name: "http_request_duration_seconds",
    Help: "HTTP request duration",
})
prometheus.MustRegister(histogram)

// Record an observation with an exemplar carrying the current trace ID
histogram.(prometheus.ExemplarObserver).ObserveWithExemplar(
    duration,
    prometheus.Labels{"trace_id": traceID},
)
```

For a Python application:


```python
from prometheus_client import Histogram

histogram = Histogram('http_request_duration_seconds', 'HTTP request duration')

# Record observation with exemplar
histogram.observe(duration, {'trace_id': trace_id})
```

### Verify exemplar generation

Check that your application exports exemplars:


```bash
curl -H "Accept: application/openmetrics-text" http://your-app:9090/metrics | grep -i "trace_id"
```

You see exemplar annotations with trace IDs.
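
In OpenMetrics exposition, an exemplar follows the sample after a `#`, as a label set, value, and optional timestamp. An illustrative line (values invented):

```
http_request_duration_seconds_bucket{le="0.5"} 129 # {trace_id="4bf92f3577b34da6a3ce929d0e0e4736"} 0.23 1700000000.0
```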

### Configure Grafana Alloy

Enable exemplar forwarding in your Prometheus `remote_write`:


```alloy
prometheus.remote_write "default" {
  endpoint {
    url            = "https://prometheus-xxx.grafana.net/api/prom/push"
    send_exemplars = true

    basic_auth {
      username = "<PROMETHEUS_USERNAME>"
      password = "<GRAFANA_CLOUD_API_KEY>"
    }
  }
}
```

### Enable exemplars in dashboards

1. Edit a dashboard panel using metrics with exemplars.
2. Ensure the panel type is **Time series** (not Graph).
3. In the panel, toggle **Exemplars** on in the legend.

Exemplars appear as small diamond points on the graph.

### Troubleshoot exemplars


| Problem                   | Solution                                                            |
|---------------------------|---------------------------------------------------------------------|
| Exemplars not appearing   | Verify panel type is Time series and Exemplars toggle is enabled    |
| Exemplar links return 404 | Check data source configuration points to correct Tempo data source |
| No trace for exemplar     | Tail sampling may drop traces after exemplars are generated         |

## Configure traces to profiles

Link traces directly to profiles to see code-level performance for specific spans.

### Before you begin

- Your application is instrumented for both traces and profiles
- Spans include profile metadata (for span profiles) or use time-based correlation

### Configure span profiles (recommended)

Span profiles attach profiling data directly to trace spans.

**Go with Pyroscope:**


```go
import (
    otelpyroscope "github.com/grafana/otel-profiling-go"
    "github.com/grafana/pyroscope-go"
    "go.opentelemetry.io/otel"
)

// Start continuous profiling
pyroscope.Start(pyroscope.Config{
    ApplicationName: "my-service",
    ServerAddress:   "https://profiles-xxx.grafana.net",

    ProfileTypes: []pyroscope.ProfileType{
        pyroscope.ProfileCPU,
        pyroscope.ProfileInuseObjects,
        pyroscope.ProfileAllocObjects,
    },
})

// Enable span profiles by wrapping your existing OpenTelemetry tracer
// provider, so spans are annotated with profile IDs.
// tracerProvider is your application's OTel SDK tracer provider.
otel.SetTracerProvider(otelpyroscope.NewTracerProvider(tracerProvider))
```

Refer to [Span profiles documentation](/docs/pyroscope/latest/configure-client/trace-span-profiles/) for other languages.

### Configure the Tempo data source

Enable traces-to-profiles linking:

1. In Grafana Cloud, go to **Connections** > **Data sources**.
2. Select your Tempo data source.
3. Scroll to **Traces to profiles**.
4. Toggle **Enable**.
5. Select your Pyroscope data source.
6. Configure the profile type (for example, `process_cpu:cpu:nanoseconds:cpu:nanoseconds`).
7. Map span attributes to profile labels:
   
   - **Span attribute**: `service.name`
   - **Profile label**: `service_name`

### Test traces to profiles

1. Open a trace in Explore or Traces Drilldown.
2. Look for a **Profiles** link on spans.
3. Click the link to view the profile for that span.

If the link doesn’t appear, verify span profiles are being sent or check time-based correlation settings.

## Verify correlation is working

Use this checklist to confirm everything is configured.

### Shared labels and attributes

- Same label/attribute names across Prometheus, Loki, Tempo, and Pyroscope
- Label and attribute values match exactly (case-sensitive)
- Query each data source to verify labels and attributes exist

### Trace context

- Trace IDs appear in log entries
- Trace ID format is consistent (32-character hex)
- Logs from active traces contain valid trace IDs
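
A quick validity check for the trace IDs you find in logs, per the W3C Trace Context format (an illustrative helper):

```python
import re

# Canonical form: 32 lowercase hex characters (128 bits)
TRACE_ID_PATTERN = re.compile(r"^[0-9a-f]{32}$")

def is_valid_trace_id(trace_id: str) -> bool:
    """True for a 32-character lowercase hex ID that is not all zeros;
    the all-zero ID is invalid per the W3C Trace Context spec."""
    return bool(TRACE_ID_PATTERN.match(trace_id)) and set(trace_id) != {"0"}
```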

### Derived fields

- Loki data source has derived field configured
- Regex pattern matches your log format
- Trace IDs are clickable in log view
- Clicking opens the correct trace in Tempo

### Exemplars

- Application exports exemplars with trace IDs
- Alloy/collector forwards exemplars
- Dashboard panels show exemplar points
- Clicking exemplars opens traces

### Traces to profiles

- Tempo data source configured for traces to profiles
- Label mappings are correct
- Profiles link appears on trace spans
- Clicking opens profile for correct time range

## Next steps

- [Why correlation matters](/docs/grafana-cloud/telemetry-signals/use-signals-together/why-correlation-matters/) - Understand the value of correlation
- [Navigate between signals](/docs/grafana-cloud/telemetry-signals/use-signals-together/navigation-between-signals/) - Master Grafana UI navigation
- [Troubleshoot signal correlation](/docs/grafana-cloud/telemetry-signals/use-signals-together/troubleshooting/) - Solve common issues
- [Exemplars configuration](/docs/grafana-cloud/send-data/traces/configure/exemplars/) - Detailed exemplars setup
- [Traces to profiles](/docs/grafana-cloud/monitor-applications/profiles/traces-to-profiles/) - Detailed profiles linking setup
