Capture Kubernetes logs with OpenTelemetry Collector
Grafana Labs recommends sending OpenTelemetry logs with the OTLP protocol. However, some use cases require outputting logs to files or stdout:
- The OpenTelemetry SDKs for Go, Python, Ruby, JavaScript, and PHP do not provide a stable OTLP implementation for logs.
- Organizational constraints, often related to reliability practices, require using files for logs.
You can still collect file-based logs with the OpenTelemetry Collector. This guide shows how to capture logs emitted through Kubernetes stdout, and you can apply the same pattern to logs written to files.
Architecture
To correlate traces and metrics with logs, enrich logs with the same resource attributes and trace and span IDs.
First, add identifying resource attributes to logs, such as service.name, service.namespace, service.instance.id, and deployment.environment, along with trace_id and span_id.
Then, use the same metadata enrichment pipeline in the OpenTelemetry Collector, such as the Kubernetes Attributes Processor or the Resource Detection Processor.
OTLP export typically provides this enrichment automatically; for logs collected from files or stdout, you must add these attributes to the log lines yourself.
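A minimal Collector pipeline using both processors might look like the following sketch. The file path, metadata fields, and OTLP endpoint are illustrative assumptions, not required values:

```yaml
receivers:
  filelog:
    # Assumed path: where kubelet writes container stdout logs on most clusters
    include: [/var/log/pods/*/*/*.log]

processors:
  # Enriches log records with Kubernetes metadata (pod, namespace, and so on)
  k8sattributes:
    extract:
      metadata:
        - k8s.namespace.name
        - k8s.pod.name
        - k8s.container.name
  # Detects resource attributes from the environment
  resourcedetection:
    detectors: [env, system]

exporters:
  otlphttp:
    # Hypothetical endpoint: replace with your backend
    endpoint: https://otlp.example.com

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [k8sattributes, resourcedetection]
      exporters: [otlphttp]
```

The k8sattributes processor must run before any processor that relies on the enriched attributes, so place it early in the pipeline.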

To carry over resource attributes in log lines, use one of the following export patterns.
Export unstructured logs:
Emit plain-text log lines and parse them with regular expressions, for example:
2024-09-17T11:29:54 INFO [nio-8080-exec-1] c.e.OrderController : Order completed - service.name=order-processor, service.instance.id=i-123456, span_id=1d5f8ca3f9366fac...
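One way to parse such a line is the filelog receiver's regex_parser operator. The sketch below assumes the exact layout of the example line above (timestamp, level, thread, logger, message); adjust the regular expression to match your own format:

```yaml
receivers:
  filelog:
    include: [/var/log/pods/*/*/*.log]  # assumed path
    operators:
      - type: regex_parser
        # Named capture groups become log record attributes
        regex: '^(?P<timestamp>\S+)\s+(?P<severity>\w+)\s+\[(?P<thread>[^\]]+)\]\s+(?P<logger>\S+)\s+:\s+(?P<message>.*)$'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S'
        severity:
          parse_from: attributes.severity
```

Key-value pairs appended to the message (service.name, span_id, and so on) need an additional parsing step, for example a second regex_parser or a key_value_parser operator on the message attribute.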
Export structured logs:
Emit logs in a structured format such as JSON and parse them with the format's native parser, for example:
{"timestamp": "2024-09-17T11:29:54", "level": "INFO", "body":"Order completed", "logger": "c.e.OrderController", "service_name": "order-processor", "service_instance_id": "i-123456", "span_id":"1d5f8ca3f9366fac"...}
Both export patterns have advantages and disadvantages: