Pattern 3 - Kubernetes sidecars and daemon sets
When your workloads run on Kubernetes, you can run the OpenTelemetry Collector as a second container in your pods, following the “sidecar” pattern. This way, all the telemetry data from a pod is captured by a single sidecar and sent to different backends, or to a centralized collector; a pod manifest illustrating this is sketched after the sidecar configuration example below.
For non-multi-tenant setups, or in cases where a node only runs workloads from a single tenant, daemon sets are also a viable alternative, with potentially lower runtime and maintenance overhead.
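For illustration, a daemon set deployment could look like the following sketch. It assumes an “observability” namespace, a ConfigMap named otel-collector-agent-config holding the collector configuration, and an illustrative image tag; workloads then send their OTLP data to the collector listening on the node’s host port:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector-agent
  namespace: observability
spec:
  selector:
    matchLabels:
      app: otel-collector-agent
  template:
    metadata:
      labels:
        app: otel-collector-agent
    spec:
      containers:
        - name: otel-collector
          # Pin to a current collector release; the tag below is only illustrative
          image: otel/opentelemetry-collector-contrib:0.98.0
          args: ["--config=/etc/otelcol/config.yaml"]
          ports:
            # OTLP gRPC, reachable by workloads on the same node via the host port
            - containerPort: 4317
              hostPort: 4317
          volumeMounts:
            - name: config
              mountPath: /etc/otelcol
      volumes:
        - name: config
          configMap:
            name: otel-collector-agent-config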
In any case, it’s common practice to dedicate a namespace to your monitoring or observability tooling. We recommend running one deployment of the OpenTelemetry Collector with multiple replicas in that centralized namespace, receiving telemetry data from your entire fleet of sidecars or daemon sets. That way, a change to the general processing pipeline or to the telemetry backend can be applied at this layer, instead of having to redeploy all sidecars or daemon sets.
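As a sketch of what that centralized layer might look like, here is a minimal Deployment and Service. The “my-otelcol” Service name and “observability” namespace match the endpoint used by the sidecar configuration that follows, while the image tag, replica count, and ConfigMap name are assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-otelcol
  namespace: observability
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-otelcol
  template:
    metadata:
      labels:
        app: my-otelcol
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:0.98.0
          args: ["--config=/etc/otelcol/config.yaml"]
          ports:
            - containerPort: 4317   # OTLP gRPC from the sidecars or daemon sets
          volumeMounts:
            - name: config
              mountPath: /etc/otelcol
      volumes:
        - name: config
          configMap:
            name: my-otelcol-config
---
apiVersion: v1
kind: Service
metadata:
  name: my-otelcol                  # resolves to my-otelcol.observability.svc.cluster.local
  namespace: observability
spec:
  selector:
    app: my-otelcol
  ports:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317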
Example configuration for the collector running as a sidecar:
extensions:

receivers:
  # Scrape the application's Prometheus metrics endpoint in the same pod
  prometheus:
    config:
      scrape_configs:
        - job_name: "my-service"
          static_configs:
            - targets: ["0.0.0.0:9090"]
  # Accept OTLP traces from the instrumented application
  otlp:
    protocols:
      grpc:

processors:

exporters:
  # Forward everything to the central collector in the observability namespace.
  # If that collector is served without TLS inside the cluster, you may also
  # need to add "tls: { insecure: true }" here.
  otlp:
    endpoint: my-otelcol.observability.svc.cluster.local:4317

service:
  extensions: []
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: []
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlp]
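A pod carrying this sidecar might look like the following sketch, with the configuration above mounted from a ConfigMap. The application image, the ConfigMap name, and the collector image tag are assumptions; in practice this would normally be the pod template of your workload’s Deployment:
apiVersion: v1
kind: Pod
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  containers:
    - name: my-service
      image: my-service:latest            # your application, exposing Prometheus metrics on :9090
    - name: otel-collector                # the sidecar running the configuration above
      image: otel/opentelemetry-collector-contrib:0.98.0
      args: ["--config=/etc/otelcol/config.yaml"]
      volumeMounts:
        - name: otelcol-config
          mountPath: /etc/otelcol
  volumes:
    - name: otelcol-config
      configMap:
        name: my-service-otelcol-config   # holds the config.yaml shown above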
Example configuration for the collector in the central namespace:
extensions:
  # Basic auth credentials for the OTLP gateway, taken from environment variables
  basicauth/otlp:
    client_auth:
      username: "${OTLP_USER_ID}"
      password: "${TOKEN}"

receivers:
  # Accept OTLP from the sidecars and daemon sets
  otlp:
    protocols:
      grpc:

processors:

exporters:
  otlphttp:
    endpoint: https://otlp-gateway-prod-us-central-0.grafana.net/otlp
    auth:
      authenticator: basicauth/otlp

service:
  extensions: [basicauth/otlp]
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: []
      exporters: [otlphttp]
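The ${OTLP_USER_ID} and ${TOKEN} placeholders are expanded from environment variables when the collector starts, so the central collector’s deployment has to provide them. One way is to source them from a Kubernetes Secret; the Secret name and key names below are assumptions:
apiVersion: v1
kind: Secret
metadata:
  name: otlp-credentials
  namespace: observability
type: Opaque
stringData:
  username: "<your-otlp-user-id>"
  token: "<your-otlp-token>"
The collector container in the central deployment (sketched earlier) then references the Secret, for example:
# Added under the collector container spec of the central Deployment
env:
  - name: OTLP_USER_ID
    valueFrom:
      secretKeyRef:
        name: otlp-credentials
        key: username
  - name: TOKEN
    valueFrom:
      secretKeyRef:
        name: otlp-credentials
        key: token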