Pattern 2 - Telemetry data normalization

Applications are frequently instrumented for metrics, logs, and traces at different stages, which means they are not always labeled or tagged consistently. Having an OpenTelemetry Collector instance close to the workload allows an SRE to implement rules that ensure the telemetry data generated by the application follows a specific pattern or includes all the required basic information, such as pod name, pod namespace, and cluster region. The sketches after the example configuration show how the credentials are supplied and how the pod metadata could be added.

Example configuration:

```yaml
# Authenticate against the two backends with basic auth; the credentials
# are read from environment variables when the Collector starts.
extensions:
  basicauth/traces:
    client_auth:
      username: "${TRACES_USER_ID}"
      password: "${TOKEN}"
  basicauth/metrics:
    client_auth:
      username: "${METRICS_USER_ID}"
      password: "${TOKEN}"

# Accept OTLP over gRPC and scrape the application's Prometheus endpoint.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "my-service"
          static_configs:
            - targets: ["0.0.0.0:9090"]

  otlp:
    protocols:
      grpc:

processors:
  # Upsert guarantees every span carries the cluster attribute,
  # inserting it when missing and overwriting any existing value.
  attributes:
    actions:
      - key: cluster
        value: eu-west-1
        action: upsert

  # Add the same cluster label on the metrics side; this transform
  # targets a single metric, otelcol_process_uptime, as an example.
  metricstransform:
    transforms:
      - include: otelcol_process_uptime
        action: update
        operations:
          - action: add_label
            new_label: cluster
            new_value: eu-west-1

# Ship traces to Tempo and metrics to a Prometheus remote-write endpoint,
# authenticating with the basicauth extensions defined above.
exporters:
  otlp/tempo:
    endpoint: tempo-us-central1.grafana.net:443
    auth:
      authenticator: basicauth/traces

  prometheusremotewrite:
    endpoint: https://prometheus-blocks-prod-us-central1.grafana.net/api/prom/push
    auth:
      authenticator: basicauth/metrics

service:
  extensions: [basicauth/traces, basicauth/metrics]
  pipelines:
    metrics:
      receivers: [otlp, prometheus]
      processors: [metricstransform]
      exporters: [prometheusremotewrite]
    traces:
      receivers: [otlp]
      processors: [attributes]
      exporters: [otlp/tempo]
```
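
The `${TRACES_USER_ID}`, `${METRICS_USER_ID}`, and `${TOKEN}` references are expanded from the Collector's environment at startup. One way to supply them is a docker-compose sketch like the one below; the image tag, file paths, and credential values are illustrative assumptions, and the contrib image is used because it bundles the basicauth extension:

```yaml
# docker-compose.yaml sketch; the values are placeholders, not real credentials.
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otelcol/config.yaml"]
    volumes:
      # Mount the configuration shown above into the container.
      - ./config.yaml:/etc/otelcol/config.yaml
    environment:
      TRACES_USER_ID: "123456"
      METRICS_USER_ID: "654321"
      TOKEN: "replace-with-your-api-token"
```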
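
The example so far normalizes only the cluster label. For the pod name and namespace mentioned at the start, the k8sattributes processor from the Collector contrib distribution is the usual tool. A minimal sketch, assuming the Collector runs inside the Kubernetes cluster with RBAC permission to read pod metadata; wiring it into the pipelines this way is an illustration, not part of the example above:

```yaml
processors:
  # Look up pod metadata from the Kubernetes API server and attach it to
  # incoming telemetry (assumes an in-cluster deployment with RBAC access).
  k8sattributes:
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name

service:
  pipelines:
    metrics:
      receivers: [otlp, prometheus]
      # k8sattributes runs first so the pod metadata is in place before
      # the cluster label is added.
      processors: [k8sattributes, metricstransform]
      exporters: [prometheusremotewrite]
    traces:
      receivers: [otlp]
      processors: [k8sattributes, attributes]
      exporters: [otlp/tempo]
```

Processors run in the order they are listed, so anything k8sattributes adds is visible to the later processors and to the exporter of each pipeline.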