

The Alloy OpenTelemetry Engine

You can run the OTel Engine using the CLI, Helm chart, or service installation.

Prerequisites

There are no additional prerequisites. The tools needed to run the OTel Engine are shipped within Alloy.

Before you start, validate your OpenTelemetry YAML configuration with the validate command:

shell
alloy otel validate --config=<CONFIG_FILE>

Although the OTel Engine is an experimental feature, it isn't gated behind an experimental feature flag the way experimental components are. This keeps the command-line experience compatible with the OpenTelemetry Collector.

Run with the CLI

The OTel Engine is available under the alloy otel command. The CLI is the easiest way to experiment locally or on a single host. Refer to the OTel CLI documentation for more information.

The following example configuration file accepts telemetry over OTLP and sends it to the configured backend:

YAML
extensions:
  basicauth/my_auth:
    client_auth:
      username: <USERNAME>
      password: <PASSWORD>

receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}

processors:
  batch:
    timeout: 1s
    send_batch_size: 512

exporters:
  otlphttp/my_backend:
    endpoint: <URL>
    auth:
      authenticator: basicauth/my_auth

service:
  extensions: [basicauth/my_auth]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/my_backend]

Replace the following:

  • <USERNAME>: Your username. If you’re using Grafana Cloud, this is your Grafana Cloud instance ID.
  • <PASSWORD>: Your password. If you’re using Grafana Cloud, this is your Grafana Cloud API token.
  • <URL>: The URL to export data to. If you’re using Grafana Cloud, this is your Grafana Cloud OTLP endpoint URL.

For more information about where to find these values for Grafana Cloud, refer to Send data using OpenTelemetry Protocol.

To start the OTel Engine, run the following command:

shell
alloy otel --config=<CONFIG_FILE> [<FLAGS> ...]

Alloy then accepts incoming OTLP data on 0.0.0.0:4317 for gRPC and 0.0.0.0:4318 for HTTP. The collector's own metrics are available at the default collector endpoint, 0.0.0.0:8888/metrics. Because the Default Engine isn't running, its UI and metrics endpoint at 0.0.0.0:12345 aren't available.
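You can smoke-test a local instance from another terminal. The following sketch assumes Alloy is running on the same host with the example configuration above; the empty resourceSpans payload is a minimal valid OTLP/HTTP request:

shell
# Send a minimal OTLP/HTTP trace payload to the HTTP receiver on port 4318.
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{"resourceSpans":[]}'

# Scrape the collector's own metrics from the default telemetry endpoint.
curl http://localhost:8888/metrics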

Run the Alloy Engine extension

You can also run the OTel Engine with the Default Engine. Modify your YAML configuration to include the alloyengine extension. This extension accepts a path to the Default Engine configuration and starts a Default Engine pipeline alongside the OTel Engine pipeline.

The following example shows the configuration:

YAML
extensions:
  basicauth/my_auth:
    client_auth:
      username: <USERNAME>
      password: <PASSWORD>
  alloyengine:
    config:
      file: <ALLOY_CONFIG_PATH>
    flags:
      server.http.listen-addr: 0.0.0.0:12345

receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}

processors:
  batch:
    timeout: 1s
    send_batch_size: 512

exporters:
  otlphttp/my_backend:
    endpoint: <URL>
    auth:
      authenticator: basicauth/my_auth

service:
  extensions: [basicauth/my_auth, alloyengine]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/my_backend]

Replace the following:

  • <ALLOY_CONFIG_PATH>: The path to your Default Engine configuration file.
  • <USERNAME>: Your username. If you’re using Grafana Cloud, this is your Grafana Cloud instance ID.
  • <PASSWORD>: Your password. If you’re using Grafana Cloud, this is your Grafana Cloud API token.
  • <URL>: The URL to export data to. If you’re using Grafana Cloud, this is your Grafana Cloud OTLP endpoint URL.

This example adds the alloyengine block in the extension declarations and enables the extension in the service block. You can then run Alloy with the exact same command as before:

shell
alloy otel --config=<CONFIG_FILE> [<FLAGS> ...]

This command starts both the Default Engine and OTel Engine. The output of both engines is visible in the logs. You can access the Default Engine UI and metrics on port 12345.
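To confirm that both engines are up, you can probe both HTTP servers. This sketch assumes a local instance started with the configuration above:

shell
# Default Engine metrics (and UI) are served on port 12345.
curl http://localhost:12345/metrics

# OTel Engine telemetry metrics remain on port 8888.
curl http://localhost:8888/metrics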

Run with the OpenTelemetry Collector Helm chart

Use the upstream OpenTelemetry Collector Helm chart to run the OTel Engine. This approach delivers an identical upstream collector experience. It also ensures you get improvements, bug fixes, and security updates as they’re released.

The following example Helm values.yaml incorporates the same configuration seen above into a Kubernetes Deployment.

Note

In this configuration, binding port 8888 to 0.0.0.0 makes the metrics endpoint listen on all interfaces inside the Pod. This lets other Pods in the cluster reach it without using kubectl port-forward.

The configuration also sets the command.name key to bin/otelcol, the binary in the Alloy image that runs the alloy otel subcommand. This setting is necessary because the Helm chart doesn't support running custom subcommands.

YAML
image:
  repository: grafana/alloy
  tag: latest

command: 
  name: "bin/otelcol"

mode: deployment

ports:
  metrics:
    enabled: true

alternateConfig:
  extensions:
    health_check:
      endpoint: 0.0.0.0:13133 # This is necessary for the Kubernetes liveness probe
    basicauth/my_auth:
      client_auth:
        username: <USERNAME>
        password: <PASSWORD>

  receivers:
    otlp:
      protocols:
        grpc: {}
        http: {}

  processors:
    batch:
      timeout: 1s
      send_batch_size: 512

  exporters:
    otlphttp/my_backend:
      endpoint: <URL>
      auth:
        authenticator: basicauth/my_auth

  service:
    telemetry:
      metrics:
        readers:
          - pull:
              exporter:
                prometheus:
                  host: 0.0.0.0 
                  port: 8888
    extensions: [basicauth/my_auth, health_check]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlphttp/my_backend]

Replace the following:

  • <USERNAME>: Your username. If you’re using Grafana Cloud, this is your Grafana Cloud instance ID.
  • <PASSWORD>: Your password. If you’re using Grafana Cloud, this is your Grafana Cloud API token.
  • <URL>: The URL to export data to. If you’re using Grafana Cloud, this is your Grafana Cloud OTLP endpoint URL.
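With the values above saved to a file such as values.yaml, you can install the chart from the upstream repository. The release name my-collector is an example, not a requirement:

shell
# Add the upstream OpenTelemetry Helm repository (one-time setup):
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Install the collector chart with the values file:
helm install my-collector open-telemetry/opentelemetry-collector -f values.yaml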

The Helm chart ships with a default OpenTelemetry Collector configuration in the config field, which the upstream Helm chart documentation describes. To completely override that default configuration, use the alternateConfig field. The example above uses alternateConfig so the configuration matches the other examples in this document and doesn't inherit any of the chart's defaults. Alternatively, omit both config and alternateConfig to use the default configuration as-is, or provide your own config block to merge with the chart's defaults.

Refer to the upstream documentation for more information about configuring the Helm chart for your use case.

Run with service installation

Service installation support for systemd, launchd, and similar systems isn't included in the initial experimental release. Support for service installers will be added as the feature matures. In the meantime, use the CLI or Helm options for testing.

Considerations

  1. Storage configuration: The Default Engine accepts the --storage.path flag to set a base directory for components to store data on disk. The OTel Engine uses the filestorage extension instead of a CLI flag. Refer to the upstream documentation for more information.
  2. Server ports: The Default Engine exposes its HTTP server on port 12345. The OTel Engine exposes its HTTP server on port 8888. The OTel Engine HTTP server doesn’t expose a UI, support bundles, or reload endpoint functionality like the Default Engine does.
  3. Fleet management: Grafana Fleet Management doesn’t support the OTel Engine yet. You must define and manage the input configuration manually.
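As a sketch of the storage configuration described in item 1, you declare the filestorage extension in the OTel configuration and reference it from a component that persists state. The directory path and the sending_queue usage below are illustrative assumptions, not defaults:

YAML
extensions:
  file_storage:
    directory: /var/lib/alloy/otel-storage # Example path. It must exist and be writable.

exporters:
  otlphttp/my_backend:
    endpoint: <URL>
    sending_queue:
      enabled: true
      storage: file_storage # Persist the sending queue across restarts.

service:
  extensions: [file_storage]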

Next steps