The Alloy OpenTelemetry Engine
You can run the OTel Engine using the CLI, Helm chart, or service installation.
Prerequisites
There are no additional prerequisites. The tools needed to run the OTel Engine are shipped within Alloy.
Before you start, validate your OpenTelemetry YAML configuration with the validate command:
alloy otel validate --config=<CONFIG_FILE>
While this is an experimental feature, it isn't hidden behind an experimental feature flag like regular components.
This maintains compatibility with the OpenTelemetry Collector.
Run with the CLI
The OTel Engine is available under the alloy otel command.
The CLI is the easiest way to experiment locally or on a single host.
Refer to the OTel CLI documentation for more information.
The following example configuration file accepts telemetry over OTLP and sends it to the configured backend:
extensions:
  basicauth/my_auth:
    client_auth:
      username: <USERNAME>
      password: <PASSWORD>
receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}
processors:
  batch:
    timeout: 1s
    send_batch_size: 512
exporters:
  otlphttp/my_backend:
    endpoint: <URL>
    auth:
      authenticator: basicauth/my_auth
service:
  extensions: [basicauth/my_auth]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/my_backend]
Replace the following:
- <USERNAME>: Your username. If you're using Grafana Cloud, this is your Grafana Cloud instance ID.
- <PASSWORD>: Your password. If you're using Grafana Cloud, this is your Grafana Cloud API token.
- <URL>: The URL to export data to. If you're using Grafana Cloud, this is your Grafana Cloud OTLP endpoint URL.
For more information about where to find these values for Grafana Cloud, refer to Send data using OpenTelemetry Protocol.
To start the OTel Engine, run the following command:
alloy otel --config=<CONFIG_FILE> [<FLAGS> ...]
Alloy then accepts incoming OTLP data on 0.0.0.0:4317 for gRPC and 0.0.0.0:4318 for HTTP requests.
Metrics are also available on the default collector port and endpoint at 0.0.0.0:8888/metrics.
Since the Default Engine isn't running, the UI and metrics aren't available on 0.0.0.0:12345.
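To verify that the engine is up, you can scrape its internal metrics directly. The following check assumes you run it on the same host as Alloy and that the defaults described above are unchanged:

```shell
# Fetch the collector's internal Prometheus metrics from the default
# 0.0.0.0:8888/metrics endpoint on the local host.
curl http://localhost:8888/metrics
```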
Run the Alloy Engine extension
You can also run the OTel Engine with the Default Engine.
Modify your YAML configuration to include the alloyengine extension.
This extension accepts a path to the Default Engine configuration and starts a Default Engine pipeline alongside the OTel Engine pipeline.
The following example shows the configuration:
extensions:
  basicauth/my_auth:
    client_auth:
      username: <USERNAME>
      password: <PASSWORD>
  alloyengine:
    config:
      file: <ALLOY_CONFIG_PATH>
    flags:
      server.http.listen-addr: 0.0.0.0:12345
receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}
processors:
  batch:
    timeout: 1s
    send_batch_size: 512
exporters:
  otlphttp/my_backend:
    endpoint: <URL>
    auth:
      authenticator: basicauth/my_auth
service:
  extensions: [basicauth/my_auth, alloyengine]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/my_backend]
Replace the following:
- <ALLOY_CONFIG_PATH>: The path to your Default Engine configuration file.
- <USERNAME>: Your username. If you're using Grafana Cloud, this is your Grafana Cloud instance ID.
- <PASSWORD>: Your password. If you're using Grafana Cloud, this is your Grafana Cloud API token.
- <URL>: The URL to export data to. If you're using Grafana Cloud, this is your Grafana Cloud OTLP endpoint URL.
This example adds the alloyengine block in the extension declarations and enables the extension in the service block.
You can then run Alloy with the exact same command as before:
alloy otel --config=<CONFIG_FILE> [<FLAGS> ...]
This command starts both the Default Engine and OTel Engine.
The output of both engines is visible in the logs.
You can access the Default Engine UI and metrics on port 12345.
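The file referenced by <ALLOY_CONFIG_PATH> uses Alloy configuration syntax rather than OpenTelemetry YAML. As a minimal, hypothetical sketch, a Default Engine configuration that only adjusts logging could look like the following; real configurations typically declare components and wire them into pipelines as well:

```alloy
// Minimal, hypothetical Default Engine configuration.
// A real configuration usually declares components and connects them into pipelines.
logging {
  level  = "info"
  format = "logfmt"
}
```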
Run with the OpenTelemetry Collector Helm chart
Use the upstream OpenTelemetry Collector Helm chart to run the OTel Engine. This approach delivers an identical upstream collector experience. It also ensures you get improvements, bug fixes, and security updates as they’re released.
The following example Helm values.yaml incorporates the same configuration seen above into a Kubernetes Deployment.
Note
In this configuration, binding port 8888 to 0.0.0.0 makes the metrics endpoint listen on all interfaces inside the Pod. This lets other Pods in the cluster reach it without using kubectl port-forward.
The configuration also sets the command.name key to bin/otelcol. This is the binary that runs the alloy otel sub-command. The Helm chart doesn't expose custom commands, so this setting is necessary.
image:
  repository: grafana/alloy
  tag: latest
command:
  name: "bin/otelcol"
mode: deployment
ports:
  metrics:
    enabled: true
alternateConfig:
  extensions:
    health_check:
      endpoint: 0.0.0.0:13133 # This is necessary for the k8s liveness check
    basicauth/my_auth:
      client_auth:
        username: <USERNAME>
        password: <PASSWORD>
  receivers:
    otlp:
      protocols:
        grpc: {}
        http: {}
  processors:
    batch:
      timeout: 1s
      send_batch_size: 512
  exporters:
    otlphttp/my_backend:
      endpoint: <URL>
      auth:
        authenticator: basicauth/my_auth
  service:
    telemetry:
      metrics:
        readers:
          - pull:
              exporter:
                prometheus:
                  host: 0.0.0.0
                  port: 8888
    extensions: [basicauth/my_auth, health_check]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlphttp/my_backend]
Replace the following:
- <USERNAME>: Your username. If you're using Grafana Cloud, this is your Grafana Cloud instance ID.
- <PASSWORD>: Your password. If you're using Grafana Cloud, this is your Grafana Cloud API token.
- <URL>: The URL to export data to. If you're using Grafana Cloud, this is your Grafana Cloud OTLP endpoint URL.
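With these values saved to a file, you can install the chart as you would the upstream collector. The following commands assume the upstream chart repository, a values file named values.yaml, and example release and namespace names:

```shell
# Add the upstream OpenTelemetry Helm chart repository and refresh the index.
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Install the collector chart with the Alloy image and configuration above.
# "alloy-otel" and "monitoring" are example release and namespace names.
helm install alloy-otel open-telemetry/opentelemetry-collector \
  --namespace monitoring --create-namespace \
  --values values.yaml
```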
The Helm chart ships with a default OpenTelemetry Collector configuration in the config field.
The upstream Helm chart documentation describes this field.
If you want to completely override that default configuration, use the alternateConfig field.
In the example above, the alternateConfig field ensures the configuration matches the other examples in this document and doesn’t inherit any of the chart’s defaults.
Alternatively, you can omit both config and alternateConfig to use the default configuration as-is.
You can also provide your own config block that merges with the chart’s default configuration.
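For example, assuming you only want to add a backend while keeping the chart's default receivers and processors, a partial config block along these lines is merged into the defaults instead of replacing them:

```yaml
# Hypothetical partial override: merged with the chart's default configuration.
# Only the exporter and the traces pipeline's exporter list are added here.
config:
  exporters:
    otlphttp/my_backend:
      endpoint: <URL>
  service:
    pipelines:
      traces:
        exporters: [otlphttp/my_backend]
```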
Refer to the upstream documentation for more information about how to configure the Helm chart for your use case.
Run with service installation
Service installation support for systemd, launchd, and similar systems isn't included in the initial experimental release. Service installers will work seamlessly with the OTel Engine as the feature matures. In the meantime, use the CLI or Helm chart options for testing.
Considerations
- Storage configuration: The Default Engine accepts the --storage.path flag to set a base directory for components to store data on disk. The OTel Engine uses the filestorage extension instead of a CLI flag, as shown in the sketch after this list. Refer to the upstream documentation for more information.
- Server ports: The Default Engine exposes its HTTP server on port 12345. The OTel Engine exposes its HTTP server on port 8888. The OTel Engine HTTP server doesn't expose a UI, support bundles, or reload endpoint functionality like the Default Engine does.
- Fleet management: Grafana Fleet Management doesn't support the OTel Engine yet. You must define and manage the input configuration manually.
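As a sketch of the storage consideration above, assuming the upstream file storage extension, the on-disk directory is set in the YAML configuration rather than with a CLI flag. The directory path below is an example:

```yaml
# Hypothetical file storage configuration; components that need persistence
# reference this extension instead of a --storage.path CLI flag.
extensions:
  file_storage:
    directory: /var/lib/alloy/otel-storage

service:
  extensions: [file_storage]
```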
Next steps
- Refer to OpenTelemetry in Alloy for information about the included components.
- Refer to the OTel CLI reference for more information about the OTel CLI.


