Get Started with the Alloy OpenTelemetry Engine
You can run the OTel Engine using the CLI, Helm chart, or service installation.
Prerequisites
There are no additional prerequisites. The tools needed to run the OTel Engine are shipped within Alloy.
Before you start, validate your OpenTelemetry YAML configuration with the validate command:
```shell
./build/alloy otel validate --config=<CONFIG_FILE>
```

While this is an experimental feature, it isn't hidden behind an experimental feature flag like regular components are, in order to keep compatibility with the OpenTelemetry Collector.
Run with the CLI
The OTel Engine is available under the `alloy otel` command.
The CLI is the easiest way to experiment locally or on a single host.
Refer to the OTel CLI documentation for more information.
The following example configuration file accepts telemetry over OTLP and sends it to the configured backend:
```yaml
extensions:
  basicauth/my_auth:
    client_auth:
      username: <USERNAME>
      password: <PASSWORD>

receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}

processors:
  batch:
    timeout: 1s
    send_batch_size: 512

exporters:
  otlphttp/my_backend:
    endpoint: <URL>
    auth:
      authenticator: basicauth/my_auth

service:
  extensions: [basicauth/my_auth]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/my_backend]
```

Replace the following:
- `<USERNAME>`: Your username. If you are using Grafana Cloud, this is your Grafana Cloud instance ID.
- `<PASSWORD>`: Your password. If you are using Grafana Cloud, this is your Grafana Cloud API token.
- `<URL>`: The URL to export data to. If you are using Grafana Cloud, this is your Grafana Cloud OTLP endpoint URL.
For more information about where to find these values for Grafana Cloud, refer to Send data using OpenTelemetry Protocol.
To start the OTel Engine, run the following command:
```shell
alloy otel --config=<CONFIG_FILE> [<FLAGS> ...]
```

Alloy then accepts incoming OTLP data on 0.0.0.0:4317 for gRPC requests and 0.0.0.0:4318 for HTTP requests.
Metrics are also available on the default Collector port and endpoint, `0.0.0.0:8888/metrics`.
Because the Default Engine isn't running, the UI and the metrics endpoint aren't available on `0.0.0.0:12345`.
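If you need the internal metrics on a different host or port, you can configure the service telemetry reader in your YAML configuration. This is the standard upstream Collector mechanism; the values in the following sketch mirror the defaults:

```yaml
service:
  telemetry:
    metrics:
      readers:
        - pull:
            exporter:
              prometheus:
                host: 0.0.0.0
                port: 8888
```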
Run the Alloy Engine extension
You can also run the OTel Engine with the Default Engine.
Modify your YAML configuration to include the `alloyengine` extension, which accepts a path to the Default Engine configuration and starts a Default Engine pipeline alongside the OTel Engine pipeline.
The following example shows the configuration:
```yaml
extensions:
  basicauth/my_auth:
    client_auth:
      username: <USERNAME>
      password: <PASSWORD>
  alloyengine:
    config:
      file: <ALLOY_CONFIG_PATH>
    flags:
      server.http.listen-addr: 0.0.0.0:12345

receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}

processors:
  batch:
    timeout: 1s
    send_batch_size: 512

exporters:
  otlphttp/my_backend:
    endpoint: <URL>
    auth:
      authenticator: basicauth/my_auth

service:
  extensions: [basicauth/my_auth, alloyengine]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/my_backend]
```

Replace the following:
- `<ALLOY_CONFIG_PATH>`: The path to your Default Engine configuration file.
- `<USERNAME>`: Your username. If you are using Grafana Cloud, this is your Grafana Cloud instance ID.
- `<PASSWORD>`: Your password. If you are using Grafana Cloud, this is your Grafana Cloud API token.
- `<URL>`: The URL to export data to. If you are using Grafana Cloud, this is your Grafana Cloud OTLP endpoint URL.
This example adds the `alloyengine` block to the extension declarations and enables the extension in the `service` block.
You can then run Alloy with the exact same command as before:
```shell
alloy otel --config=<CONFIG_FILE> [<FLAGS> ...]
```

This command starts both the Default Engine and the OTel Engine.
The output of both engines is visible in the logs.
You can access the Default Engine UI and metrics on port 12345.
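One quick way to confirm both engines are running is to query each HTTP endpoint from the host. The commands below are a sketch and assume a local run on the default ports described above:

```shell
# OTel Engine internal metrics, served on the default Collector port 8888
curl -s http://localhost:8888/metrics | head

# Default Engine metrics, served by the alloyengine extension on port 12345
curl -s http://localhost:12345/metrics | head
```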
Run with the OpenTelemetry Collector Helm chart
Use the upstream OpenTelemetry Collector Helm chart to run the OTel Engine. This delivers an identical upstream Collector experience and ensures you get improvements, bug fixes, and security updates as they're released.
The following example Helm `values.yaml` incorporates the same configuration shown above into a Kubernetes deployment.
Note

In this configuration, binding port `8888` to `0.0.0.0` makes the metrics endpoint listen on all interfaces inside the Pod, so other Pods in the cluster can reach it without using `kubectl port-forward`. The configuration also sets the `command.name` key to `bin/otelcol`. This is the binary that runs the `alloy otel` subcommand. The Helm chart doesn't expose custom commands, so this setting is necessary.
```yaml
image:
  repository: grafana/alloy
  tag: latest
command:
  name: "bin/otelcol"
mode: deployment
ports:
  metrics:
    enabled: true
alternateConfig:
  extensions:
    health_check:
      endpoint: 0.0.0.0:13133 # This is necessary for the Kubernetes liveness check
    basicauth/my_auth:
      client_auth:
        username: <USERNAME>
        password: <PASSWORD>
  receivers:
    otlp:
      protocols:
        grpc: {}
        http: {}
  processors:
    batch:
      timeout: 1s
      send_batch_size: 512
  exporters:
    otlphttp/my_backend:
      endpoint: <URL>
      auth:
        authenticator: basicauth/my_auth
  service:
    telemetry:
      metrics:
        readers:
          - pull:
              exporter:
                prometheus:
                  host: 0.0.0.0
                  port: 8888
    extensions: [basicauth/my_auth, health_check]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlphttp/my_backend]
```

Replace the following:
- `<USERNAME>`: Your username. If you are using Grafana Cloud, this is your Grafana Cloud instance ID.
- `<PASSWORD>`: Your password. If you are using Grafana Cloud, this is your Grafana Cloud API token.
- `<URL>`: The URL to export data to. If you are using Grafana Cloud, this is your Grafana Cloud OTLP endpoint URL.
The Helm chart ships with a default OpenTelemetry Collector configuration in the `config` field, which is described in the upstream Helm chart documentation.
If you want to completely override that default configuration, you can use the `alternateConfig` field.
The preceding example uses the `alternateConfig` field so that the configuration matches the other examples in this guide and doesn't inherit any of the chart's defaults.
Alternatively, you can omit both `config` and `alternateConfig` to use the default configuration as-is, or provide your own `config` block that's merged with the chart's default configuration.
Refer to the upstream documentation for more information about how to configure the Helm chart for your use case.
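Assuming the values above are saved as `values.yaml`, a typical install with the upstream chart might look like the following. The repository alias and the release name `alloy-otel` are illustrative:

```shell
# Add the upstream OpenTelemetry Helm repository (alias is illustrative)
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Install the opentelemetry-collector chart with the custom values file;
# "alloy-otel" is a placeholder release name
helm install alloy-otel open-telemetry/opentelemetry-collector -f values.yaml
```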
Run with service installation
Service installation support for systemd, launchd, and similar systems isn’t included in the initial experimental release. Service installers will work seamlessly with the OTel Engine as the feature matures. In the meantime, use the CLI or Helm options for testing.
Considerations
- Storage configuration: The Default Engine accepts the `--storage.path` flag to set a base directory for components to store data on disk. The OTel Engine uses the `filestorage` extension instead of a CLI flag. Refer to the upstream documentation for more information.
- Server ports: The Default Engine exposes its HTTP server on port `12345`. The OTel Engine exposes its HTTP server on port `8888`. The OTel Engine HTTP server doesn't expose a UI, support bundles, or a reload endpoint like the Default Engine does.
- Fleet management: Grafana Fleet Management isn't supported yet for the OTel Engine. You must define and manage the input configuration manually.
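To sketch the storage difference, the OTel Engine equivalent of `--storage.path` is a storage extension declared in the YAML configuration and enabled in the `service` block. The extension name and directory below are an example, assuming the upstream `file_storage` extension:

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/storage  # example path; components persist data here

service:
  extensions: [file_storage]
```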



