otelcol.exporter.otlp
otelcol.exporter.otlp accepts telemetry data from other otelcol components and writes it over the network using the OTLP gRPC protocol.
Note
`otelcol.exporter.otlp` is a wrapper over the upstream OpenTelemetry Collector `otlp` exporter. Bug reports or feature requests will be redirected to the upstream repository, if necessary.
You can specify multiple otelcol.exporter.otlp components by giving them different labels.
Usage
```alloy
otelcol.exporter.otlp "<LABEL>" {
  client {
    endpoint = "<HOST>:<PORT>"
  }
}
```

Arguments
You can use the following argument with otelcol.exporter.otlp:
| Name | Type | Description | Default | Required |
|---|---|---|---|---|
| timeout | duration | Time to wait before marking a request as failed. | "5s" | no |
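For example, a minimal sketch that raises the request timeout for a slow backend. The label and endpoint are placeholders, not values from this document:

```alloy
otelcol.exporter.otlp "slow_backend" {
  // Allow up to 10 seconds before a request is marked as failed.
  timeout = "10s"

  client {
    endpoint = "collector.example.com:4317"
  }
}
```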
Blocks
You can use the following blocks with otelcol.exporter.otlp:
| Block | Description | Required |
|---|---|---|
| client | Configures the gRPC client to send telemetry data to. | yes |
| client > keepalive | Configures keepalive settings for the gRPC client. | no |
| client > tls | Configures TLS for the gRPC client. | no |
| client > tls > tpm | Configures TPM settings for the TLS key_file. | no |
| debug_metrics | Configures the metrics that this component generates to monitor its state. | no |
| retry_on_failure | Configures the retry mechanism for failed requests. | no |
| sending_queue | Configures batching of data before sending. | no |
| sending_queue > batch | Configures batching requests based on a timeout and a minimum number of items. | no |
The > symbol indicates deeper levels of nesting.
For example, client > tls refers to a tls block defined inside a client block.
client
The client block is required. It configures the gRPC client used by the component.
The following arguments are supported:
| Name | Type | Description | Default | Required |
|---|---|---|---|---|
| endpoint | string | host:port to send telemetry data to. | | yes |
| auth | capsule(otelcol.Handler) | Handler from an otelcol.auth component to use for authenticating requests. | | no |
| authority | string | Overrides the default :authority header in gRPC requests from the gRPC client. | | no |
| balancer_name | string | Which gRPC client-side load balancer to use for requests. | "round_robin" | no |
| compression | string | Compression mechanism to use for requests. | "gzip" | no |
| headers | map(string) | Additional headers to send with the request. | {} | no |
| read_buffer_size | string | Size of the read buffer the gRPC client uses for reading server responses. | | no |
| wait_for_ready | boolean | Waits for the gRPC connection to be in the READY state before sending data. | false | no |
| write_buffer_size | string | Size of the write buffer the gRPC client uses for writing requests. | "512KiB" | no |
By default, requests are compressed with Gzip.
The compression argument controls which compression mechanism to use. Supported strings are:
- "gzip"
- "zlib"
- "deflate"
- "snappy"
- "zstd"
If you set compression to "none" or an empty string "", the requests aren’t compressed.
The supported values for balancer_name are listed in the gRPC documentation on Load balancing:
- pick_first: Tries to connect to the first address. It uses that address for all RPCs if the connection succeeds. If it fails, it tries the next address and keeps trying until one connection is successful. Because of this, all RPCs are sent to the same backend.
- round_robin: Connects to all the addresses it sees and sends an RPC to each backend one at a time in order. For example, the first RPC is sent to backend-1, the second RPC is sent to backend-2, and the third RPC is sent to backend-1.
The :authority header in gRPC specifies the host to which the request is being sent.
It’s similar to the Host header in HTTP requests.
By default, the value for :authority is derived from the endpoint URL used for the gRPC call.
Overriding :authority could be useful when routing traffic using a proxy like Envoy, which makes routing decisions based on the value of the :authority header.
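As an illustration, the following sketch overrides both the compression mechanism and the :authority header when routing through a proxy. The label, endpoint, and authority values are hypothetical placeholders:

```alloy
otelcol.exporter.otlp "via_proxy" {
  client {
    // Traffic is sent to the proxy address.
    endpoint = "proxy.example.com:4317"

    // The proxy routes on :authority, so present the upstream host instead.
    authority = "collector.internal:4317"

    // Use zstd instead of the default gzip.
    compression = "zstd"
  }
}
```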
An HTTP proxy can be configured through the following environment variables:
- HTTPS_PROXY
- NO_PROXY
The HTTPS_PROXY environment variable specifies a URL to use for proxying requests.
Connections to the proxy are established via the HTTP CONNECT method.
The NO_PROXY environment variable is an optional list of comma-separated hostnames for which the HTTPS proxy should not be used.
Each hostname can be provided as an IP address (1.2.3.4), an IP address in CIDR notation (1.2.3.4/8), a domain name (example.com), or *.
A domain name matches that domain and all subdomains.
A domain name with a leading “.” (.example.com) matches subdomains only.
NO_PROXY is only read when HTTPS_PROXY is set.
Because otelcol.exporter.otlp uses gRPC, the configured proxy server must be able to handle and proxy HTTP/2 traffic.
keepalive
The keepalive block configures keepalive settings for gRPC client connections.
The following arguments are supported:
| Name | Type | Description | Default | Required |
|---|---|---|---|---|
| ping_wait | duration | How often to ping the server after no activity. | | no |
| ping_response_timeout | duration | Time to wait before closing inactive connections if the server doesn't respond to a ping. | | no |
| ping_without_stream | boolean | Send pings even if there is no active stream request. | | no |
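For example, a sketch that keeps otherwise-idle connections alive, which can help when middleboxes drop quiet connections. The label, endpoint, and durations are illustrative assumptions:

```alloy
otelcol.exporter.otlp "long_lived" {
  client {
    endpoint = "collector.example.com:4317"

    keepalive {
      ping_wait             = "30s" // ping after 30 seconds of inactivity
      ping_response_timeout = "10s" // close the connection if no reply within 10 seconds
      ping_without_stream   = true  // ping even when no stream is active
    }
  }
}
```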
tls
The tls block configures TLS settings used for the connection to the gRPC server.
The following arguments are supported:
| Name | Type | Description | Default | Required |
|---|---|---|---|---|
| ca_file | string | Path to the CA file. | | no |
| ca_pem | string | CA PEM-encoded text to validate the server with. | | no |
| cert_file | string | Path to the TLS certificate. | | no |
| cert_pem | string | Certificate PEM-encoded text for client authentication. | | no |
| cipher_suites | list(string) | A list of TLS cipher suites that the TLS transport can use. | [] | no |
| curve_preferences | list(string) | Set of elliptic curves to use in a handshake. | [] | no |
| include_system_ca_certs_pool | boolean | Whether to load the system certificate authorities pool alongside the certificate authority. | false | no |
| insecure_skip_verify | boolean | Ignores insecure server TLS certificates. | | no |
| insecure | boolean | Disables TLS when connecting to the configured server. | | no |
| key_file | string | Path to the TLS certificate key. | | no |
| key_pem | secret | Key PEM-encoded text for client authentication. | | no |
| max_version | string | Maximum acceptable TLS version for connections. | "TLS 1.3" | no |
| min_version | string | Minimum acceptable TLS version for connections. | "TLS 1.2" | no |
| reload_interval | duration | The duration after which the certificate is reloaded. | "0s" | no |
| server_name | string | Verifies the hostname of server certificates when set. | | no |
If the server doesn't support TLS, you must set the insecure argument to true to disable TLS for connections to the server.
If you set reload_interval to "0s", the certificate is never reloaded.
The following pairs of arguments are mutually exclusive and can’t both be set simultaneously:
- ca_pem and ca_file
- cert_pem and cert_file
- key_pem and key_file
If cipher_suites is left blank, a safe default list is used.
Refer to the Go TLS documentation for a list of supported cipher suites.
The curve_preferences argument determines the set of elliptic curves to prefer during a handshake in preference order.
If not provided, a default list is used.
The set of elliptic curves available are X25519, P521, P256, and P384.
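Putting these arguments together, a sketch of mutual TLS with periodic certificate reloading follows. The file paths, label, and endpoint are hypothetical examples:

```alloy
otelcol.exporter.otlp "mutual_tls" {
  client {
    endpoint = "collector.example.com:4317"

    tls {
      // Paths below are placeholders; use either *_file or *_pem, not both.
      ca_file         = "/etc/alloy/certs/ca.pem"
      cert_file       = "/etc/alloy/certs/client.pem"
      key_file        = "/etc/alloy/certs/client.key"
      min_version     = "TLS 1.2"
      reload_interval = "24h" // pick up rotated certificates daily
    }
  }
}
```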
tpm
The tpm block configures retrieving the TLS key_file from a trusted device.
The following arguments are supported:
| Name | Type | Description | Default | Required |
|---|---|---|---|---|
| auth | string | The authorization value used to authenticate the TPM device. | "" | no |
| enabled | bool | Load the tls.key_file from TPM. | false | no |
| owner_auth | string | The owner authorization value used to authenticate the TPM device. | "" | no |
| path | string | Path to the TPM device or Unix domain socket. | "" | no |
The trusted platform module (TPM) configuration can be used to load the TLS key from a TPM. Currently, only the TSS2 format is supported.
The path attribute isn't supported on Windows.
Example
```alloy
otelcol.example.component "<LABEL>" {
  ...
  tls {
    ...
    key_file = "my-tss2-key.key"
    tpm {
      enabled = true
      path    = "/dev/tpmrm0"
    }
  }
}
```

In the above example, the private key my-tss2-key.key in TSS2 format is loaded from the TPM device /dev/tpmrm0.
Note
`otelcol.exporter.otlp` uses gRPC, which doesn't allow you to send sensitive credentials like `auth` over insecure channels. Sending sensitive credentials over insecure non-TLS connections is supported by non-gRPC exporters such as `otelcol.exporter.otlphttp`.
debug_metrics
The debug_metrics block configures the metrics that this component generates to monitor its state.
The following arguments are supported:
| Name | Type | Description | Default | Required |
|---|---|---|---|---|
| disable_high_cardinality_metrics | boolean | Whether to disable certain high cardinality metrics. | true | no |
disable_high_cardinality_metrics is the Alloy equivalent to the telemetry.disableHighCardinalityMetrics feature gate in the OpenTelemetry Collector.
It removes attributes that could cause high cardinality metrics.
For example, attributes with IP addresses and port numbers in metrics about HTTP and gRPC connections are removed.
Note
If configured, `disable_high_cardinality_metrics` only applies to `otelcol.exporter.*` and `otelcol.receiver.*` components.
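For example, a sketch that re-enables the high cardinality metrics for detailed per-connection debugging. The label and endpoint are placeholders:

```alloy
otelcol.exporter.otlp "debug_detail" {
  client {
    endpoint = "collector.example.com:4317"
  }

  debug_metrics {
    // Keep high-cardinality attributes such as IP addresses and port numbers.
    disable_high_cardinality_metrics = false
  }
}
```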
retry_on_failure
The retry_on_failure block configures how failed requests to the gRPC server are retried.
The following arguments are supported:
| Name | Type | Description | Default | Required |
|---|---|---|---|---|
| enabled | boolean | Enables retrying failed requests. | true | no |
| initial_interval | duration | Initial time to wait before retrying a failed request. | "5s" | no |
| max_elapsed_time | duration | Maximum time to wait before discarding a failed batch. | "5m" | no |
| max_interval | duration | Maximum time to wait between retries. | "30s" | no |
| multiplier | number | Factor to grow wait time before retrying. | 1.5 | no |
| randomization_factor | number | Factor to randomize wait time before retrying. | 0.5 | no |
When enabled is true, failed batches are retried after a given interval.
The initial_interval argument specifies how long to wait before the first retry attempt.
If requests continue to fail, the time to wait before retrying increases by the factor specified by the multiplier argument, which must be greater than 1.0.
The max_interval argument specifies the upper bound of how long to wait between retries.
The randomization_factor argument is useful for adding jitter between retrying Alloy instances.
If randomization_factor is greater than 0, the wait time before retries is multiplied by a random factor in the range [ I - randomization_factor * I, I + randomization_factor * I], where I is the current interval.
If a batch hasn’t been sent successfully, it’s discarded after the time specified by max_elapsed_time elapses.
If max_elapsed_time is set to "0s", failed requests are retried forever until they succeed.
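The backoff behavior described above can be sketched as follows. The label, endpoint, and values are illustrative assumptions, not defaults:

```alloy
otelcol.exporter.otlp "patient_retries" {
  client {
    endpoint = "collector.example.com:4317"
  }

  retry_on_failure {
    initial_interval = "10s"  // first retry after roughly 10 seconds
    multiplier       = 2      // subsequent waits: ~10s, ~20s, ~40s, ...
    max_interval     = "60s"  // ...capped at 60 seconds between retries
    max_elapsed_time = "10m"  // discard the batch after 10 minutes of failures
  }
}
```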
sending_queue
The sending_queue block configures queueing and batching for the exporter.
The following arguments are supported:
| Name | Type | Description | Default | Required |
|---|---|---|---|---|
| block_on_overflow | boolean | The behavior when the component's TotalSize limit is reached. | false | no |
| enabled | boolean | Enables a buffer before sending data to the client. | true | no |
| num_consumers | number | Number of readers to send batches written to the queue in parallel. | 10 | no |
| queue_size | number | Maximum number of unwritten batches allowed in the queue at the same time. | 1000 | no |
| sizer | string | How the queue and batching are measured. | "requests" | no |
| wait_for_result | boolean | Whether incoming requests are blocked until the request is processed. | false | no |
| storage | capsule(otelcol.Handler) | Handler from an otelcol.storage component to use to enable a persistent queue mechanism. | | no |
The blocking argument is deprecated in favor of the block_on_overflow argument.
When block_on_overflow is true, the component waits for space in the queue. Otherwise, operations immediately return a retryable error.
When enabled is true, data is first written to an in-memory buffer before sending it to the configured server.
Batches sent to the component’s input exported field are added to the buffer as long as the number of unsent batches doesn’t exceed the configured queue_size.
queue_size determines how long an endpoint outage is tolerated.
Assuming 100 requests/second, the default queue size 1000 provides about 10 seconds of outage tolerance.
To calculate the correct value for queue_size, multiply the average number of outgoing requests per second by the time in seconds that outages are tolerated. A very high value can cause Out Of Memory (OOM) kills.
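Applying that calculation, a sketch sized for roughly 200 requests per second and 30 seconds of tolerated outage follows. The label, endpoint, and traffic figures are assumptions:

```alloy
otelcol.exporter.otlp "buffered" {
  client {
    endpoint = "collector.example.com:4317"
  }

  sending_queue {
    // 200 requests/second * 30 seconds of tolerated outage = 6000 queued requests.
    queue_size        = 6000
    num_consumers     = 20   // drain the queue faster at the cost of more network traffic
    block_on_overflow = true // wait for space instead of returning a retryable error
  }
}
```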
The sizer argument can be set to:
- requests: The number of incoming batches of metrics, logs, or traces (the most performant option).
- items: The number of the smallest parts of each signal (spans, metric data points, log records).
- bytes: The size of the serialized data in bytes (the least performant option).
The num_consumers argument controls how many readers read from the buffer and send data in parallel.
Larger values of num_consumers allow data to be sent more quickly at the expense of increased network traffic.
If an otelcol.storage.* component is configured and provided in the queue's storage argument, the queue uses the provided storage extension as a persistent queue, and the queue is no longer stored in memory.
If Alloy is killed or restarted, any persisted data is processed on startup.
Refer to the exporterhelper documentation in the OpenTelemetry Collector repository for more details.
batch
The batch block configures batching requests based on a timeout and a minimum number of items.
By default, the batch block is not used.
The following arguments are supported:
| Name | Type | Description | Default | Required |
|---|---|---|---|---|
| flush_timeout | duration | Time after which a batch is sent regardless of its size. Must be a non-zero value. | | yes |
| min_size | number | The minimum size of a batch. | | yes |
| max_size | number | The maximum size of a batch. Enables batch splitting. | | yes |
| sizer | string | How the queue and batching are measured. Overrides the sizer set at the sending_queue level for batching. | | yes |
max_size must be greater than or equal to min_size.
The sizer argument can be set to:
- items: The number of the smallest parts of each signal (spans, metric data points, log records).
- bytes: The size of the serialized data in bytes (the least performant option).
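A sketch combining the batch block with its parent sending_queue follows. The label, endpoint, and sizes are illustrative assumptions:

```alloy
otelcol.exporter.otlp "batched" {
  client {
    endpoint = "collector.example.com:4317"
  }

  sending_queue {
    batch {
      flush_timeout = "200ms"  // send a partial batch after 200ms
      min_size      = 1000     // otherwise wait for at least 1000 items
      max_size      = 10000    // split batches larger than 10000 items
      sizer         = "items"  // measure batches in spans/data points/log records
    }
  }
}
```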
Exported fields
The following fields are exported and can be referenced by other components:
| Name | Type | Description |
|---|---|---|
input | otelcol.Consumer | A value that other components can use to send telemetry data to. |
input accepts otelcol.Consumer data for any telemetry signal (metrics, logs, or traces).
Component health
otelcol.exporter.otlp is only reported as unhealthy if given an invalid configuration.
Debug information
otelcol.exporter.otlp doesn’t expose any component-specific debug information.
Debug metrics
- otelcol_exporter_queue_capacity (gauge): Fixed capacity of the retry queue (in batches).
- otelcol_exporter_queue_size (gauge): Current size of the retry queue (in batches).
- otelcol_exporter_send_failed_spans_total (counter): Number of spans in failed attempts to send to destination.
- otelcol_exporter_sent_spans_total (counter): Number of spans successfully sent to destination.
- rpc_client_duration_milliseconds (histogram): Measures the duration of inbound RPC.
- rpc_client_request_size_bytes (histogram): Measures the size of RPC request messages (uncompressed).
- rpc_client_requests_per_rpc (histogram): Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs.
- rpc_client_response_size_bytes (histogram): Measures the size of RPC response messages (uncompressed).
- rpc_client_responses_per_rpc (histogram): Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs.
Examples
The following examples show you how to create an exporter to send data to different destinations.
Send data to a local Tempo instance
You can create an exporter that sends your data to a local Grafana Tempo instance without TLS:
```alloy
otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "tempo:4317"

    tls {
      insecure             = true
      insecure_skip_verify = true
    }
  }
}
```

Send data to a managed service
You can create an otlp exporter that sends your data to a managed service, for example, Grafana Cloud.
The Tempo username and Grafana Cloud API Key are injected in this example through environment variables.
```alloy
otelcol.exporter.otlp "grafana_cloud_traces" {
  client {
    endpoint = "tempo-xxx.grafana.net/tempo:443"
    auth     = otelcol.auth.basic.grafana_cloud_traces.handler
  }
}

otelcol.auth.basic "grafana_cloud_traces" {
  username = sys.env("TEMPO_USERNAME")
  password = sys.env("GRAFANA_CLOUD_API_KEY")
}
```

Compatible components
otelcol.exporter.otlp has exports that can be consumed by the following components:
- Components that consume OpenTelemetry
otelcol.Consumer
Note
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.



