otelcol.exporter.otlphttp

otelcol.exporter.otlphttp accepts telemetry data from other otelcol components and writes it over the network using the OTLP HTTP protocol.

NOTE: otelcol.exporter.otlphttp is a wrapper over the upstream OpenTelemetry Collector otlphttp exporter. Bug reports or feature requests will be redirected to the upstream repository, if necessary.

Multiple otelcol.exporter.otlphttp components can be specified by giving them different labels.

Usage

river
otelcol.exporter.otlphttp "LABEL" {
  client {
    endpoint = "HOST:PORT"
  }
}

Arguments

otelcol.exporter.otlphttp supports the following arguments:

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| metrics_endpoint | string | The endpoint to send metrics to. | client.endpoint + "/v1/metrics" | no |
| logs_endpoint | string | The endpoint to send logs to. | client.endpoint + "/v1/logs" | no |
| traces_endpoint | string | The endpoint to send traces to. | client.endpoint + "/v1/traces" | no |

The default value depends on the endpoint field set in the required client block. If set, these arguments override the client.endpoint field for the corresponding signal.
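For example, a hypothetical configuration that keeps the default metrics and logs paths but sends traces to a separate endpoint (the host names below are placeholders):

river
otelcol.exporter.otlphttp "default" {
  client {
    endpoint = "https://otlp.example.com"
  }

  // Overrides the default client.endpoint + "/v1/traces" for traces only.
  traces_endpoint = "https://traces.example.com/v1/traces"
}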

Blocks

The following blocks are supported inside the definition of otelcol.exporter.otlphttp:

| Hierarchy | Block | Description | Required |
|-----------|-------|-------------|----------|
| client | client | Configures the HTTP server to send telemetry data to. | yes |
| client > tls | tls | Configures TLS for the HTTP client. | no |
| sending_queue | sending_queue | Configures batching of data before sending. | no |
| retry_on_failure | retry_on_failure | Configures retry mechanism for failed requests. | no |
| debug_metrics | debug_metrics | Configures the metrics that this component generates to monitor its state. | no |

The > symbol indicates deeper levels of nesting. For example, client > tls refers to a tls block defined inside a client block.

client block

The client block configures the HTTP client used by the component.

The following arguments are supported:

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| endpoint | string | The target URL to send telemetry data to. | | yes |
| read_buffer_size | string | Size of the read buffer the HTTP client uses for reading server responses. | 0 | no |
| write_buffer_size | string | Size of the write buffer the HTTP client uses for writing requests. | "512KiB" | no |
| timeout | duration | Time to wait before marking a request as failed. | "30s" | no |
| headers | map(string) | Additional headers to send with the request. | {} | no |
| compression | string | Compression mechanism to use for requests. | "gzip" | no |
| max_idle_conns | int | Limits the number of idle HTTP connections the client can keep open. | 100 | no |
| max_idle_conns_per_host | int | Limits the number of idle HTTP connections the host can keep open. | 0 | no |
| max_conns_per_host | int | Limits the total (dialing, active, and idle) number of connections per host. | 0 | no |
| idle_conn_timeout | duration | Time to wait before an idle connection closes itself. | "90s" | no |
| disable_keep_alives | bool | Disable HTTP keep-alive. | false | no |
| auth | capsule(otelcol.Handler) | Handler from an otelcol.auth component to use for authenticating requests. | | no |

Setting disable_keep_alives to true adds significant overhead, because a new HTTP(S) connection must be established for every request. Before enabling this option, consider whether changes to the idle connection settings can achieve your goal.
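
As a sketch of the latter approach, the idle connection arguments can be tightened instead of disabling keep-alives (the endpoint is a placeholder):

river
otelcol.exporter.otlphttp "tuned" {
  client {
    endpoint = "https://otlp.example.com"

    // Prefer tuning idle connections over disable_keep_alives = true.
    max_idle_conns    = 10
    idle_conn_timeout = "30s"
  }
}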

By default, requests are compressed with gzip. The compression argument controls which compression mechanism to use. Supported strings are:

  • "gzip"
  • "zlib"
  • "deflate"
  • "snappy"
  • "zstd"

If compression is set to "none" or an empty string "", no compression is used.
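For example, a minimal sketch that turns compression off entirely (the endpoint is a placeholder):

river
otelcol.exporter.otlphttp "uncompressed" {
  client {
    endpoint    = "https://otlp.example.com"
    compression = "none"
  }
}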

tls block

The tls block configures TLS settings used for the connection to the HTTP server.

The following arguments are supported:

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| ca_file | string | Path to the CA file. | | no |
| ca_pem | string | CA PEM-encoded text to validate the server with. | | no |
| cert_file | string | Path to the TLS certificate. | | no |
| cert_pem | string | Certificate PEM-encoded text for client authentication. | | no |
| insecure_skip_verify | boolean | Ignores insecure server TLS certificates. | | no |
| insecure | boolean | Disables TLS when connecting to the configured server. | | no |
| key_file | string | Path to the TLS certificate key. | | no |
| key_pem | secret | Key PEM-encoded text for client authentication. | | no |
| max_version | string | Maximum acceptable TLS version for connections. | "TLS 1.3" | no |
| min_version | string | Minimum acceptable TLS version for connections. | "TLS 1.2" | no |
| reload_interval | duration | The duration after which the certificate is reloaded. | "0s" | no |
| server_name | string | Verifies the hostname of server certificates when set. | | no |

If the server doesn’t support TLS, set the insecure argument to true to disable TLS for connections to the server.

If reload_interval is set to "0s", the certificate is never reloaded.

The following pairs of arguments are mutually exclusive and can’t both be set simultaneously:

  • ca_pem and ca_file
  • cert_pem and cert_file
  • key_pem and key_file
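
For example, a hypothetical client that authenticates with a file-based certificate and CA, leaving the corresponding *_pem arguments unset (the paths and endpoint are placeholders):

river
otelcol.exporter.otlphttp "mtls" {
  client {
    endpoint = "https://otlp.example.com"

    tls {
      ca_file   = "/etc/ssl/certs/ca.pem"
      cert_file = "/etc/ssl/certs/client.pem"
      key_file  = "/etc/ssl/private/client-key.pem"
    }
  }
}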

sending_queue block

The sending_queue block configures an in-memory buffer of batches before data is sent to the HTTP server.

The following arguments are supported:

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| enabled | boolean | Enables an in-memory buffer before sending data to the client. | true | no |
| num_consumers | number | Number of readers to send batches written to the queue in parallel. | 10 | no |
| queue_size | number | Maximum number of unwritten batches allowed in the queue at the same time. | 5000 | no |

When enabled is true, data is first written to an in-memory buffer before sending it to the configured server. Batches sent to the component’s input exported field are added to the buffer as long as the number of unsent batches doesn’t exceed the configured queue_size.

queue_size determines how long an endpoint outage is tolerated. Assuming 100 requests/second, the default queue size 5000 provides about 50 seconds of outage tolerance. To calculate the correct value for queue_size, multiply the average number of outgoing requests per second by the time in seconds that outages are tolerated.

The num_consumers argument controls how many readers read from the buffer and send data in parallel. Larger values of num_consumers allow data to be sent more quickly at the expense of increased network traffic.
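
As a worked example, if the exporter sends roughly 50 requests per second and an outage of 5 minutes (300 seconds) must be tolerated, queue_size should be about 50 * 300 = 15000. A hypothetical configuration for that case (the endpoint is a placeholder):

river
otelcol.exporter.otlphttp "buffered" {
  client {
    endpoint = "https://otlp.example.com"
  }

  sending_queue {
    num_consumers = 10
    queue_size    = 15000
  }
}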

retry_on_failure block

The retry_on_failure block configures how failed requests to the HTTP server are retried.

The following arguments are supported:

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| enabled | boolean | Enables retrying failed requests. | true | no |
| initial_interval | duration | Initial time to wait before retrying a failed request. | "5s" | no |
| max_elapsed_time | duration | Maximum time to wait before discarding a failed batch. | "5m" | no |
| max_interval | duration | Maximum time to wait between retries. | "30s" | no |
| multiplier | number | Factor to grow wait time before retrying. | 1.5 | no |
| randomization_factor | number | Factor to randomize wait time before retrying. | 0.5 | no |

When enabled is true, failed batches are retried after a given interval. The initial_interval argument specifies how long to wait before the first retry attempt. If requests continue to fail, the time to wait before retrying increases by the factor specified by the multiplier argument, which must be greater than 1.0. The max_interval argument specifies the upper bound of how long to wait between retries.

The randomization_factor argument is useful for adding jitter between retrying agents. If randomization_factor is greater than 0, the wait time before retries is multiplied by a random factor in the range [I - randomization_factor * I, I + randomization_factor * I], where I is the current interval.

If a batch hasn’t been sent successfully, it is discarded after the time specified by max_elapsed_time elapses. If max_elapsed_time is set to "0s", failed requests are retried forever until they succeed.
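
For example, a hypothetical configuration that backs off more slowly than the defaults and keeps retrying for up to 10 minutes before discarding a batch (the endpoint is a placeholder):

river
otelcol.exporter.otlphttp "resilient" {
  client {
    endpoint = "https://otlp.example.com"
  }

  retry_on_failure {
    initial_interval = "10s"
    max_interval     = "1m"
    max_elapsed_time = "10m"
  }
}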

debug_metrics block

The debug_metrics block configures the metrics that this component generates to monitor its state.

The following arguments are supported:

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| disable_high_cardinality_metrics | boolean | Whether to disable certain high cardinality metrics. | false | no |

disable_high_cardinality_metrics is the Grafana Agent equivalent to the telemetry.disableHighCardinalityMetrics feature gate in the OpenTelemetry Collector. It removes attributes that could cause high cardinality metrics. For example, attributes with IP addresses and port numbers in metrics about HTTP and gRPC connections are removed.
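
For example, a minimal sketch that disables these high cardinality metrics (the endpoint is a placeholder):

river
otelcol.exporter.otlphttp "default" {
  client {
    endpoint = "https://otlp.example.com"
  }

  debug_metrics {
    disable_high_cardinality_metrics = true
  }
}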

Exported fields

The following fields are exported and can be referenced by other components:

| Name | Type | Description |
|------|------|-------------|
| input | otelcol.Consumer | A value that other components can use to send telemetry data to. |

input accepts otelcol.Consumer data for any telemetry signal (metrics, logs, or traces).
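
For example, a hypothetical pipeline fragment in which a batch processor forwards all three signals to an otelcol.exporter.otlphttp component labeled "default":

river
otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.otlphttp.default.input]
    logs    = [otelcol.exporter.otlphttp.default.input]
    traces  = [otelcol.exporter.otlphttp.default.input]
  }
}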

Component health

otelcol.exporter.otlphttp is only reported as unhealthy if given an invalid configuration.

Debug information

otelcol.exporter.otlphttp does not expose any component-specific debug information.

Example

This example creates an exporter to send data to a locally running Grafana Tempo without TLS:

river
otelcol.exporter.otlphttp "tempo" {
    client {
        endpoint = "http://tempo:4317"
        tls {
            insecure             = true
            insecure_skip_verify = true
        }
    }
}