
Caution

Grafana Alloy is the new name for our distribution of the OTel collector. Grafana Agent has been deprecated and is in Long-Term Support (LTS) through October 31, 2025. Grafana Agent will reach End-of-Life (EOL) on November 1, 2025. Read more about why we recommend migrating to Grafana Alloy.



prometheus.remote_write

prometheus.remote_write collects metrics sent from other components into a Write-Ahead Log (WAL) and forwards them over the network to a series of user-supplied endpoints. Metrics are sent over the network using the Prometheus Remote Write protocol.

Multiple prometheus.remote_write components can be specified by giving them different labels.

Usage

river
prometheus.remote_write "LABEL" {
  endpoint {
    url = REMOTE_WRITE_URL

    ...
  }

  ...
}

Arguments

The following arguments are supported:

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| external_labels | map(string) | Labels to add to metrics sent over the network. | | no |
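
For example, a minimal sketch that uses external_labels to stamp every outgoing metric with the same labels (the label names, values, and URL are illustrative):

river
prometheus.remote_write "default" {
  // Attach these labels to every metric sent over the network.
  external_labels = {
    "cluster" = "primary",
    "env"     = "staging",
  }

  endpoint {
    url = "http://mimir:9009/api/v1/push"
  }
}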

Blocks

The following blocks are supported inside the definition of prometheus.remote_write:

| Hierarchy | Block | Description | Required |
|-----------|-------|-------------|----------|
| endpoint | endpoint | Location to send metrics to. | no |
| endpoint > basic_auth | basic_auth | Configure basic_auth for authenticating to the endpoint. | no |
| endpoint > authorization | authorization | Configure generic authorization to the endpoint. | no |
| endpoint > oauth2 | oauth2 | Configure OAuth2 for authenticating to the endpoint. | no |
| endpoint > oauth2 > tls_config | tls_config | Configure TLS settings for connecting to the endpoint. | no |
| endpoint > sigv4 | sigv4 | Configure AWS Signature Version 4 for authenticating to the endpoint. | no |
| endpoint > azuread | azuread | Configure AzureAD for authenticating to the endpoint. | no |
| endpoint > azuread > managed_identity | managed_identity | Configure Azure user-assigned managed identity. | yes |
| endpoint > tls_config | tls_config | Configure TLS settings for connecting to the endpoint. | no |
| endpoint > queue_config | queue_config | Configuration for how metrics are batched before sending. | no |
| endpoint > metadata_config | metadata_config | Configuration for how metric metadata is sent. | no |
| endpoint > write_relabel_config | write_relabel_config | Configuration for write_relabel_config. | no |
| wal | wal | Configuration for the component’s WAL. | no |

The > symbol indicates deeper levels of nesting. For example, endpoint > basic_auth refers to a basic_auth block defined inside an endpoint block.

endpoint block

The endpoint block describes a single location to send metrics to. Multiple endpoint blocks can be provided to send metrics to multiple locations.

The following arguments are supported:

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| url | string | Full URL to send metrics to. | | yes |
| name | string | Optional name to identify the endpoint in metrics. | | no |
| remote_timeout | duration | Timeout for requests made to the URL. | "30s" | no |
| headers | map(string) | Extra headers to deliver with the request. | | no |
| send_exemplars | bool | Whether exemplars should be sent. | true | no |
| send_native_histograms | bool | Whether native histograms should be sent. | false | no |
| bearer_token_file | string | File containing a bearer token to authenticate with. | | no |
| bearer_token | secret | Bearer token to authenticate with. | | no |
| enable_http2 | bool | Whether HTTP2 is supported for requests. | true | no |
| follow_redirects | bool | Whether redirects returned by the server should be followed. | true | no |
| proxy_url | string | HTTP proxy to send requests through. | | no |
| no_proxy | string | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
| proxy_from_environment | bool | Use the proxy URL indicated by environment variables. | false | no |
| proxy_connect_header | map(list(secret)) | Specifies headers to send to proxies during CONNECT requests. | | no |

At most, one of the following can be provided:

  • bearer_token argument
  • bearer_token_file argument
  • basic_auth block
  • authorization block
  • oauth2 block
  • sigv4 block
  • azuread block

When multiple endpoint blocks are provided, metrics are concurrently sent to all configured locations. Each endpoint has a queue which is used to read metrics from the WAL and queue them for sending. The queue_config block can be used to customize the behavior of the queue.

Endpoints can be named for easier identification in debug metrics using the name argument. If the name argument isn’t provided, a name is generated based on a hash of the endpoint settings.
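
For example, a minimal sketch that sends metrics to two locations concurrently and names each endpoint so its debug metrics are easy to tell apart (both names and URLs are illustrative):

river
prometheus.remote_write "default" {
  // Each endpoint gets its own queue reading from the shared WAL.
  endpoint {
    name = "primary"
    url  = "http://mimir:9009/api/v1/push"
  }

  endpoint {
    name = "backup"
    url  = "http://mimir-backup:9009/api/v1/push"
  }
}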

When send_native_histograms is true, native Prometheus histogram samples sent to prometheus.remote_write are forwarded to the configured endpoint. If the endpoint doesn’t support receiving native histogram samples, pushing metrics fails.

no_proxy can contain IPs, CIDR notations, and domain names. IP addresses and domain names can contain port numbers. proxy_url must be configured if no_proxy is configured.

proxy_from_environment uses the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof). Requests use the proxy from the environment variable matching their scheme, unless excluded by NO_PROXY. proxy_url and no_proxy must not be configured if proxy_from_environment is configured.

proxy_connect_header should only be configured if proxy_url or proxy_from_environment are configured.

basic_auth block

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| password_file | string | File containing the basic auth password. | | no |
| password | secret | Basic auth password. | | no |
| username | string | Basic auth username. | | no |

password and password_file are mutually exclusive, and only one can be provided inside a basic_auth block.
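
For example, a minimal sketch that reads the basic auth password from a file instead of embedding it in the configuration (the username, file path, and URL are illustrative):

river
prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"

    basic_auth {
      username      = "example-user"
      password_file = "/var/lib/secrets/remote-write-password"
    }
  }
}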

authorization block

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| credentials_file | string | File containing the secret value. | | no |
| credentials | secret | Secret value. | | no |
| type | string | Authorization type, for example, "Bearer". | | no |

credentials and credentials_file are mutually exclusive, and only one can be provided inside an authorization block.
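
For example, a minimal sketch that sends a bearer token read from a file with each request (the token path and URL are illustrative):

river
prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"

    authorization {
      type             = "Bearer"
      credentials_file = "/var/lib/secrets/bearer-token"
    }
  }
}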

oauth2 block

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| client_id | string | OAuth2 client ID. | | no |
| client_secret_file | string | File containing the OAuth2 client secret. | | no |
| client_secret | secret | OAuth2 client secret. | | no |
| endpoint_params | map(string) | Optional parameters to append to the token URL. | | no |
| proxy_url | string | HTTP proxy to send requests through. | | no |
| no_proxy | string | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
| proxy_from_environment | bool | Use the proxy URL indicated by environment variables. | false | no |
| proxy_connect_header | map(list(secret)) | Specifies headers to send to proxies during CONNECT requests. | | no |
| scopes | list(string) | List of scopes to authenticate with. | | no |
| token_url | string | URL to fetch the token from. | | no |

client_secret and client_secret_file are mutually exclusive, and only one can be provided inside an oauth2 block.

The oauth2 block may also contain a separate tls_config sub-block.

no_proxy can contain IPs, CIDR notations, and domain names. IP addresses and domain names can contain port numbers. proxy_url must be configured if no_proxy is configured.

proxy_from_environment uses the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof). Requests use the proxy from the environment variable matching their scheme, unless excluded by NO_PROXY. proxy_url and no_proxy must not be configured if proxy_from_environment is configured.

proxy_connect_header should only be configured if proxy_url or proxy_from_environment are configured.
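
For example, a minimal sketch of the client credentials flow with the client secret kept in a file (the client ID, file path, token URL, and scope are illustrative):

river
prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"

    oauth2 {
      client_id          = "example-client"
      client_secret_file = "/var/lib/secrets/oauth2-client-secret"
      token_url          = "https://auth.example.com/oauth2/token"
      scopes             = ["metrics:write"]
    }
  }
}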

sigv4 block

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| access_key | string | AWS API access key. | | no |
| profile | string | Named AWS profile used to authenticate. | | no |
| region | string | AWS region. | | no |
| role_arn | string | AWS Role ARN, an alternative to using AWS API keys. | | no |
| secret_key | secret | AWS API secret key. | | no |

If region is left blank, the region from the default credentials chain is used.

If access_key is left blank, the environment variable AWS_ACCESS_KEY_ID is used.

If secret_key is left blank, the environment variable AWS_SECRET_ACCESS_KEY is used.
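
For example, a minimal sketch that authenticates by assuming a role, leaving the access and secret keys blank so they fall back to the environment variables described above (the workspace URL, region, and role ARN are illustrative placeholders):

river
prometheus.remote_write "default" {
  endpoint {
    url = "https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write"

    sigv4 {
      region   = "us-east-1"
      role_arn = "arn:aws:iam::123456789012:role/example-remote-write"
      // access_key and secret_key are omitted, so AWS_ACCESS_KEY_ID and
      // AWS_SECRET_ACCESS_KEY (or the default credentials chain) are used.
    }
  }
}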

azuread block

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| cloud | string | The Azure Cloud. | "AzurePublic" | no |

The supported values for cloud are:

  • "AzurePublic"
  • "AzureChina"
  • "AzureGovernment"

managed_identity block

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| client_id | string | Client ID of the managed identity used to authenticate. | | yes |

client_id should be a valid UUID in one of the supported formats:

  • xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  • urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  • Microsoft encoding: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
  • Raw hex encoding: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
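
For example, a minimal sketch that authenticates with a user-assigned managed identity against the public Azure cloud (the ingestion URL and client ID are placeholders; substitute your Azure Monitor workspace's values):

river
prometheus.remote_write "default" {
  endpoint {
    url = "https://example-workspace.eastus.prometheus.monitor.azure.com/api/v1/write"

    azuread {
      cloud = "AzurePublic"

      managed_identity {
        client_id = "00000000-0000-0000-0000-000000000000"
      }
    }
  }
}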

tls_config block

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| ca_pem | string | CA PEM-encoded text to validate the server with. | | no |
| ca_file | string | CA certificate to validate the server with. | | no |
| cert_pem | string | Certificate PEM-encoded text for client authentication. | | no |
| cert_file | string | Certificate file for client authentication. | | no |
| insecure_skip_verify | bool | Disables validation of the server certificate. | | no |
| key_file | string | Key file for client authentication. | | no |
| key_pem | secret | Key PEM-encoded text for client authentication. | | no |
| min_version | string | Minimum acceptable TLS version. | | no |
| server_name | string | ServerName extension to indicate the name of the server. | | no |

The following pairs of arguments are mutually exclusive and can’t both be set simultaneously:

  • ca_pem and ca_file
  • cert_pem and cert_file
  • key_pem and key_file

When configuring client authentication, both the client certificate (using cert_pem or cert_file) and the client key (using key_pem or key_file) must be provided.

When min_version is not provided, the minimum acceptable TLS version is inherited from Go’s default minimum version, TLS 1.2. If min_version is provided, it must be set to one of the following strings:

  • "TLS10" (TLS 1.0)
  • "TLS11" (TLS 1.1)
  • "TLS12" (TLS 1.2)
  • "TLS13" (TLS 1.3)

queue_config block

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| capacity | number | Number of samples to buffer per shard. | 10000 | no |
| min_shards | number | Minimum amount of concurrent shards sending samples to the endpoint. | 1 | no |
| max_shards | number | Maximum number of concurrent shards sending samples to the endpoint. | 50 | no |
| max_samples_per_send | number | Maximum number of samples per send. | 2000 | no |
| batch_send_deadline | duration | Maximum time samples will wait in the buffer before sending. | "5s" | no |
| min_backoff | duration | Initial retry delay. The backoff time gets doubled for each retry. | "30ms" | no |
| max_backoff | duration | Maximum retry delay. | "5s" | no |
| retry_on_http_429 | bool | Retry when an HTTP 429 status code is received. | true | no |
| sample_age_limit | duration | Maximum age of samples to send. | "0s" | no |

Each queue manages a number of concurrent shards, each of which is responsible for sending a fraction of the data to its respective endpoint. The number of shards is automatically raised if samples aren’t being sent to the endpoint quickly enough. The permitted range of shards can be configured with the min_shards and max_shards arguments.

Each shard has a buffer of samples it will keep in memory, controlled with the capacity argument. New metrics aren’t read from the WAL unless there is at least one shard that is not at maximum capacity.

The buffer of a shard is flushed and sent to the endpoint either after the shard reaches the number of samples specified by max_samples_per_send or the duration specified by batch_send_deadline has elapsed since the last flush for that shard.

Shards retry requests which fail due to a recoverable error. An error is recoverable if the server responds with an HTTP 5xx status code. The delay between retries can be customized with the min_backoff and max_backoff arguments.

The retry_on_http_429 argument specifies whether HTTP 429 status code responses should be treated as recoverable errors; other HTTP 4xx status code responses are never considered recoverable errors. When retry_on_http_429 is enabled, Retry-After response headers from the servers are honored.

The sample_age_limit argument specifies the maximum age of samples to send. Any samples older than the limit are dropped and won’t be sent to the remote storage. The default value is 0s, which means that all samples are sent (feature is disabled).
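
For example, a minimal sketch that trades more memory for higher throughput and drops samples that have waited longer than 30 minutes (the specific values are illustrative, not tuning recommendations):

river
prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"

    queue_config {
      capacity             = 20000  // Buffer more samples per shard.
      max_shards           = 100    // Allow more concurrent senders.
      max_samples_per_send = 5000   // Send larger batches per request.
      min_backoff          = "50ms"
      max_backoff          = "10s"
      sample_age_limit     = "30m"  // Drop samples older than 30 minutes.
    }
  }
}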

metadata_config block

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| send | bool | Controls whether metric metadata is sent to the endpoint. | true | no |
| send_interval | duration | How frequently metric metadata is sent to the endpoint. | "1m" | no |
| max_samples_per_send | number | Maximum number of metadata samples to send to the endpoint at once. | 2000 | no |
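
For example, a minimal sketch that keeps metadata enabled but sends it less often than the default of one minute (the interval and URL are illustrative):

river
prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"

    metadata_config {
      send          = true
      send_interval = "5m"
    }
  }
}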

write_relabel_config block

The write_relabel_config block contains the definition of any relabeling rules that can be applied to an input metric. If more than one write_relabel_config block is defined, the transformations are applied in top-down order.

The following arguments can be used to configure a write_relabel_config. All arguments are optional. Omitted fields take their default values.

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| action | string | The relabeling action to perform. | replace | no |
| modulus | uint | A positive integer used to calculate the modulus of the hashed source label values. | | no |
| regex | string | A valid RE2 expression with support for parenthesized capture groups. Used to match the extracted value from the combination of the source_label and separator fields or filter labels during the labelkeep/labeldrop/labelmap actions. | (.*) | no |
| replacement | string | The value against which a regular expression replace is performed, if the regular expression matches the extracted value. Supports previously captured groups. | "$1" | no |
| separator | string | The separator used to concatenate the values present in source_labels. | ; | no |
| source_labels | list(string) | The list of labels whose values are to be selected. Their content is concatenated using the separator and matched against regex. | | no |
| target_label | string | Label to which the resulting value is written. | | no |

You can use the following actions:

  • drop - Drops metrics where regex matches the string extracted using the source_labels and separator.
  • dropequal - Drop targets for which the concatenated source_labels do match target_label.
  • hashmod - Hashes the concatenated labels, calculates the modulus of the hash using modulus, and writes the result to the target_label.
  • keep - Keeps metrics where regex matches the string extracted using the source_labels and separator.
  • keepequal - Drop targets for which the concatenated source_labels do not match target_label.
  • labeldrop - Matches regex against all label names. Any labels that match are removed from the metric’s label set.
  • labelkeep - Matches regex against all label names. Any labels that don’t match are removed from the metric’s label set.
  • labelmap - Matches regex against all label names. Any labels that match are renamed according to the contents of the replacement field.
  • lowercase - Sets target_label to the lowercase form of the concatenated source_labels.
  • replace - Matches regex to the concatenated labels. If there’s a match, it replaces the content of the target_label using the contents of the replacement field.
  • uppercase - Sets target_label to the uppercase form of the concatenated source_labels.

Note

The regular expression capture groups can be referred to using either the $CAPTURE_GROUP_NUMBER or ${CAPTURE_GROUP_NUMBER} notation.
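
For example, a minimal sketch with two rules applied in order: the first drops Go runtime metrics by name, and the second copies the instance label into a host label (the metric name pattern and label names are illustrative):

river
prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"

    // Drop metrics whose name matches go_.* before sending.
    write_relabel_config {
      source_labels = ["__name__"]
      regex         = "go_.*"
      action        = "drop"
    }

    // Copy the value of the "instance" label into a "host" label.
    write_relabel_config {
      source_labels = ["instance"]
      target_label  = "host"
      action        = "replace"
    }
  }
}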

wal block

The wal block customizes the Write-Ahead Log (WAL) used to temporarily store metrics before they are sent to the configured set of endpoints.

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| truncate_frequency | duration | How frequently to clean up the WAL. | "2h" | no |
| min_keepalive_time | duration | Minimum time to keep data in the WAL before it can be removed. | "5m" | no |
| max_keepalive_time | duration | Maximum time to keep data in the WAL before removing it. | "8h" | no |

The WAL serves two primary purposes:

  • Buffer unsent metrics in case of intermittent network issues.
  • Populate in-memory cache after a process restart.

The WAL is located inside a component-specific directory relative to the storage path Grafana Agent Flow is configured to use. See the agent run documentation for how to change the storage path.

The truncate_frequency argument configures how often to clean up the WAL. Every time the truncate_frequency period elapses, the lower two-thirds of data is removed from the WAL and is no longer available for sending.

When a WAL clean-up starts, the lowest successfully sent timestamp is used to determine how much data is safe to remove from the WAL. The min_keepalive_time and max_keepalive_time control the permitted age range of data in the WAL; samples aren’t removed until they are at least as old as min_keepalive_time, and samples are forcibly removed if they are older than max_keepalive_time.
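
For example, a minimal sketch that truncates the WAL more often than the default two hours and caps retained data at four hours, trading resiliency to long outages for lower memory and disk use (the durations and URL are illustrative):

river
prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"
  }

  wal {
    truncate_frequency = "1h"  // Clean up more often than the default "2h".
    min_keepalive_time = "5m"
    max_keepalive_time = "4h"  // Tolerate up to roughly 4h of endpoint downtime.
  }
}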

Exported fields

The following fields are exported and can be referenced by other components:

| Name | Type | Description |
|------|------|-------------|
| receiver | MetricsReceiver | A value which other components can use to send metrics to. |

Component health

prometheus.remote_write is only reported as unhealthy if given an invalid configuration. In those cases, exported fields are kept at their last healthy values.

Debug information

prometheus.remote_write does not expose any component-specific debug information.

Debug metrics

  • agent_wal_storage_active_series (gauge): Current number of active series being tracked by the WAL.
  • agent_wal_storage_deleted_series (gauge): Current number of series marked for deletion from memory.
  • agent_wal_out_of_order_samples_total (counter): Total number of failed attempts to ingest out-of-order samples.
  • agent_wal_storage_created_series_total (counter): Total number of created series appended to the WAL.
  • agent_wal_storage_removed_series_total (counter): Total number of series removed from the WAL.
  • agent_wal_samples_appended_total (counter): Total number of samples appended to the WAL.
  • agent_wal_exemplars_appended_total (counter): Total number of exemplars appended to the WAL.
  • prometheus_remote_storage_samples_total (counter): Total number of samples sent to remote storage.
  • prometheus_remote_storage_exemplars_total (counter): Total number of exemplars sent to remote storage.
  • prometheus_remote_storage_metadata_total (counter): Total number of metadata entries sent to remote storage.
  • prometheus_remote_storage_samples_failed_total (counter): Total number of samples that failed to send to remote storage due to non-recoverable errors.
  • prometheus_remote_storage_exemplars_failed_total (counter): Total number of exemplars that failed to send to remote storage due to non-recoverable errors.
  • prometheus_remote_storage_metadata_failed_total (counter): Total number of metadata entries that failed to send to remote storage due to non-recoverable errors.
  • prometheus_remote_storage_samples_retries_total (counter): Total number of samples that failed to send to remote storage but were retried due to recoverable errors.
  • prometheus_remote_storage_exemplars_retried_total (counter): Total number of exemplars that failed to send to remote storage but were retried due to recoverable errors.
  • prometheus_remote_storage_metadata_retried_total (counter): Total number of metadata entries that failed to send to remote storage but were retried due to recoverable errors.
  • prometheus_remote_storage_samples_dropped_total (counter): Total number of samples which were dropped after being read from the WAL before being sent to remote_write because of an unknown reference ID.
  • prometheus_remote_storage_exemplars_dropped_total (counter): Total number of exemplars which were dropped after being read from the WAL before being sent to remote_write because of an unknown reference ID.
  • prometheus_remote_storage_enqueue_retries_total (counter): Total number of times enqueue has failed because a shard’s queue was full.
  • prometheus_remote_storage_sent_batch_duration_seconds (histogram): Duration of send calls to remote storage.
  • prometheus_remote_storage_queue_highest_sent_timestamp_seconds (gauge): Unix timestamp of the latest WAL sample successfully sent by a queue.
  • prometheus_remote_storage_samples_pending (gauge): The number of samples pending in shards to be sent to remote storage.
  • prometheus_remote_storage_exemplars_pending (gauge): The number of exemplars pending in shards to be sent to remote storage.
  • prometheus_remote_storage_shard_capacity (gauge): The capacity of shards within a given queue.
  • prometheus_remote_storage_shards (gauge): The number of shards used for concurrent delivery of metrics to an endpoint.
  • prometheus_remote_storage_shards_min (gauge): The minimum number of shards a queue is allowed to run.
  • prometheus_remote_storage_shards_max (gauge): The maximum number of shards a queue is allowed to run.
  • prometheus_remote_storage_shards_desired (gauge): The number of shards a queue wants to run to be able to keep up with the amount of incoming metrics.
  • prometheus_remote_storage_bytes_total (counter): Total number of bytes of data sent by queues after compression.
  • prometheus_remote_storage_metadata_bytes_total (counter): Total number of bytes of metadata sent by queues after compression.
  • prometheus_remote_storage_max_samples_per_send (gauge): The maximum number of samples each shard is allowed to send in a single request.
  • prometheus_remote_storage_samples_in_total (counter): Samples read into remote storage.
  • prometheus_remote_storage_exemplars_in_total (counter): Exemplars read into remote storage.

Examples

The following examples show you how to create prometheus.remote_write components that send metrics to different destinations.

Send metrics to a local Mimir instance

You can create a prometheus.remote_write component that sends your metrics to a local Mimir instance:

river
prometheus.remote_write "staging" {
  // Send metrics to a locally running Mimir.
  endpoint {
    url = "http://mimir:9009/api/v1/push"

    basic_auth {
      username = "example-user"
      password = "example-password"
    }
  }
}

// Configure a prometheus.scrape component to send metrics to
// prometheus.remote_write component.
prometheus.scrape "demo" {
  targets = [
    // Collect metrics from the default HTTP listen address.
    {"__address__" = "127.0.0.1:12345"},
  ]
  forward_to = [prometheus.remote_write.staging.receiver]
}

Send metrics to a Mimir instance with a tenant specified

You can create a prometheus.remote_write component that sends your metrics to a specific tenant within the Mimir instance. This is useful when your Mimir instance is using more than one tenant:

river
prometheus.remote_write "staging" {
  // Send metrics to a Mimir instance
  endpoint {
    url = "http://mimir:9009/api/v1/push"

    headers = {
      "X-Scope-OrgID" = "staging",
    }
  }
}

Send metrics to a managed service

You can create a prometheus.remote_write component that sends your metrics to a managed service, for example, Grafana Cloud. The Prometheus username and the Grafana Cloud API Key are injected in this example through environment variables.

river
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus-xxx.grafana.net/api/prom/push"
      basic_auth {
        username = env("PROMETHEUS_USERNAME")
        password = env("GRAFANA_CLOUD_API_KEY")
      }
  }
}

Technical details

prometheus.remote_write uses snappy for compression.

Any labels that start with __ will be removed before sending to the endpoint.

Data retention

The prometheus.remote_write component uses a Write Ahead Log (WAL) to prevent data loss during network outages. The component buffers the received metrics in a WAL for each configured endpoint. The queue shards can use the WAL after the network outage is resolved and flush the buffered metrics to the endpoints.

The WAL records metrics in 128 MB files called segments. To avoid having a WAL that grows on-disk indefinitely, the component truncates its segments on a set interval.

On each truncation, the WAL deletes references to series that are no longer present and also checkpoints roughly the oldest two thirds of the segments (rounded down to the nearest integer) written to it since the last truncation period. A checkpoint means that the WAL only keeps track of the unique identifier for each existing metrics series, and can no longer use the samples for remote writing. If that data has not yet been pushed to the remote endpoint, it is lost.

This behavior dictates the data retention for the prometheus.remote_write component. It also means that it’s impossible to directly correlate data retention to the age of the data itself, as the truncation logic works on segments, not on the samples themselves. This makes data retention less predictable when the component receives an inconsistent rate of data.

The wal block in Flow mode and the metrics config in Static mode contain configurable parameters that control the tradeoff between memory usage, disk usage, and data retention.

The truncate_frequency or wal_truncate_frequency parameter configures the interval at which truncations happen. A lower value leads to reduced memory usage, but also provides less resiliency to long outages.

When a WAL clean-up starts, the most recently successfully sent timestamp is used to determine how much data is safe to remove from the WAL. The min_keepalive_time or min_wal_time controls the minimum age of samples considered for removal. No samples more recent than min_keepalive_time are removed. The max_keepalive_time or max_wal_time controls the maximum age of samples that can be kept in the WAL. Samples older than max_keepalive_time are forcibly removed.

Extended remote_write outages

When the remote write endpoint is unreachable over a period of time, the most recent successfully sent timestamp is not updated. The min_keepalive_time and max_keepalive_time arguments control the age range of data kept in the WAL.

If the remote write outage is longer than the max_keepalive_time parameter, then the WAL is truncated, and the oldest data is lost.

Intermittent remote_write outages

If the remote write endpoint is intermittently reachable, the most recent successfully sent timestamp is updated whenever the connection is successful. A successful connection advances the timestamp used by the clean-up logic’s min_keepalive_time comparison and triggers a truncation on the next truncate_frequency interval, which checkpoints two-thirds of the segments (rounded down to the nearest integer) written since the previous truncation.

Falling behind

If the queue shards can’t flush data quickly enough to keep up with the most recent data buffered in the WAL, the component is said to be ‘falling behind’. It’s not unusual for the component to temporarily fall behind by two or three scrape intervals. If the component falls behind by more than one third of the data written since the last truncation interval, it’s possible for the truncate loop to checkpoint data before it has been pushed to the remote_write endpoint.

WAL corruption

WAL corruption can occur when Grafana Agent unexpectedly stops while the latest WAL segments are still being written to disk. For example, the host computer has a general disk failure and crashes before you can stop Grafana Agent and other running services. When you restart Grafana Agent, it verifies the WAL, removing any corrupt segments it finds. Sometimes, this repair is unsuccessful, and you must manually delete the corrupted WAL to continue.

If the WAL becomes corrupted, Grafana Agent writes error messages such as err="failed to find segment for index" to the log file.

Note

Deleting a WAL segment or a WAL file permanently deletes the stored WAL data.

To delete the corrupted WAL:

  1. Stop Grafana Agent.

  2. Find and delete the contents of the wal directory.

    By default the wal directory is a subdirectory of the data-agent directory located in the Grafana Agent working directory. The WAL data directory may be different than the default depending on the wal_directory setting in your Static configuration file or the path specified by the Flow command line flag --storage-path.

    Note

    There is one wal directory per:

    • Metrics instance running in Static mode
    • prometheus.remote_write component running in Flow mode
  3. Start Grafana Agent and verify that the WAL is working correctly.

Compatible components

prometheus.remote_write has exports that can be consumed by the following components:

Note

Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.