prometheus.scrape
prometheus.scrape configures a Prometheus scraping job for a given set of targets.
The scraped metrics are forwarded to the list of receivers passed in forward_to.
You can specify multiple prometheus.scrape components by giving them different labels.
Usage
```alloy
prometheus.scrape "<LABEL>" {
  targets    = <TARGET_LIST>
  forward_to = <RECEIVER_LIST>
}
```
Arguments
The component configures and starts a new scrape job to scrape all the input targets. The list of arguments that can be used to configure the block is presented below.
The scrape job name defaults to the component’s unique identifier.
One of the following can be provided:
- authorization block
- basic_auth block
- bearer_token_file argument
- bearer_token argument
- oauth2 block
If conflicting attributes are passed, for example, defining both a bearer_token and bearer_token_file or configuring both basic_auth and oauth2 at the same time, the component reports an error.
You can use the following arguments with prometheus.scrape:
EXPERIMENTAL: The honor_metadata argument is an experimental feature. Enabling it may increase resource consumption, particularly if a lot of metrics with different names are ingested. Not all downstream components may be compatible with Prometheus metadata yet. For example, otelcol.receiver.prometheus may work, but prometheus.remote_write may not. Support for more components will be added soon. Experimental features are subject to frequent breaking changes, and may be removed with no equivalent replacement. To enable and use an experimental feature, you must set the stability.level flag to experimental.
The scrape_protocols argument controls the preferred order of protocols to negotiate during a scrape.
The following values are supported:
- OpenMetricsText0.0.1
- OpenMetricsText1.0.0
- PrometheusProto
- PrometheusText0.0.4
- PrometheusText1.0.0
You can also use the scrape_fallback_protocol argument to specify a fallback protocol to use if the target does not provide a valid Content-Type header.
If you were using the deprecated enable_protobuf_negotiation argument, switch to using scrape_protocols = ["PrometheusProto", "OpenMetricsText1.0.0", "OpenMetricsText0.0.1", "PrometheusText0.0.4"] instead.
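As a sketch, a job migrated off the deprecated argument, with a text-format fallback for targets that don't send a valid Content-Type header, might look like the following. The target address and receiver name are placeholder assumptions, and the fallback value assumes the same protocol names listed above are accepted.

```alloy
prometheus.scrape "protobuf_first" {
  targets    = [{"__address__" = "app:8080"}]              // placeholder target
  forward_to = [prometheus.remote_write.default.receiver]  // placeholder receiver

  // Negotiate protobuf first, then fall back through the text-based formats.
  scrape_protocols         = ["PrometheusProto", "OpenMetricsText1.0.0", "OpenMetricsText0.0.1", "PrometheusText0.0.4"]
  // Used when the target doesn't provide a valid Content-Type header.
  scrape_fallback_protocol = "PrometheusText0.0.4"
}
```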
For now, native histograms are only available through the Prometheus Protobuf exposition format.
To scrape native histograms, scrape_native_histograms must be set to true and the first item in scrape_protocols must be PrometheusProto.
The default value for scrape_protocols changes to ["PrometheusProto", "OpenMetricsText1.0.0", "OpenMetricsText0.0.1", "PrometheusText1.0.0", "PrometheusText0.0.4"] when scrape_native_histograms is set to true.
The metric_name_validation_scheme argument controls how metric names are validated. The following values are supported:
- "utf8" - Uses the UTF-8 validation scheme.
- "legacy" - Uses the legacy validation scheme, which was the default in Prometheus v2 (default).
The metric_name_escaping_scheme argument controls how metric names are escaped. The following values are supported:
- "allow-utf-8" - Allows UTF-8 characters in metric names. No escaping is required. This is the default when the validation scheme is "utf8".
- "underscores" - Replaces all legacy-invalid characters with underscores. This is the default when the validation scheme is "legacy".
- "dots" - Replaces all legacy-invalid characters with dots, except that dots are converted to _dot_ and pre-existing underscores are converted to __.
- "values" - Prepends the name with U__ and replaces all invalid characters with their Unicode value, surrounded by underscores. Single underscores are replaced with double underscores.
Note: metric_name_escaping_scheme can't be set to "allow-utf-8" unless metric_name_validation_scheme is set to "utf8".
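For instance, here's a minimal sketch that opts in to UTF-8 metric names so that no escaping is required. The target and receiver are placeholder assumptions.

```alloy
prometheus.scrape "utf8_names" {
  targets    = [{"__address__" = "app:8080"}]              // placeholder target
  forward_to = [prometheus.remote_write.default.receiver]  // placeholder receiver

  // With the "utf8" validation scheme, "allow-utf-8" is the default escaping scheme.
  metric_name_validation_scheme = "utf8"
  metric_name_escaping_scheme   = "allow-utf-8"
}
```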
no_proxy can contain IPs, CIDR notations, and domain names. IPs and domain names can contain port numbers.
proxy_url must be configured if no_proxy is configured.
proxy_from_environment uses the environment variables HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or the lowercase versions thereof).
Requests use the proxy from the environment variable matching their scheme, unless excluded by NO_PROXY.
proxy_url and no_proxy must not be configured if proxy_from_environment is configured.
proxy_connect_header should only be configured if proxy_url or proxy_from_environment are configured.
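For example, a sketch that sends scrape requests through an HTTP proxy while exempting an internal range might look like this. The proxy address and exemptions are assumptions.

```alloy
prometheus.scrape "proxied" {
  targets    = [{"__address__" = "external-app.example.com:443"}]  // placeholder target
  forward_to = [prometheus.remote_write.default.receiver]          // placeholder receiver
  scheme     = "https"

  // no_proxy requires proxy_url to be set as well.
  proxy_url = "http://proxy.internal:3128"        // hypothetical proxy
  no_proxy  = "10.0.0.0/8,.internal.example.com"  // hypothetical exemptions
}
```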
track_timestamps_staleness controls whether Prometheus tracks staleness of metrics with an explicit timestamp present in scraped data.
An "explicit timestamp" is an optional timestamp in the Prometheus metrics exposition format. For example, this sample has a timestamp of 1395066363000:

```
http_requests_total{method="post",code="200"} 1027 1395066363000
```

If track_timestamps_staleness is set to true, a staleness marker is inserted when a metric is no longer present or the target is down. A "staleness marker" is a sample with a specific NaN value that's reserved for internal use by Prometheus.

We recommend setting track_timestamps_staleness to true if the database that metrics are written to has out-of-order ingestion enabled. If track_timestamps_staleness is set to false, samples with explicit timestamps are only marked stale after a certain time period, which in Prometheus is 5 minutes by default.
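Here's a minimal sketch with staleness tracking enabled for explicitly timestamped samples, assuming the downstream database accepts out-of-order ingestion. The target and receiver are placeholders.

```alloy
prometheus.scrape "pushgateway" {
  targets    = [{"__address__" = "pushgateway:9091"}]      // placeholder target
  forward_to = [prometheus.remote_write.default.receiver]  // placeholder receiver

  // Insert staleness markers when explicitly timestamped series disappear.
  track_timestamps_staleness = true
}
```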
Blocks
You can use the following blocks with prometheus.scrape:
The > symbol indicates deeper levels of nesting.
For example, oauth2 > tls_config refers to a tls_config block defined inside an oauth2 block.
authorization
credential and credentials_file are mutually exclusive, and only one can be provided inside an authorization block.
Warning: Using credentials_file causes the file to be read on every outgoing request. Use the local.file component with the credentials attribute instead to avoid unnecessary reads.
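For example, here's a sketch of the recommended pattern: the token is read through local.file and passed to the credentials attribute. The file path and receiver are placeholder assumptions.

```alloy
// Watch the token file instead of re-reading it on every request.
local.file "api_token" {
  filename  = "/var/lib/alloy/token"  // placeholder path
  is_secret = true
}

prometheus.scrape "authorized" {
  targets    = [{"__address__" = "app:8080"}]              // placeholder target
  forward_to = [prometheus.remote_write.default.receiver]  // placeholder receiver

  authorization {
    type        = "Bearer"
    credentials = local.file.api_token.content
  }
}
```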
basic_auth
password and password_file are mutually exclusive, and only one can be provided inside a basic_auth block.
Warning: Using password_file causes the file to be read on every outgoing request. Use the local.file component with the password attribute instead to avoid unnecessary reads.
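Likewise, a sketch that reads the password through local.file instead of password_file; the username, path, and receiver are placeholders.

```alloy
local.file "scrape_password" {
  filename  = "/var/lib/alloy/password"  // placeholder path
  is_secret = true
}

prometheus.scrape "basic_auth_example" {
  targets    = [{"__address__" = "app:8080"}]              // placeholder target
  forward_to = [prometheus.remote_write.default.receiver]  // placeholder receiver

  basic_auth {
    username = "scraper"  // placeholder username
    password = local.file.scrape_password.content
  }
}
```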
clustering
When Alloy is running in clustered mode and enabled is set to true, this prometheus.scrape component instance opts in to participating in the cluster to distribute the scrape load between all cluster nodes.
Clustering assumes that all cluster nodes are running with the same configuration file, have access to the same service discovery APIs, and that all prometheus.scrape components that have opted in to clustering converge, over the course of a scrape interval, on the same target set from upstream components in their targets argument.
All prometheus.scrape component instances that opt in to clustering use target labels and a consistent hashing algorithm to determine ownership of each target among the cluster peers.
Each peer then scrapes only the subset of targets it's responsible for, so that the scrape load is distributed.
When a node joins or leaves the cluster, every peer recalculates ownership and continues scraping with the new target set.
This performs better than hashmod sharding, where all targets have to be redistributed, because only 1/N of the target ownership is transferred; however, it's eventually consistent rather than fully consistent like hashmod sharding.
If Alloy is not running in clustered mode, then the block is a no-op and prometheus.scrape scrapes every target it receives in its arguments.
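Here's a minimal sketch of a component opting in to clustered scraping, assuming Alloy runs with clustering enabled and a discovery.kubernetes component labeled pods exists; both names are assumptions.

```alloy
prometheus.scrape "clustered" {
  targets    = discovery.kubernetes.pods.targets           // placeholder discovery component
  forward_to = [prometheus.remote_write.default.receiver]  // placeholder receiver

  // Distribute these targets across all cluster peers.
  clustering {
    enabled = true
  }
}
```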
oauth2
client_secret and client_secret_file are mutually exclusive, and only one can be provided inside an oauth2 block.
Warning: Using client_secret_file causes the file to be read on every outgoing request. Use the local.file component with the client_secret attribute instead to avoid unnecessary reads.
The oauth2 block may also contain a separate tls_config sub-block.
no_proxy can contain IPs, CIDR notations, and domain names. IPs and domain names can contain port numbers.
proxy_url must be configured if no_proxy is configured.
proxy_from_environment uses the environment variables HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or the lowercase versions thereof).
Requests use the proxy from the environment variable matching their scheme, unless excluded by NO_PROXY.
proxy_url and no_proxy must not be configured if proxy_from_environment is configured.
proxy_connect_header should only be configured if proxy_url or proxy_from_environment are configured.
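For illustration, here's a sketch of an oauth2 block with a nested tls_config, reading the client secret through local.file as recommended above. The token URL, client ID, and file paths are placeholder assumptions.

```alloy
local.file "oauth2_secret" {
  filename  = "/var/lib/alloy/oauth2-secret"  // placeholder path
  is_secret = true
}

prometheus.scrape "oauth2_example" {
  targets    = [{"__address__" = "app:8443"}]              // placeholder target
  forward_to = [prometheus.remote_write.default.receiver]  // placeholder receiver

  oauth2 {
    client_id     = "alloy-scraper"                          // placeholder client ID
    client_secret = local.file.oauth2_secret.content
    token_url     = "https://auth.example.com/oauth2/token"  // placeholder URL

    tls_config {
      ca_file = "/etc/ssl/certs/ca.crt"  // placeholder path
    }
  }
}
```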
tls_config
The following pairs of arguments are mutually exclusive and can’t both be set simultaneously:
- ca_pem and ca_file
- cert_pem and cert_file
- key_pem and key_file
When configuring client authentication, both the client certificate (using cert_pem or cert_file) and the client key (using key_pem or key_file) must be provided.
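For example, here's a sketch of a tls_config block that supplies a client certificate pair and pins the minimum TLS version to one of the values listed below; all paths are placeholder assumptions.

```alloy
prometheus.scrape "mutual_tls" {
  targets    = [{"__address__" = "secure-app:8443"}]       // placeholder target
  forward_to = [prometheus.remote_write.default.receiver]  // placeholder receiver
  scheme     = "https"

  tls_config {
    ca_file     = "/etc/ssl/certs/ca.crt"        // placeholder path
    cert_file   = "/etc/ssl/certs/client.crt"    // placeholder path
    key_file    = "/etc/ssl/private/client.key"  // placeholder path
    min_version = "TLS13"
  }
}
```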
When min_version isn’t provided, the minimum acceptable TLS version is inherited from Go’s default minimum version, TLS 1.2.
If min_version is provided, it must be set to one of the following strings:
- "TLS10" (TLS 1.0)
- "TLS11" (TLS 1.1)
- "TLS12" (TLS 1.2)
- "TLS13" (TLS 1.3)
Exported fields
prometheus.scrape doesn’t export any fields that can be referenced by other components.
Component health
prometheus.scrape is only reported as unhealthy if given an invalid configuration.
Debug information
prometheus.scrape reports the status of the last scrape for each configured scrape job on the component’s debug endpoint.
Debug metrics
- prometheus_fanout_latency (histogram): Write latency for sending to direct and indirect components.
- prometheus_forwarded_samples_total (counter): Total number of samples sent to downstream components.
- prometheus_scrape_targets_gauge (gauge): Number of targets this component is configured to scrape.
Scraping behavior
The prometheus.scrape component borrows the scraping behavior of Prometheus.
Prometheus, and by extension this component, uses a pull model for scraping metrics from a given set of targets.
Each scrape target is defined as a set of key-value pairs called labels.
The set of targets can either be static, or dynamically provided periodically by a service discovery component such as discovery.kubernetes.
The special label __address__ must always be present and corresponds to the <host>:<port> that’s used for the scrape request.
By default, the scrape job tries to scrape all available targets’ /metrics endpoints using HTTP, with a scrape interval of 1 minute and scrape timeout of 10 seconds.
The metrics path, protocol scheme, scrape interval and timeout, query parameters, as well as any other settings can be configured using the component’s arguments.
If a target is hosted at the in-memory traffic address specified by the run command, prometheus.scrape scrapes the metrics in-memory, bypassing the network.
The scrape job expects the metrics exposed by the endpoint to follow the OpenMetrics format.
All metrics are then propagated to each receiver listed in the component’s forward_to argument.
Labels coming from targets that start with a double underscore (__) are treated as internal and are removed prior to scraping.
The prometheus.scrape component regards a scrape as successful if it responded with an HTTP 200 OK status code and returned a body of valid metrics.
If the scrape request fails, the component’s debug UI section contains more detailed information about the failure, the last successful scrape, as well as the labels last used for scraping.
Labels such as job and instance are automatically injected into the scraped time series and can help pin down a scrape target.
Similarly, metrics that record the behavior of the scrape itself, such as up and scrape_duration_seconds, are also automatically available.
The up metric is particularly useful for monitoring and alerting on the health of a scrape job.
It’s set to 0 in case anything goes wrong with the scrape target, either because it’s not reachable, because the connection times out while scraping, or because the samples from the target couldn’t be processed.
When the target is behaving normally, the up metric is set to 1.
To enable scraping of Prometheus native histograms, scrape_protocols should specify PrometheusProto as the first protocol to negotiate, for example:

```alloy
prometheus.scrape "prometheus" {
  ...
  scrape_native_histograms = true
  scrape_protocols         = ["PrometheusProto", "OpenMetricsText1.0.0", "OpenMetricsText0.0.1", "PrometheusText0.0.4"]
}
```

The scrape_classic_histograms argument controls whether the component should also scrape the "classic" histogram equivalent of a native histogram, if it's present. It's equivalent to the always_scrape_classic_histograms argument in Prometheus v3.
Examples
Set up scrape jobs for blackbox exporter targets
The following example sets up a scrape job with certain attributes (scrape endpoint, scrape interval, query parameters) and lets it scrape two instances of the blackbox exporter. The exposed metrics are sent to the provided list of receivers, as defined by other components.
```alloy
prometheus.scrape "blackbox_scraper" {
  targets = [
    {"__address__" = "blackbox-exporter:9115", "instance" = "one"},
    {"__address__" = "blackbox-exporter:9116", "instance" = "two"},
  ]
  forward_to = [prometheus.remote_write.grafanacloud.receiver, prometheus.remote_write.onprem.receiver]

  scrape_interval = "10s"
  params          = { "target" = ["grafana.com"], "module" = ["http_2xx"] }
  metrics_path    = "/probe"
}
```

The endpoints that are being scraped every 10 seconds are:
- http://blackbox-exporter:9115/probe?target=grafana.com&module=http_2xx
- http://blackbox-exporter:9116/probe?target=grafana.com&module=http_2xx

Authentication with the Kubernetes API server
The following example shows you how to authenticate with the Kubernetes API server.
```alloy
prometheus.scrape "kubelet" {
  scheme = "https"

  tls_config {
    server_name          = "kubernetes"
    ca_file              = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
    insecure_skip_verify = false
  }

  bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
}
```

Technical details
prometheus.scrape supports gzip compression.
The following special labels can change the behavior of prometheus.scrape:
- __address__: The name of the label that holds the <host>:<port> address of a scrape target.
- __metrics_path__: The name of the label that holds the path on which to scrape a target.
- __param_<name>: A prefix for labels that provide URL parameters <name> used to scrape a target.
- __scheme__: The name of the label that holds the protocol scheme (http, https) on which to scrape a target.
- __scrape_interval__: The name of the label that holds the scrape interval used to scrape a target.
- __scrape_timeout__: The name of the label that holds the scrape timeout used to scrape a target.
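As an illustration, the following hypothetical target uses these labels to override the scrape path, scheme, and per-target interval; all values are placeholder assumptions.

```alloy
prometheus.scrape "custom_target" {
  targets = [{
    "__address__"         = "app:8443",        // placeholder host and port
    "__metrics_path__"    = "/admin/metrics",  // scrape this path instead of /metrics
    "__scheme__"          = "https",           // use HTTPS for this target
    "__scrape_interval__" = "30s"              // per-target interval override
  }]
  forward_to = [prometheus.remote_write.default.receiver]  // placeholder receiver
}
```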
Special labels added after a scrape
- __name__: The label name indicating the metric name of a time series.
- instance: The label name used for the instance label.
- job: The label name indicating the job from which a time series was scraped.
Compatible components
prometheus.scrape can accept arguments from the following components:
- Components that export Targets
- Components that export Prometheus MetricsReceiver
Note: Connecting some components may not be sensible, or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.