prometheus.scrape

prometheus.scrape configures a Prometheus scraping job for a given set of targets. The scraped metrics are forwarded to the list of receivers passed in forward_to.

Multiple prometheus.scrape components can be specified by giving them different labels.


prometheus.scrape "LABEL" {
  targets    = TARGET_LIST
  forward_to = RECEIVER_LIST
}

The component configures and starts a new scrape job to scrape all of the input targets. The list of arguments that can be used to configure the block is presented below.

The scrape job name defaults to the component’s unique identifier.

Any omitted fields take on their default values. If conflicting attributes are passed (for example, defining both a bearer token and a bearer token file, or configuring both basic authorization and OAuth2 at the same time), the component reports an error.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| targets | list(map(string)) | List of targets to scrape. | | yes |
| forward_to | list(MetricsReceiver) | List of receivers to send scraped metrics to. | | yes |
| job_name | string | The job name to override the job label with. | component name | no |
| extra_metrics | bool | Whether extra metrics should be generated for scrape targets. | false | no |
| honor_labels | bool | Indicator whether the scraped metrics should remain unmodified. | false | no |
| honor_timestamps | bool | Indicator whether the scraped timestamps should be respected. | true | no |
| params | map(list(string)) | A set of query parameters with which the target is scraped. | | no |
| scrape_interval | duration | How frequently to scrape the targets of this scrape config. | "60s" | no |
| scrape_timeout | duration | The timeout for scraping targets of this config. | "10s" | no |
| metrics_path | string | The HTTP resource path on which to fetch metrics from targets. | /metrics | no |
| scheme | string | The URL scheme with which to fetch metrics from targets. | | no |
| body_size_limit | int | An uncompressed response body larger than this many bytes causes the scrape to fail. 0 means no limit. | | no |
| sample_limit | uint | More than this many samples post metric-relabeling causes the scrape to fail. | | no |
| target_limit | uint | More than this many targets after target relabeling causes the scrapes to fail. | | no |
| label_limit | uint | More than this many labels post metric-relabeling causes the scrape to fail. | | no |
| label_name_length_limit | uint | A label name longer than this length post metric-relabeling causes the scrape to fail. | | no |
| label_value_length_limit | uint | A label value longer than this length post metric-relabeling causes the scrape to fail. | | no |


The following blocks are supported inside the definition of prometheus.scrape:

| Hierarchy | Block | Description |
| --------- | ----- | ----------- |
| http_client_config | http_client_config | HTTP client settings when connecting to targets. |
| http_client_config > basic_auth | basic_auth | Configure basic_auth for authenticating to targets. |
| http_client_config > authorization | authorization | Configure generic authorization to targets. |
| http_client_config > oauth2 | oauth2 | Configure OAuth2 for authenticating to targets. |
| http_client_config > oauth2 > tls_config | tls_config | Configure TLS settings for connecting to targets via OAuth2. |
| http_client_config > tls_config | tls_config | Configure TLS settings for connecting to targets. |

The > symbol indicates deeper levels of nesting. For example, http_client_config > basic_auth refers to a basic_auth block defined inside an http_client_config block.

http_client_config block

The http_client_config block configures settings used to connect to the scrape targets.

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| bearer_token | secret | Bearer token to authenticate with. | | no |
| bearer_token_file | string | File containing a bearer token to authenticate with. | | no |
| proxy_url | string | HTTP proxy to proxy requests through. | | no |
| follow_redirects | bool | Whether redirects returned by the server should be followed. | true | no |
| enable_http_2 | bool | Whether HTTP2 is supported for requests. | true | no |

bearer_token, bearer_token_file, basic_auth, authorization, and oauth2 are mutually exclusive, and only one can be provided inside an http_client_config block.
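For instance, a scrape job can authenticate with a bearer token read from a file; the token path below is illustrative:

```river
prometheus.scrape "secured" {
  targets    = TARGET_LIST
  forward_to = RECEIVER_LIST

  http_client_config {
    // Exactly one authentication mechanism may be set;
    // here the bearer token is loaded from a file.
    bearer_token_file = "/var/run/secrets/scrape_token"
  }
}
```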

basic_auth block

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| username | string | Basic auth username. | | no |
| password | secret | Basic auth password. | | no |
| password_file | string | File containing the basic auth password. | | no |

password and password_file are mutually exclusive, and only one can be provided inside a basic_auth block.
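For example, a basic_auth block that keeps the password out of the configuration file (the file path is illustrative):

```river
http_client_config {
  basic_auth {
    username      = "scrape_user"
    // Mutually exclusive with the inline password attribute.
    password_file = "/etc/secrets/scrape_password"
  }
}
```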

authorization block

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| type | string | Authorization type, for example, "Bearer". | | no |
| credentials | secret | Secret to authenticate with. | | no |
| credentials_file | string | File containing the secret to authenticate with. | | no |

credentials and credentials_file are mutually exclusive, and only one can be provided inside an authorization block.
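For example, an authorization block that reads the secret from a file (the path is illustrative):

```river
http_client_config {
  authorization {
    type             = "Bearer"
    // Mutually exclusive with the inline credentials attribute.
    credentials_file = "/etc/secrets/api_token"
  }
}
```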

oauth2 block

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| client_id | string | OAuth2 client ID. | | no |
| client_secret | secret | OAuth2 client secret. | | no |
| client_secret_file | string | File containing the OAuth2 client secret. | | no |
| scopes | list(string) | List of scopes to authenticate with. | | no |
| token_url | string | URL to fetch the token from. | | no |
| endpoint_params | map(string) | Optional parameters to append to the token URL. | | no |
| proxy_url | string | Optional proxy URL for OAuth2 requests. | | no |

client_secret and client_secret_file are mutually exclusive, and only one can be provided inside an oauth2 block.

The oauth2 block may also contain its own separate tls_config sub-block.
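As a sketch, an oauth2 block with its own tls_config sub-block might look like the following; the URLs, scope, and file paths are illustrative:

```river
http_client_config {
  oauth2 {
    client_id          = "agent"
    client_secret_file = "/etc/secrets/oauth_client_secret"
    token_url          = "https://auth.example.com/oauth2/token"
    scopes             = ["read:metrics"]

    // TLS settings used when connecting to the token URL.
    tls_config {
      ca_file = "/etc/ssl/certs/ca.pem"
    }
  }
}
```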

tls_config block

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| ca_file | string | CA certificate to validate the server with. | | no |
| cert_file | string | Certificate file for client authentication. | | no |
| key_file | string | Key file for client authentication. | | no |
| server_name | string | ServerName extension to indicate the name of the server. | | no |
| insecure_skip_verify | bool | Disables validation of the server certificate. | | no |
| min_version | string | Minimum acceptable TLS version. | | no |

When min_version is not provided, the minimum acceptable TLS version is inherited from Go’s default minimum version, TLS 1.2. If min_version is provided, it must be set to one of the following strings:

  • "TLS10" (TLS 1.0)
  • "TLS11" (TLS 1.1)
  • "TLS12" (TLS 1.2)
  • "TLS13" (TLS 1.3)
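For example, a tls_config block that pins the minimum TLS version to 1.3 (the certificate path and server name are illustrative):

```river
http_client_config {
  tls_config {
    ca_file     = "/etc/ssl/certs/ca.pem"
    server_name = "metrics.example.com"
    // Reject anything older than TLS 1.3.
    min_version = "TLS13"
  }
}
```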

Exported fields

prometheus.scrape does not export any fields that can be referenced by other components.

Component health

prometheus.scrape is only reported as unhealthy if given an invalid configuration.

Debug information

prometheus.scrape reports the status of the last scrape for each configured scrape job on the component’s debug endpoint.

Debug metrics

prometheus.scrape does not expose any component-specific debug metrics.

Scraping behavior

The prometheus.scrape component borrows the scraping behavior of Prometheus. Prometheus, and by extension this component, uses a pull model for scraping metrics from a given set of targets. Each scrape target is defined as a set of key-value pairs called labels. The set of targets can either be static, or dynamically provided periodically by a service discovery component such as discovery.kubernetes. The special label __address__ must always be present and corresponds to the <host>:<port> that is used for the scrape request.
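For example, the targets can come from a discovery component rather than a static list. A minimal sketch, where the discovery.kubernetes label and the remote_write component name are illustrative:

```river
// Discover all pods in the cluster.
discovery.kubernetes "pods" {
  role = "pod"
}

prometheus.scrape "k8s_pods" {
  // Each discovered target carries its own __address__ label.
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}
```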

By default, the scrape job tries to scrape all available targets’ /metrics endpoints using HTTP, with a scrape interval of 1 minute and scrape timeout of 10 seconds. The metrics path, protocol scheme, scrape interval and timeout, query parameters, as well as any other settings can be configured using the component’s arguments.
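A component that relies entirely on these defaults only needs the two required arguments; this sketch assumes a receiver named prometheus.remote_write.default is defined elsewhere:

```river
prometheus.scrape "defaults" {
  // Scrapes http://app:8080/metrics every 60 seconds with a 10 second timeout.
  targets    = [{"__address__" = "app:8080"}]
  forward_to = [prometheus.remote_write.default.receiver]
}
```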

The scrape job expects the metrics exposed by the endpoint to follow the OpenMetrics format. All metrics are then propagated to each receiver listed in the component’s forward_to argument.

Target labels that start with a double underscore (__) are treated as internal and are removed prior to scraping.

The prometheus.scrape component regards a scrape as successful if the target responds with an HTTP 200 OK status code and returns a body of valid metrics.

If the scrape request fails, the component’s debug UI section contains more detailed information about the failure, the last successful scrape, as well as the labels last used for scraping.

The following labels are automatically injected into the scraped time series and can help pin down a scrape target.

| Label | Description |
| ----- | ----------- |
| job | The configured job name that the target belongs to. Defaults to the fully formed component name. |
| instance | The __address__ or <host>:<port> of the scrape target's URL. |

Similarly, the following metrics that record the behavior of the scrape targets are also automatically available.

| Metric Name | Description |
| ----------- | ----------- |
| up | 1 if the instance is healthy and reachable, or 0 if the scrape failed. |
| scrape_duration_seconds | Duration of the scrape in seconds. |
| scrape_samples_scraped | The number of samples the target exposed. |
| scrape_samples_post_metric_relabeling | The number of samples remaining after metric relabeling was applied. |
| scrape_series_added | The approximate number of new series in this scrape. |
| scrape_timeout_seconds | The configured scrape timeout for a target. Useful for measuring how close a target was to timing out using scrape_duration_seconds / scrape_timeout_seconds. |
| scrape_sample_limit | The configured sample limit for a target. Useful for measuring how close a target was to reaching the sample limit using scrape_samples_post_metric_relabeling / scrape_sample_limit (when scrape_sample_limit > 0). |
| scrape_body_size_bytes | The uncompressed size of the most recent scrape response, if successful. Scrapes failing because the body_size_limit is exceeded report -1; other scrape failures report 0. |

The up metric is particularly useful for monitoring and alerting on the health of a scrape job. It is set to 0 in case anything goes wrong with the scrape target, either because it is not reachable, because the connection times out while scraping, or because the samples from the target could not be processed. When the target is behaving normally, the up metric is set to 1.


The following example sets up the scrape job with certain attributes (scrape endpoint, scrape interval, query parameters) and lets it scrape two instances of the blackbox exporter. The exposed metrics are sent over to the provided list of receivers, as defined by other components.

prometheus.scrape "blackbox_scraper" {
  targets = [
    {"__address__" = "blackbox-exporter:9115", "instance" = "one"},
    {"__address__" = "blackbox-exporter:9116", "instance" = "two"},
  ]

  forward_to = [prometheus.remote_write.grafanacloud.receiver, prometheus.remote_write.onprem.receiver]

  scrape_interval = "10s"
  params          = { "target" = [""], "module" = ["http_2xx"] }
  metrics_path    = "/probe"
}
Based on this configuration, these are the endpoints that are scraped every 10 seconds:

http://blackbox-exporter:9115/probe?module=http_2xx&target=
http://blackbox-exporter:9116/probe?module=http_2xx&target=