
Caution

Grafana Alloy is the new name for our distribution of the OTel collector. Grafana Agent has been deprecated and is in Long-Term Support (LTS) through October 31, 2025. Grafana Agent will reach End-of-Life (EOL) on November 1, 2025. Read more about why we recommend migrating to Grafana Alloy.

Important: This documentation is about an older version. It's relevant only to the release noted; many of the features and functions have been updated or replaced. Please view the current version.


prometheus.operator.podmonitors

BETA: This is a beta component. Beta components are subject to breaking changes, and may be replaced with equivalent functionality that covers the same use case.

prometheus.operator.podmonitors discovers PodMonitor resources in your Kubernetes cluster and scrapes the targets they reference. This component performs three main functions:

  1. Discover PodMonitor resources from your Kubernetes cluster.
  2. Discover Pods in your cluster that match those PodMonitors.
  3. Scrape metrics from those Pods, and forward them to a receiver.

The default configuration assumes Grafana Agent Flow is running inside a Kubernetes cluster, and uses the in-cluster configuration to access the Kubernetes API. It can be run from outside the cluster by supplying connection info in the client block, but network-level access to Pods is required to scrape metrics from them.

PodMonitors may reference secrets for authenticating to targets to scrape them. In these cases, the secrets are loaded and refreshed only when the PodMonitor is updated or when this component refreshes its internal state, which happens on a 5-minute refresh cycle.

Usage

river
prometheus.operator.podmonitors "LABEL" {
    forward_to = RECEIVER_LIST
}

Arguments

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| forward_to | list(MetricsReceiver) | List of receivers to send scraped metrics to. | | yes |
| namespaces | list(string) | List of namespaces to search for PodMonitor resources. If not specified, all namespaces will be searched. | | no |

Blocks

The following blocks are supported inside the definition of prometheus.operator.podmonitors:

| Hierarchy | Block | Description | Required |
| --------- | ----- | ----------- | -------- |
| client | client | Configures Kubernetes client used to find PodMonitors. | no |
| client > basic_auth | basic_auth | Configure basic authentication to the Kubernetes API. | no |
| client > authorization | authorization | Configure generic authorization to the Kubernetes API. | no |
| client > oauth2 | oauth2 | Configure OAuth2 for authenticating to the Kubernetes API. | no |
| client > oauth2 > tls_config | tls_config | Configure TLS settings for connecting to the Kubernetes API. | no |
| client > tls_config | tls_config | Configure TLS settings for connecting to the Kubernetes API. | no |
| rule | rule | Relabeling rules to apply to discovered targets. | no |
| scrape | scrape | Default scrape configuration to apply to discovered targets. | no |
| selector | selector | Label selector for which PodMonitors to discover. | no |
| selector > match_expression | match_expression | Label selector expression for which PodMonitors to discover. | no |
| clustering | clustering | Configure the component for when Grafana Agent is running in clustered mode. | no |

The > symbol indicates deeper levels of nesting. For example, client > basic_auth refers to a basic_auth block defined inside a client block.

client block

The client block configures the Kubernetes client used to discover PodMonitors. If the client block isn’t provided, the default in-cluster configuration with the service account of the running Grafana Agent pod is used.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| api_server | string | URL of the Kubernetes API server. | | no |
| kubeconfig_file | string | Path of the kubeconfig file to use for connecting to Kubernetes. | | no |
| bearer_token_file | string | File containing a bearer token to authenticate with. | | no |
| bearer_token | secret | Bearer token to authenticate with. | | no |
| enable_http2 | bool | Whether HTTP2 is supported for requests. | true | no |
| follow_redirects | bool | Whether redirects returned by the server should be followed. | true | no |
| proxy_url | string | HTTP proxy to send requests through. | | no |
| no_proxy | string | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
| proxy_from_environment | bool | Use the proxy URL indicated by environment variables. | false | no |
| proxy_connect_header | map(list(secret)) | Specifies headers to send to proxies during CONNECT requests. | | no |

At most, one of the following authentication mechanisms can be provided:

  • bearer_token argument
  • bearer_token_file argument
  • basic_auth block
  • authorization block
  • oauth2 block

no_proxy can contain IPs, CIDR notations, and domain names. IP and domain names can contain port numbers. proxy_url must be configured if no_proxy is configured.

proxy_from_environment uses the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof). Requests use the proxy from the environment variable matching their scheme, unless excluded by NO_PROXY. proxy_url and no_proxy must not be configured if proxy_from_environment is configured.

proxy_connect_header should only be configured if proxy_url or proxy_from_environment are configured.
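
For example, a minimal sketch of connecting to the Kubernetes API from outside the cluster. The API server URL and token path below are illustrative placeholders, not values from this documentation; replace them with your own environment's details.

river
prometheus.operator.podmonitors "remote" {
    forward_to = RECEIVER_LIST

    client {
        // Hypothetical connection details for an out-of-cluster Agent.
        api_server        = "https://k8s.example.com:6443"
        bearer_token_file = "/etc/agent/k8s-token"
    }
}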

basic_auth block

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| password_file | string | File containing the basic auth password. | | no |
| password | secret | Basic auth password. | | no |
| username | string | Basic auth username. | | no |

password and password_file are mutually exclusive, and only one can be provided inside a basic_auth block.
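
A minimal sketch of basic authentication, placed inside the component's client block. The username and password file path are placeholders.

river
client {
    // Hypothetical credentials; replace with your own.
    basic_auth {
        username      = "agent-user"
        password_file = "/etc/agent/k8s-password"
    }
}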

authorization block

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| credentials_file | string | File containing the secret value. | | no |
| credentials | secret | Secret value. | | no |
| type | string | Authorization type, for example, "Bearer". | | no |

credentials and credentials_file are mutually exclusive, and only one can be provided inside an authorization block.
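
A minimal sketch of bearer-token authorization read from a file, placed inside the component's client block. The file path is a placeholder.

river
client {
    authorization {
        type             = "Bearer"
        // Hypothetical token location; point this at your own credentials file.
        credentials_file = "/etc/agent/k8s-token"
    }
}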

oauth2 block

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| client_id | string | OAuth2 client ID. | | no |
| client_secret_file | string | File containing the OAuth2 client secret. | | no |
| client_secret | secret | OAuth2 client secret. | | no |
| endpoint_params | map(string) | Optional parameters to append to the token URL. | | no |
| proxy_url | string | HTTP proxy to send requests through. | | no |
| no_proxy | string | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
| proxy_from_environment | bool | Use the proxy URL indicated by environment variables. | false | no |
| proxy_connect_header | map(list(secret)) | Specifies headers to send to proxies during CONNECT requests. | | no |
| scopes | list(string) | List of scopes to authenticate with. | | no |
| token_url | string | URL to fetch the token from. | | no |

client_secret and client_secret_file are mutually exclusive, and only one can be provided inside an oauth2 block.

The oauth2 block may also contain a separate tls_config sub-block.

no_proxy can contain IPs, CIDR notations, and domain names. IP and domain names can contain port numbers. proxy_url must be configured if no_proxy is configured.

proxy_from_environment uses the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof). Requests use the proxy from the environment variable matching their scheme, unless excluded by NO_PROXY. proxy_url and no_proxy must not be configured if proxy_from_environment is configured.

proxy_connect_header should only be configured if proxy_url or proxy_from_environment are configured.
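
A minimal sketch of OAuth2 authentication, placed inside the component's client block. The client ID, secret file, token URL, scope, and CA path are placeholders for your own identity provider.

river
client {
    oauth2 {
        // Hypothetical OAuth2 settings; substitute your provider's values.
        client_id          = "grafana-agent"
        client_secret_file = "/etc/agent/oauth2-secret"
        token_url          = "https://auth.example.com/oauth2/token"
        scopes             = ["openid"]

        // Optional TLS settings for the token endpoint.
        tls_config {
            ca_file = "/etc/agent/auth-ca.crt"
        }
    }
}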

tls_config block

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| ca_pem | string | CA PEM-encoded text to validate the server with. | | no |
| ca_file | string | CA certificate to validate the server with. | | no |
| cert_pem | string | Certificate PEM-encoded text for client authentication. | | no |
| cert_file | string | Certificate file for client authentication. | | no |
| insecure_skip_verify | bool | Disables validation of the server certificate. | | no |
| key_file | string | Key file for client authentication. | | no |
| key_pem | secret | Key PEM-encoded text for client authentication. | | no |
| min_version | string | Minimum acceptable TLS version. | | no |
| server_name | string | ServerName extension to indicate the name of the server. | | no |

The following pairs of arguments are mutually exclusive and can’t both be set simultaneously:

  • ca_pem and ca_file
  • cert_pem and cert_file
  • key_pem and key_file

When configuring client authentication, both the client certificate (using cert_pem or cert_file) and the client key (using key_pem or key_file) must be provided.

When min_version is not provided, the minimum acceptable TLS version is inherited from Go’s default minimum version, TLS 1.2. If min_version is provided, it must be set to one of the following strings:

  • "TLS10" (TLS 1.0)
  • "TLS11" (TLS 1.1)
  • "TLS12" (TLS 1.2)
  • "TLS13" (TLS 1.3)

rule block

The rule block contains the definition of any relabeling rules that can be applied to an input metric. If more than one rule block is defined, the transformations are applied in top-down order.

The following arguments can be used to configure a rule. All arguments are optional. Omitted fields take their default values.

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| action | string | The relabeling action to perform. | replace | no |
| modulus | uint | A positive integer used to calculate the modulus of the hashed source label values. | | no |
| regex | string | A valid RE2 expression with support for parenthesized capture groups. Used to match the extracted value from the combination of the source_label and separator fields or filter labels during the labelkeep/labeldrop/labelmap actions. | (.*) | no |
| replacement | string | The value against which a regular expression replace is performed, if the regular expression matches the extracted value. Supports previously captured groups. | "$1" | no |
| separator | string | The separator used to concatenate the values present in source_labels. | ; | no |
| source_labels | list(string) | The list of labels whose values are to be selected. Their content is concatenated using the separator and matched against regex. | | no |
| target_label | string | Label to which the resulting value is written. | | no |

You can use the following actions:

  • drop - Drops metrics where regex matches the string extracted using the source_labels and separator.
  • dropequal - Drop targets for which the concatenated source_labels do match target_label.
  • hashmod - Hashes the concatenated labels, calculates its modulo modulus and writes the result to the target_label.
  • keep - Keeps metrics where regex matches the string extracted using the source_labels and separator.
  • keepequal - Drop targets for which the concatenated source_labels do not match target_label.
  • labeldrop - Matches regex against all label names. Any labels that match are removed from the metric’s label set.
  • labelkeep - Matches regex against all label names. Any labels that don’t match are removed from the metric’s label set.
  • labelmap - Matches regex against all label names. Any labels that match are renamed according to the contents of the replacement field.
  • lowercase - Sets target_label to the lowercase form of the concatenated source_labels.
  • replace - Matches regex to the concatenated labels. If there’s a match, it replaces the content of the target_label using the contents of the replacement field.
  • uppercase - Sets target_label to the uppercase form of the concatenated source_labels.

Note

The regular expression capture groups can be referred to using either the $CAPTURE_GROUP_NUMBER or ${CAPTURE_GROUP_NUMBER} notation.
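
For example, a minimal sketch of a replace rule that copies the discovered pod name into a target label using a capture group. The pod target label name is illustrative.

river
rule {
    action        = "replace"
    source_labels = ["__meta_kubernetes_pod_name"]
    regex         = "(.+)"
    // ${1} refers to the first capture group from regex.
    replacement   = "${1}"
    target_label  = "pod"
}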

scrape block

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| default_scrape_interval | duration | The default interval between scraping targets. Used as the default if the target resource doesn't provide a scrape interval. | 1m | no |
| default_scrape_timeout | duration | The default timeout for scrape requests. Used as the default if the target resource doesn't provide a scrape timeout. | 10s | no |
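
For example, a minimal sketch that changes the fallback cadence used when a PodMonitor doesn't set its own interval or timeout; the values shown are illustrative.

river
scrape {
    default_scrape_interval = "30s"
    default_scrape_timeout  = "5s"
}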

selector block

The selector block describes a Kubernetes label selector for PodMonitors.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| match_labels | map(string) | Label keys and values used to discover resources. | {} | no |

When the match_labels argument is empty, all PodMonitor resources will be matched.
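
For example, a minimal sketch that only discovers PodMonitors carrying a hypothetical team label, placed inside the component definition:

river
selector {
    match_labels = { team = "ops" }
}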

match_expression block

The match_expression block describes a Kubernetes label matcher expression for PodMonitors discovery.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| key | string | The label name to match against. | | yes |
| operator | string | The operator to use when matching. | | yes |
| values | list(string) | The values used when matching. | | no |

The operator argument must be one of the following strings:

  • "In"
  • "NotIn"
  • "Exists"
  • "DoesNotExist"

If there are multiple match_expression blocks inside a selector block, they are combined together with AND clauses.

clustering (beta)

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| enabled | bool | Enables sharing targets with other cluster nodes. | false | yes |

When Grafana Agent is using clustering and enabled is set to true, this component instance opts in to participating in the cluster to distribute the scrape load between all cluster nodes.

Clustering assumes that all cluster nodes are running with the same configuration file, and that all prometheus.operator.podmonitors components that have opted in to clustering have the same configuration over the course of a scrape interval.

All prometheus.operator.podmonitors component instances that opt in to clustering use target labels and a consistent hashing algorithm to determine ownership of each target among the cluster peers. Each peer then scrapes only the subset of targets it is responsible for, so the scrape load is distributed. When a node joins or leaves the cluster, every peer recalculates ownership and continues scraping with the new target set. This performs better than hashmod sharding, where nearly all targets have to be redistributed, as only 1/N of the targets' ownership is transferred, but it is eventually consistent (rather than fully consistent like hashmod sharding is).

If Grafana Agent is not running in clustered mode, then the block is a no-op, and prometheus.operator.podmonitors scrapes every target it receives in its arguments.
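
For example, a minimal sketch that opts the component into clustering; the receiver reference is illustrative and matches the examples further below.

river
prometheus.operator.podmonitors "pods" {
    forward_to = [prometheus.remote_write.staging.receiver]

    // Distribute scrape load across cluster peers when the Agent runs in clustered mode.
    clustering {
        enabled = true
    }
}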

Exported fields

prometheus.operator.podmonitors does not export any fields. It forwards all metrics it scrapes to the receivers configured with the forward_to argument.

Component health

prometheus.operator.podmonitors is reported as unhealthy when given an invalid configuration, when Prometheus components fail to initialize, or when the connection to the Kubernetes API could not be established properly.

Debug information

prometheus.operator.podmonitors reports the status of the last scrape for each configured scrape job on the component's debug endpoint, including discovered labels and the last scrape time.

It also exposes some debug information for each PodMonitor it has discovered, including any errors found while reconciling the scrape configuration from the PodMonitor.

Debug metrics

prometheus.operator.podmonitors does not expose any component-specific debug metrics.

Example

This example discovers all PodMonitors in your cluster, and forwards collected metrics to a prometheus.remote_write component.

river
prometheus.remote_write "staging" {
  // Send metrics to a locally running Mimir.
  endpoint {
    url = "http://mimir:9009/api/v1/push"

    basic_auth {
      username = "example-user"
      password = "example-password"
    }
  }
}

prometheus.operator.podmonitors "pods" {
    forward_to = [prometheus.remote_write.staging.receiver]
}

This example limits discovered PodMonitors to those with the label team=ops in a specific namespace: my-app.

river
prometheus.operator.podmonitors "pods" {
    forward_to = [prometheus.remote_write.staging.receiver]
    namespaces = ["my-app"]
    selector {
        match_expression {
            key = "team"
            operator = "In"
            values = ["ops"]
        }
    }
}

This example applies additional relabeling rules to discovered targets to filter by hostname. This may be useful when running Grafana Agent as a DaemonSet.

river
prometheus.operator.podmonitors "pods" {
    forward_to = [prometheus.remote_write.staging.receiver]
    rule {
      action = "keep"
      regex = env("HOSTNAME")
      source_labels = ["__meta_kubernetes_pod_node_name"]
    }
}

Compatible components

prometheus.operator.podmonitors can accept arguments from the following components:

  • Components that export Prometheus MetricsReceiver

Note

Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.