prometheus.operator.probes
prometheus.operator.probes discovers Probe resources in your Kubernetes cluster and scrapes the targets they reference.
This component performs three main functions:
- Discover Probe resources from your Kubernetes cluster.
- Discover targets or ingresses that match those Probes.
- Scrape metrics from those endpoints, and forward them to a receiver.
The default configuration assumes Alloy is running inside a Kubernetes cluster, and uses the in-cluster configuration to access the Kubernetes API.
You can run it from outside the cluster by supplying connection information in the client block, but network-level access to pods is required to scrape metrics from them.
Probes may reference secrets for authenticating to targets to scrape them. In these cases, the secrets are loaded and refreshed only when the Probe is updated or when this component refreshes its internal state, which happens on a 5-minute refresh cycle.
Usage
prometheus.operator.probes "LABEL" {
forward_to = RECEIVER_LIST
}
Arguments
The following arguments are supported:
Blocks
The following blocks are supported inside the definition of prometheus.operator.probes:
The > symbol indicates deeper levels of nesting. For example, client > basic_auth refers to a basic_auth block defined inside a client block.
client block
The client block configures the Kubernetes client used to discover Probes. If the client block isn't provided, the default in-cluster configuration with the service account of the running Alloy pod is used.
The following arguments are supported:
At most, one of the following can be provided:
- no_proxy can contain IPs, CIDR notations, and domain names. IP and domain names can contain port numbers.
- proxy_url must be configured if no_proxy is configured.
- proxy_from_environment uses the environment variables HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or the lowercase versions thereof). Requests use the proxy from the environment variable matching their scheme, unless excluded by NO_PROXY.
- proxy_url and no_proxy must not be configured if proxy_from_environment is configured.
- proxy_connect_header should only be configured if proxy_url or proxy_from_environment are configured.
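For example, a minimal sketch of running against a cluster from the outside could look like the following. The API server URL, username, and password file path are placeholders, not defaults.

prometheus.operator.probes "external" {
  forward_to = RECEIVER_LIST

  client {
    // Placeholder API server address for an out-of-cluster setup.
    api_server = "https://kubernetes.example.com:6443"

    basic_auth {
      username      = "probe-scraper"
      password_file = "/var/run/secrets/kube-password"
    }
  }
}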
basic_auth block
password and password_file are mutually exclusive, and only one can be provided inside a basic_auth block.
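For example, a client block could authenticate with a file-based password as in the following sketch; the username and file path are placeholders.

client {
  basic_auth {
    username      = "probe-scraper"
    password_file = "/var/run/secrets/probe-password"
  }
}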
authorization block
credentials and credentials_file are mutually exclusive, and only one can be provided inside an authorization block.
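For example, a bearer-token setup could be sketched as follows; the token file path is a placeholder.

client {
  authorization {
    type             = "Bearer"
    credentials_file = "/var/run/secrets/probe-token"
  }
}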
oauth2 block
client_secret and client_secret_file are mutually exclusive, and only one can be provided inside an oauth2 block.
The oauth2 block may also contain a separate tls_config sub-block.
- no_proxy can contain IPs, CIDR notations, and domain names. IP and domain names can contain port numbers.
- proxy_url must be configured if no_proxy is configured.
- proxy_from_environment uses the environment variables HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or the lowercase versions thereof). Requests use the proxy from the environment variable matching their scheme, unless excluded by NO_PROXY.
- proxy_url and no_proxy must not be configured if proxy_from_environment is configured.
- proxy_connect_header should only be configured if proxy_url or proxy_from_environment are configured.
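As an illustration, an oauth2 block with a nested tls_config could be sketched like this; the client ID, token URL, and file paths are placeholders.

client {
  oauth2 {
    client_id          = "alloy"
    client_secret_file = "/var/run/secrets/oauth-client-secret"
    token_url          = "https://auth.example.com/oauth2/token"

    tls_config {
      ca_file = "/var/run/secrets/oauth-ca.crt"
    }
  }
}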
tls_config block
The following pairs of arguments are mutually exclusive and can't both be set simultaneously:
- ca_pem and ca_file
- cert_pem and cert_file
- key_pem and key_file
When configuring client authentication, both the client certificate (using cert_pem or cert_file) and the client key (using key_pem or key_file) must be provided.
When min_version isn't provided, the minimum acceptable TLS version is inherited from Go's default minimum version, TLS 1.2.
If min_version is provided, it must be set to one of the following strings:
- "TLS10" (TLS 1.0)
- "TLS11" (TLS 1.1)
- "TLS12" (TLS 1.2)
- "TLS13" (TLS 1.3)
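For example, a tls_config block using file-based client authentication could look like the following sketch; the certificate and key paths are placeholders.

client {
  tls_config {
    ca_file     = "/var/run/secrets/kube-ca.crt"
    cert_file   = "/var/run/secrets/client.crt"
    key_file    = "/var/run/secrets/client.key"
    min_version = "TLS12"
  }
}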
rule block
The rule block contains the definition of any relabeling rules that can be applied to an input metric. If more than one rule block is defined, the transformations are applied in top-down order.
The following arguments can be used to configure a rule. All arguments are optional. Omitted fields take their default values.
You can use the following actions:
- drop - Drops metrics where regex matches the string extracted using the source_labels and separator.
- dropequal - Drop targets for which the concatenated source_labels do match target_label.
- hashmod - Hashes the concatenated labels, calculates its modulo modulus and writes the result to the target_label.
- keep - Keeps metrics where regex matches the string extracted using the source_labels and separator.
- keepequal - Drop targets for which the concatenated source_labels don't match target_label.
- labeldrop - Matches regex against all label names. Any labels that match are removed from the metric's label set.
- labelkeep - Matches regex against all label names. Any labels that don't match are removed from the metric's label set.
- labelmap - Matches regex against all label names. Any labels that match are renamed according to the contents of the replacement field.
- lowercase - Sets target_label to the lowercase form of the concatenated source_labels.
- replace - Matches regex to the concatenated labels. If there's a match, it replaces the content of the target_label using the contents of the replacement field.
- uppercase - Sets target_label to the uppercase form of the concatenated source_labels.
Note
The regular expression capture groups can be referred to using either the $CAPTURE_GROUP_NUMBER or ${CAPTURE_GROUP_NUMBER} notation.
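As an illustration, a replace rule that copies a discovered meta label into a new label using a capture group could be sketched as follows; the source label and target label names are only examples.

rule {
  action        = "replace"
  source_labels = ["__meta_kubernetes_namespace"]
  regex         = "(.+)"
  replacement   = "$1"
  target_label  = "k8s_namespace"
}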
scrape block
selector block
The selector block describes a Kubernetes label selector for Probes.
The following arguments are supported:
When the match_labels argument is empty, all Probe resources are matched.
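For example, a selector that matches Probes by a fixed label could be sketched as follows; the label name and value are placeholders.

selector {
  match_labels = { team = "ops" }
}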
match_expression block
The match_expression block describes a Kubernetes label matcher expression for Probes discovery.
The following arguments are supported:
The operator argument must be one of the following strings:
"In"
"NotIn"
"Exists"
"DoesNotExist"
If there are multiple match_expression blocks inside a selector block, they are combined with AND clauses.
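For example, a selector that requires a team label to exist and excludes Probes labeled stage=dev could be sketched like this; the label keys and values are placeholders.

selector {
  match_expression {
    key      = "team"
    operator = "Exists"
  }

  match_expression {
    key      = "stage"
    operator = "NotIn"
    values   = ["dev"]
  }
}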
clustering block
When Alloy is running in clustered mode, and enabled is set to true, then this component instance opts in to participating in the cluster to distribute the scrape load between all cluster nodes.
Clustering assumes that all cluster nodes are running with the same configuration file, and that all prometheus.operator.probes components that have opted in to clustering have the same configuration over the course of a scrape interval.
All prometheus.operator.probes component instances opting in to clustering use target labels and a consistent hashing algorithm to determine ownership for each of the targets between the cluster peers. Then, each peer only scrapes the subset of targets that it's responsible for, so that the scrape load is distributed.
When a node joins or leaves the cluster, every peer recalculates ownership and continues scraping with the new target set. This performs better than hashmod sharding, where all targets have to be redistributed, as only 1/N of the targets' ownership is transferred, but it's eventually consistent (rather than fully consistent like hashmod sharding).
If Alloy isn't running in clustered mode, then the block is a no-op, and prometheus.operator.probes scrapes every target it receives in its arguments.
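For example, opting a component instance in to clustering is a matter of setting enabled to true inside the clustering block:

prometheus.operator.probes "clustered" {
  forward_to = RECEIVER_LIST

  clustering {
    enabled = true
  }
}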
Exported fields
prometheus.operator.probes does not export any fields. It forwards all metrics it scrapes to the receivers configured with the forward_to argument.
Component health
prometheus.operator.probes is reported as unhealthy when given an invalid configuration, when Prometheus components fail to initialize, or when the connection to the Kubernetes API can't be established properly.
Debug information
prometheus.operator.probes reports the status of the last scrape for each configured scrape job on the component's debug endpoint, including discovered labels and the last scrape time.
It also exposes some debug information for each Probe it has discovered, including any errors found while reconciling the scrape configuration from the Probe.
Debug metrics
prometheus.operator.probes does not expose any component-specific debug metrics.
Example
This example discovers all Probes in your cluster, and forwards collected metrics to a prometheus.remote_write component.
prometheus.remote_write "staging" {
// Send metrics to a locally running Mimir.
endpoint {
url = "http://mimir:9009/api/v1/push"
basic_auth {
username = "example-user"
password = "example-password"
}
}
}
prometheus.operator.probes "pods" {
forward_to = [prometheus.remote_write.staging.receiver]
}
This example limits discovered Probes to those with the label team=ops in a specific namespace: my-app.
prometheus.operator.probes "pods" {
forward_to = [prometheus.remote_write.staging.receiver]
namespaces = ["my-app"]
selector {
match_expression {
key = "team"
operator = "In"
values = ["ops"]
}
}
}
This example applies additional relabel rules to discovered targets to filter by hostname. This can be useful when running Alloy as a DaemonSet.
prometheus.operator.probes "probes" {
forward_to = [prometheus.remote_write.staging.receiver]
rule {
action = "keep"
regex = sys.env("HOSTNAME")
source_labels = ["__meta_kubernetes_pod_node_name"]
}
}
Compatible components
prometheus.operator.probes can accept arguments from the following components:
- Components that export Prometheus MetricsReceiver
Note
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.