Collect Prometheus metrics
You can configure Alloy to collect Prometheus metrics and forward them to any Prometheus-compatible database.
This topic describes how to:
- Configure metrics delivery.
- Collect metrics from Kubernetes Pods.
Components used in this topic
- discovery.kubernetes
- prometheus.remote_write
- prometheus.scrape
Before you begin
- Ensure that you have basic familiarity with instrumenting applications with Prometheus.
- Have a set of Prometheus exporters or applications exposing Prometheus metrics that you want to collect metrics from.
- Identify where you will write collected metrics. Metrics can be written to Prometheus or Prometheus-compatible endpoints such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
- Be familiar with the concept of Components in Alloy.
Configure metrics delivery
Before components can collect Prometheus metrics, you must have a component responsible for writing those metrics somewhere.
The prometheus.remote_write component is responsible for delivering Prometheus metrics to one or more Prometheus-compatible endpoints.
After a prometheus.remote_write component is defined, you can use other Alloy components to forward metrics to it.
To configure a prometheus.remote_write component for metrics delivery, complete the following steps:
Add the following prometheus.remote_write component to your configuration file.
prometheus.remote_write "<LABEL>" {
  endpoint {
    url = "<PROMETHEUS_URL>"
  }
}
Replace the following:
- <LABEL>: The label for the component, such as default. The label you use must be unique across all prometheus.remote_write components in the same configuration file.
- <PROMETHEUS_URL>: The full URL of the Prometheus-compatible endpoint where metrics are sent, such as https://prometheus-us-central1.grafana.net/api/v1/write for Prometheus or https://mimir-us-central1.grafana.net/api/v1/push/ for Mimir. The endpoint URL depends on the database you use.
If your endpoint requires basic authentication, paste the following inside the endpoint block.
basic_auth {
  username = "<USERNAME>"
  password = "<PASSWORD>"
}
Replace the following:
- <USERNAME>: The basic authentication username.
- <PASSWORD>: The basic authentication password or API key.
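For example, a minimal prometheus.remote_write component that delivers metrics to a single authenticated endpoint might look like the following sketch. The metrics_service label is an arbitrary example name, and the URL and credential placeholders must be replaced with your own values.
prometheus.remote_write "metrics_service" {
  endpoint {
    url = "https://prometheus-us-central1.grafana.net/api/v1/write"

    // Placeholder credentials; replace with your own username and password or API key.
    basic_auth {
      username = "<USERNAME>"
      password = "<PASSWORD>"
    }
  }
}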
If you have more than one endpoint to write metrics to, repeat the endpoint block for additional endpoints.
The following example demonstrates configuring prometheus.remote_write with multiple endpoints, only one of which uses basic authentication, and a prometheus.scrape component that forwards metrics to it.
prometheus.remote_write "default" {
endpoint {
url = "http://localhost:9090/api/prom/push"
}
endpoint {
url = "https://prometheus-us-central1.grafana.net/api/prom/push"
// Get basic authentication based on environment variables.
basic_auth {
username = env("<REMOTE_WRITE_USERNAME>")
password = env("<REMOTE_WRITE_PASSWORD>")
}
}
}
prometheus.scrape "example" {
// Collect metrics from the default listen address.
targets = [{
__address__ = "127.0.0.1:12345",
}]
forward_to = [prometheus.remote_write.default.receiver]
}
For more information on configuring metrics delivery, refer to prometheus.remote_write.
Collect metrics from Kubernetes Pods
Alloy can be configured to collect metrics from Kubernetes Pods by:
- Discovering Kubernetes Pods to collect metrics from.
- Collecting metrics from those discovered Pods.
To collect metrics from Kubernetes Pods, complete the following steps:
Follow Configure metrics delivery to ensure collected metrics can be written somewhere.
Discover Kubernetes Pods:
Add the following discovery.kubernetes component to your configuration file to discover every Pod in the cluster across all Namespaces.
discovery.kubernetes "<DISCOVERY_LABEL>" {
  role = "pod"
}
Replace the following:
- <DISCOVERY_LABEL>: The label for the component, such as pods. The label you use must be unique across all discovery.kubernetes components in the same configuration file.
This generates one Prometheus target for every exposed port on every discovered Pod.
To limit the Namespaces that Pods are discovered in, add the following block inside the discovery.kubernetes component.
namespaces {
  own_namespace = true
  names = [<NAMESPACE_NAMES>]
}
Replace the following:
- <NAMESPACE_NAMES>: A comma-delimited list of strings representing Namespaces to search. Each string must be wrapped in double quotes. For example, "default","kube-system".
If you don't want to search for Pods in the Namespace Alloy is running in, set own_namespace to false.
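For example, a minimal sketch that only discovers Pods in the Namespace Alloy itself runs in can set own_namespace to true and omit the names list entirely. The component label own_namespace_pods is an illustrative assumption.
discovery.kubernetes "own_namespace_pods" {
  role = "pod"

  // Only search the Namespace that Alloy is running in.
  namespaces {
    own_namespace = true
  }
}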
To use a field selector to limit the number of discovered Pods, add the following block inside the discovery.kubernetes component.
selectors {
  role = "pod"
  field = "<FIELD_SELECTOR>"
}
Replace the following:
- <FIELD_SELECTOR>: The Kubernetes field selector to use, such as metadata.name=my-service. For more information on field selectors, refer to the Kubernetes documentation on Field Selectors.
Create additional selectors blocks for each field selector you want to apply.
To use a label selector to limit the number of discovered Pods, add the following block inside the discovery.kubernetes component.
selectors {
  role = "pod"
  label = "<LABEL_SELECTOR>"
}
Replace the following:
- <LABEL_SELECTOR>: The Kubernetes label selector, such as environment in (production, qa). For more information on label selectors, refer to the Kubernetes documentation on Labels and Selectors.
Create additional selectors blocks for each label selector you want to apply.
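As a sketch, the following discovery.kubernetes component uses a field selector to discover only Pods that are currently running, which complements the label selector shown in the full example later in this section. The component label running_pods and the status.phase=Running selector are illustrative assumptions.
discovery.kubernetes "running_pods" {
  role = "pod"

  // Only discover Pods that are currently running.
  selectors {
    role = "pod"
    field = "status.phase=Running"
  }
}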
Collect metrics from discovered Pods:
Add the following prometheus.scrape component to your configuration file.
prometheus.scrape "<SCRAPE_LABEL>" {
  targets = discovery.kubernetes.<DISCOVERY_LABEL>.targets
  forward_to = [prometheus.remote_write.<REMOTE_WRITE_LABEL>.receiver]
}
Replace the following:
- <SCRAPE_LABEL>: The label for the component, such as pods. The label you use must be unique across all prometheus.scrape components in the same configuration file.
- <DISCOVERY_LABEL>: The label for the discovery.kubernetes component.
- <REMOTE_WRITE_LABEL>: The label for your existing prometheus.remote_write component.
The following example demonstrates configuring Alloy to collect metrics from running production Kubernetes Pods in the default Namespace.
discovery.kubernetes "pods" {
role = "pod"
namespaces {
own_namespace = false
names = ["default"]
}
selectors {
role = "pod"
label = "environment in (production)"
}
}
prometheus.scrape "pods" {
targets = discovery.kubernetes.pods.targets
forward_to = [prometheus.remote_write.default.receiver]
}
prometheus.remote_write "default" {
endpoint {
url = "http://localhost:9090/api/prom/push"
}
}
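If you need to collect metrics more or less often than the default, prometheus.scrape also accepts optional tuning arguments. The following sketch assumes a 30-second scrape interval and reuses the pods and default labels from the example above; refer to the prometheus.scrape reference for the full list of supported arguments.
prometheus.scrape "pods" {
  targets = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]

  // Collect metrics every 30 seconds instead of the default interval.
  scrape_interval = "30s"
}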
For more information on configuring Kubernetes service discovery and collecting metrics, refer to discovery.kubernetes and prometheus.scrape.
Collect metrics from Kubernetes Services
You can configure Alloy to collect metrics from Kubernetes Services by:
- Discovering Kubernetes Services to collect metrics from.
- Collecting metrics from those discovered Services.
To collect metrics from Kubernetes Services, complete the following steps.
Follow Configure metrics delivery to ensure collected metrics can be written somewhere.
Discover Kubernetes Services:
Add the following discovery.kubernetes component to your configuration file to discover every Service in the cluster across all Namespaces.
discovery.kubernetes "<DISCOVERY_LABEL>" {
  role = "service"
}
Replace the following:
- <DISCOVERY_LABEL>: A label for the component, such as services. The label you use must be unique across all discovery.kubernetes components in the same configuration file.
This generates one Prometheus target for every exposed port on every discovered Service.
To limit the Namespaces that Services are discovered in, add the following block inside the discovery.kubernetes component.
namespaces {
  own_namespace = true
  names = [<NAMESPACE_NAMES>]
}
Replace the following:
- <NAMESPACE_NAMES>: A comma-delimited list of strings representing Namespaces to search. Each string must be wrapped in double quotes. For example, "default","kube-system".
If you don't want to search for Services in the Namespace Alloy is running in, set own_namespace to false.
To use a field selector to limit the number of discovered Services, add the following block inside the discovery.kubernetes component.
selectors {
  role = "service"
  field = "<FIELD_SELECTOR>"
}
Replace the following:
- <FIELD_SELECTOR>: The Kubernetes field selector, such as metadata.name=my-service. For more information on field selectors, refer to the Kubernetes documentation on Field Selectors.
Create additional selectors blocks for each field selector you want to apply.
To use a label selector to limit the number of discovered Services, add the following block inside the discovery.kubernetes component.
selectors {
  role = "service"
  label = "<LABEL_SELECTOR>"
}
Replace the following:
- <LABEL_SELECTOR>: The Kubernetes label selector, such as environment in (production, qa). For more information on label selectors, refer to the Kubernetes documentation on Labels and Selectors.
Create additional selectors blocks for each label selector you want to apply.
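For example, the following sketch discovers only the Service named my-service by using a field selector. The component label service_by_name and the selected Service name are illustrative assumptions.
discovery.kubernetes "service_by_name" {
  role = "service"

  // Only discover the Service named my-service.
  selectors {
    role = "service"
    field = "metadata.name=my-service"
  }
}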
Collect metrics from discovered Services:
Add the following prometheus.scrape component to your configuration file.
prometheus.scrape "<SCRAPE_LABEL>" {
  targets = discovery.kubernetes.<DISCOVERY_LABEL>.targets
  forward_to = [prometheus.remote_write.<REMOTE_WRITE_LABEL>.receiver]
}
Replace the following:
- <SCRAPE_LABEL>: The label for the component, such as services. The label you use must be unique across all prometheus.scrape components in the same configuration file.
- <DISCOVERY_LABEL>: The label for the discovery.kubernetes component.
- <REMOTE_WRITE_LABEL>: The label for your existing prometheus.remote_write component.
The following example demonstrates configuring Alloy to collect metrics from running production Kubernetes Services in the default Namespace.
discovery.kubernetes "services" {
role = "service"
namespaces {
own_namespace = false
names = ["default"]
}
selectors {
role = "service"
label = "environment in (production)"
}
}
prometheus.scrape "services" {
targets = discovery.kubernetes.services.targets
forward_to = [prometheus.remote_write.default.receiver]
}
prometheus.remote_write "default" {
endpoint {
url = "http://localhost:9090/api/prom/push"
}
}
For more information on configuring Kubernetes service discovery and collecting metrics, refer to discovery.kubernetes and prometheus.scrape.
Collect metrics from custom targets
You can configure Alloy to collect metrics from a custom set of targets without the need for service discovery.
To collect metrics from a custom set of targets, complete the following steps.
Follow Configure metrics delivery to ensure collected metrics can be written somewhere.
Add the following prometheus.scrape component to your configuration file:
prometheus.scrape "<SCRAPE_LABEL>" {
  targets = [<TARGET_LIST>]
  forward_to = [prometheus.remote_write.<REMOTE_WRITE_LABEL>.receiver]
}
Replace the following:
- <SCRAPE_LABEL>: The label for the component, such as custom_targets. The label you use must be unique across all prometheus.scrape components in the same configuration file.
- <TARGET_LIST>: A comma-delimited list of Objects, each denoting a Prometheus target. Each object must conform to the following rules:
  - There must be an __address__ key denoting the HOST:PORT of the target to collect metrics from.
  - To explicitly specify which protocol to use, set the __scheme__ key to "http" or "https". If the __scheme__ key isn't provided, the protocol to use is inherited from the settings of the prometheus.scrape component. The default is "http".
  - To explicitly specify which HTTP path to collect metrics from, set the __metrics_path__ key to the HTTP path to use. If the __metrics_path__ key isn't provided, the path to use is inherited from the settings of the prometheus.scrape component. The default is "/metrics".
  - Add additional keys as desired to inject extra labels into collected metrics. Any label starting with two underscores (__) will be dropped prior to scraping.
- <REMOTE_WRITE_LABEL>: The label for your existing prometheus.remote_write component.
The following example demonstrates configuring prometheus.scrape to collect metrics from a custom set of endpoints.
prometheus.scrape "custom_targets" {
targets = [
{
__address__ = "prometheus:9090",
},
{
__address__ = "mimir:8080",
__scheme__ = "https",
},
{
__address__ = "custom-application:80",
__metrics_path__ = "/custom-metrics–path",
},
{
__address__ = "alloy:12345",
application = "alloy",
environment = "production",
},
]
forward_to = [prometheus.remote_write.default.receiver]
}
prometheus.remote_write "default" {
endpoint {
url = "http://localhost:9090/api/prom/push"
}
}