Scrape and forward application metrics
You can scrape and forward metrics from an application running on your Kubernetes cluster that exports metrics. To do so, extend the configuration in the Grafana Kubernetes Helm chart.
When you are adding a new configuration, it’s helpful to think of it in the following phases:
- Discovery: How should the collector find my service?
- Scraping: How should metrics get scraped from my service?
- Processing: Is there any work that needs to be done to these metrics?
- Delivery: Where should these metrics be sent?
Discovery
In the discovery phase, you must find the specific Pod or service you want to scrape for metrics. The Grafana Kubernetes Helm chart automatically creates three components that you can use:
- discovery.kubernetes.nodes: Discovers all nodes in the cluster.
- discovery.kubernetes.pods: Discovers all Pods in the cluster.
- discovery.kubernetes.services: Discovers all services in the cluster.
These are all discovery.kubernetes components, which use the Kubernetes API to gather all resources of the specified kind. You then refine the search to only the service or Pod that you want.
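For reference, each of these is a standard discovery.kubernetes component with its role set to the matching resource kind. You don't need to declare them yourself; as a rough sketch, the services component is likely equivalent to something like the following:

discovery.kubernetes "services" {
  role = "service" // Ask the Kubernetes API for every service in the cluster
}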
Service discovery
Since you don’t want to scrape every service in your cluster, you use rules to select your specific service based on its name, namespace, labels, port names, port numbers, and many other variables. To do this:
- Use a discovery.relabel component.
- Add one or more rules by using the special meta-labels that are set automatically by the discovery.kubernetes component.
The following is an example component named “blue_database_service”. This component takes the list of all services from discovery.kubernetes.services, and filters it to a service named “database” in the namespace “blue” with the port named “metrics”:
discovery.relabel "blue_database_service" {
  targets = discovery.kubernetes.services.targets // Gets all services
  rule { // Keep all services named "database"...
    source_labels = ["__meta_kubernetes_service_name"]
    regex         = "database"
    action        = "keep"
  }
  rule { // ... that exist in the "blue" namespace...
    source_labels = ["__meta_kubernetes_namespace"]
    regex         = "blue"
    action        = "keep"
  }
  rule { // ... and only scrape its port named "metrics".
    source_labels = ["__meta_kubernetes_service_port_name"]
    regex         = "metrics"
    action        = "keep"
  }
}
The discovery.kubernetes documentation lists the meta labels for services.
Note: There are different labels for port name and port number. Make sure you use the correct label for a named port, or simply use the port number.
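For example, if the service's metrics port is unnamed, you could keep targets by port number instead. A minimal sketch, assuming the metrics port is 9090 (substitute your own number):

rule { // Keep only targets whose service port number is 9090
  source_labels = ["__meta_kubernetes_service_port_number"]
  regex         = "9090"
  action        = "keep"
}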
This is also a good point in the process to add any extra labels you want attached to the scraped metrics. For example, if you wanted to set the label team="blue", you might add this additional rule to the blue_database_service component that was just made:
rule {
  target_label = "team"
  action       = "replace"
  replacement  = "blue"
}
Pod discovery
Similar to service discovery, you can use a discovery.relabel component to select the specific Pod or Pods that you want to scrape. The meta labels for Pods are slightly different, but the concept is the same.
The following example filters to a specific set of Pods whose names begin with “analysis” and that have the label “system.component=image”:
discovery.relabel "image_analysis_pods" {
  targets = discovery.kubernetes.pods.targets // Gets all pods
  rule { // Keep all pods named "analysis.*"...
    source_labels = ["__meta_kubernetes_pod_name"]
    regex         = "analysis.*"
    action        = "keep"
  }
  rule { // ... with the label system.component=image
    source_labels = ["__meta_kubernetes_pod_label_system_component"]
    regex         = "image"
    action        = "keep"
  }
}
Note: There is a unique meta label for every Kubernetes label. The labels are prefixed with __meta_kubernetes_pod_label_, and the label name is normalized so all non-alphanumeric characters become underscores (_).
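For example, a Kubernetes label such as app.kubernetes.io/name is exposed as the meta label __meta_kubernetes_pod_label_app_kubernetes_io_name. A rule matching on it could look like the following (the value my-app is only an illustration):

rule { // Keep pods whose app.kubernetes.io/name label equals "my-app"
  source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
  regex         = "my-app"
  action        = "keep"
}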
Scraping
To scrape the specific Pod or service you want for metrics, use the prometheus.scrape component. You only need to declare which items to scrape and where to send the scraped metrics, as in the following example:
prometheus.scrape "processing_app" {
  targets    = discovery.relabel.image_analysis_pods.output
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
(The forward_to field is covered in the Delivery section that follows the Processing section.)
The prometheus.scrape component offers many options to modify how to scrape the data, including:
- Setting the job label
- How frequently to scrape the metrics
- The path to scrape
The following is an example showing several of these options:
prometheus.scrape "processing_app" {
  targets         = discovery.relabel.image_analysis_pods.output
  job_name        = "integrations/processing"
  scrape_interval = "120s"
  metrics_path    = "/api/v1/metrics"
  forward_to      = [prometheus.remote_write.metrics_service.receiver]
}
Processing
You may often want to process the metrics after scraping. Some common reasons are:
- To limit the number of metrics being sent to Prometheus
- To add, change, or drop labels
For processing, use the prometheus.relabel component. This component uses the same type of rules as discovery.relabel. However, instead of filtering scrape targets, it filters the metrics that were scraped.
The following processing example keeps only the following metrics and drops everything else:
- up
- Anything that starts with processor
prometheus.scrape "processing_app" {
  targets    = discovery.relabel.image_analysis_pods.output
  forward_to = [prometheus.relabel.processing_app.receiver]
}

prometheus.relabel "processing_app" {
  rule {
    source_labels = ["__name__"]
    regex         = "up|processor.*"
    action        = "keep"
  }
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
Note: You must adjust the prometheus.scrape component to forward to this component.
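The same component can also add, change, or drop individual labels rather than whole metric series. For example, a rule like the following, added to the prometheus.relabel "processing_app" component above, would remove a hypothetical high-cardinality label named request_id before delivery:

rule { // Remove the (hypothetical) request_id label from every scraped metric
  regex  = "request_id"
  action = "labeldrop"
}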
Delivery
The prometheus.scrape and prometheus.relabel components must send their outputs to another component, which is the purpose of their forward_to field. Forwarding can be to another prometheus.relabel component. Eventually, the final step is to send the metrics to a Prometheus server for storage, where they can be either:
- Further processed by recording rules
- Queried and displayed by Grafana
Use the prometheus.remote_write component to send the metrics to Prometheus.
The Grafana Kubernetes Helm chart automatically creates the component prometheus.remote_write.metrics_service, configured by the .externalServices.prometheus values. You can use this component to send your metrics to the same destination as the infrastructure metrics.
If you want to use a different destination, create a new prometheus.remote_write component.
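A minimal sketch of such a component, assuming a hypothetical endpoint URL and basic-auth credentials (replace these with your own destination):

prometheus.remote_write "custom_destination" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write" // hypothetical remote write endpoint
    basic_auth {
      username = "metrics-user"     // hypothetical credentials
      password = "metrics-password"
    }
  }
}

Any component that should deliver to this destination then lists prometheus.remote_write.custom_destination.receiver in its forward_to field.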
Include configuration in Helm chart
To include your configuration in the Grafana Kubernetes Helm chart, save it into a file and pass it directly to the helm install command:
$ ls
processor-config.river chart-values.yaml
$ cat processor-config.river
discovery.relabel "image_analysis_pods" {
  targets = discovery.kubernetes.pods.targets // Gets all pods
  rule { // Keep all pods named "analysis.*"...
    source_labels = ["__meta_kubernetes_pod_name"]
    regex         = "analysis.*"
    action        = "keep"
  }
  rule { // ... with the label system.component=image
    source_labels = ["__meta_kubernetes_pod_label_system_component"]
    regex         = "image"
    action        = "keep"
  }
}

prometheus.scrape "processing_app" {
  targets    = discovery.relabel.image_analysis_pods.output
  forward_to = [prometheus.relabel.processing_app.receiver]
}

prometheus.relabel "processing_app" {
  rule {
    source_labels = ["__name__"]
    regex         = "up|processor.*"
    action        = "keep"
  }
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
$ helm install k8s-monitoring grafana/k8s-monitoring --values chart-values.yaml --set-file extraConfig=processor-config.river