Sampling scrape targets
Applications often have many instances deployed. While Pyroscope is designed to handle large amounts of profiling data, you may want only a subset of the application’s instances to be scraped.
For example, the volume of profiling data your application generates may make it impractical to profile every instance, or you may simply want to reduce costs.
Through configuration of Grafana Alloy (preferred) or Grafana Agent (legacy) collectors, Pyroscope can sample scrape targets.
Caution
Grafana Alloy is the new name for our distribution of the OTel collector. Grafana Agent has been deprecated and is in Long-Term Support (LTS) through October 31, 2025. Grafana Agent will reach End-of-Life (EOL) on November 1, 2025. Read more about why we recommend migrating to Grafana Alloy.
Before you begin
Make sure you understand how to configure the collector to scrape targets and are familiar with the component configuration language. Alloy configuration files use the Alloy configuration syntax. Agent Flow files use the River language.
Configuration
The `hashmod` action and the `modulus` argument are used together to enable sampling behavior by sharding on one or more labels. To read further on these concepts, refer to the rule block documentation. In short, `hashmod` performs an MD5 hash on the source labels and `modulus` performs a modulus operation on the output.
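As a rough sketch of that hashing step, the following Python approximates how Prometheus-style relabeling (which Alloy's `discovery.relabel` follows) derives a shard number: the source label values are joined with a `;` separator, MD5-hashed, the digest is folded down to 64 bits, and the modulus is applied. The pod name below is a made-up example, and the exact folding is an assumption about the implementation:

```python
import hashlib

def hashmod(source_values, modulus, separator=";"):
    """Approximate the hashmod action: shard number for the given label values."""
    joined = separator.join(source_values).encode()
    digest = hashlib.md5(joined).digest()
    # Fold the 16-byte digest down to 64 bits (low 8 bytes, big-endian),
    # then take the remainder to get a shard in [0, modulus).
    return int.from_bytes(digest[8:], "big") % modulus

# A target whose pod_hash label is "my-app-7d4b9c" always lands in the same shard.
shard = hashmod(["my-app-7d4b9c"], 100)
```

Because the hash is deterministic, the same label values always map to the same shard, which is what makes the `keep` rule in the next step select a stable subset of targets.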
The sample size can be modified by changing the value of `modulus` in the `hashmod` action and the `regex` argument in the `keep` action. The `modulus` value defines the number of shards, while the `regex` value selects a subset of the shards.
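The relationship between `modulus`, the keep `regex`, and the resulting sample size can be sketched with a hypothetical helper (not part of any collector API) that counts which shard values the regex retains:

```python
import re

def sample_fraction(modulus, regex):
    """Fraction of shards (and thus, roughly, of targets) a keep-regex retains."""
    kept = sum(1 for shard in range(modulus) if re.fullmatch(regex, str(shard)))
    return kept / modulus

sample_fraction(100, r"[0-9]|1[0-4]")  # shards 0-14 out of 100 -> 0.15
sample_fraction(10, r"[0-1]")          # shards 0-1 out of 10   -> 0.2
```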
Note
Choose your source label(s) for the `hashmod` action carefully. They must uniquely define each scrape target or `hashmod` won't be able to shard the targets uniformly.
For example, consider an application deployed on Kubernetes with 100 pod replicas, all uniquely identified by the label `pod_hash`.
The following configuration is set to sample 15% of the pods:
```alloy
discovery.kubernetes "profile_pods" {
  role = "pod"
}

discovery.relabel "profile_pods" {
  targets = concat(discovery.kubernetes.profile_pods.targets)

  // Other rule blocks ...

  rule {
    action        = "hashmod"
    source_labels = ["pod_hash"]
    modulus       = 100
    target_label  = "__tmp_hashmod"
  }

  rule {
    action        = "keep"
    source_labels = ["__tmp_hashmod"]
    regex         = "^([0-9]|1[0-4])$"
  }

  // Other rule blocks ...
}
```
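To sanity-check the 15% figure, the keep regex from the example can be evaluated against every possible `__tmp_hashmod` value (a quick standalone check, not collector code):

```python
import re

# The keep-rule regex from the example configuration.
keep_regex = re.compile(r"^([0-9]|1[0-4])$")

# With modulus = 100, __tmp_hashmod takes the values "0" through "99".
kept_shards = [shard for shard in range(100) if keep_regex.match(str(shard))]
# kept_shards is [0, 1, ..., 14]: 15 of the 100 shards, so ~15% of the pods
```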
Considerations
This strategy doesn’t guarantee precise sampling. Due to its reliance on an MD5 hash, there isn’t a perfectly uniform distribution of scrape targets into shards. Larger numbers of scrape targets yield increasingly accurate sampling.
Keep in mind that if the hashed label is deterministic, sharding, and therefore sampling, of scrape targets is also deterministic. Similarly, if the hashed label is non-deterministic, scrape targets are sampled in a non-deterministic fashion.
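Both points can be illustrated with a quick simulation that hashes synthetic, made-up pod identifiers into 100 shards (using an MD5 hash folded to 64 bits, in the style of Prometheus relabeling, which is an assumption about the implementation) and keeps shards 0-14. The observed fraction hovers near 15% rather than hitting it exactly, and rerunning the simulation on the same names always selects the same pods:

```python
import hashlib
import re

def shard_of(value, modulus=100):
    # MD5 the label value and fold the digest to 64 bits, then take the modulus.
    digest = hashlib.md5(value.encode()).digest()
    return int.from_bytes(digest[8:], "big") % modulus

keep = re.compile(r"^([0-9]|1[0-4])$")

# Synthetic pod identifiers standing in for the pod_hash label.
pods = [f"pod-{i:05d}" for i in range(10000)]
sampled = [pod for pod in pods if keep.match(str(shard_of(pod)))]

fraction = len(sampled) / len(pods)  # typically close to 0.15, not exactly 0.15
```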