---
title: "Streamline your workflows | Grafana Cloud documentation"
description: "Learn how to use Grafana Fleet Management to optimize for your bespoke observability setup"
---

# Streamline your workflows

Follow these tips and strategies for using Grafana Fleet Management to streamline your remote configuration workflows.

## Increase efficiency with reusable configuration pipelines

Save time and effort by reusing configuration pipelines across your fleet.

- Set environment variables for properties that vary and create a pipeline that reads those variables.
- Inject collector attributes into configuration pipelines to label telemetry.
- Use collector attributes to customize how pipelines are applied.

### Configuration properties with different values

Configuring a fleet of collectors can be cumbersome, especially when configuration contexts, such as host credentials or scrape targets, differ between deployments. Rather than creating multiple versions of the same pipeline, Fleet Management helps you reuse standard configuration pipelines across deployments with the help of environment variables. Set variables where Alloy is running, then use the Alloy configuration function [`sys.env`](/docs/grafana-cloud/send-data/alloy/reference/stdlib/sys/#sysenv) to read those variables in the pipeline.

#### Example: Scrape different targets

For example, if you want to collect metrics on all deployments but you need to scrape different targets, set a target environment variable on each host, refer to the variable in the pipeline, and then roll out the pipeline to your collectors:


```alloy
prometheus.scrape "example" {
  targets = [
    {"__address__" = sys.env("TARGET_LOCATION")},
  ]
  forward_to = [prometheus.remote_write.staging.receiver]
}
```

### Telemetry labels based on collector attributes

When you [assign attributes](/docs/grafana-cloud/send-data/fleet-management/set-up/onboard-collectors/standalone-installations/#add-remote-attributes) to a collector, you categorize it based on meaningful characteristics, such as team ownership or environment. You might also want to persist those attributes in the labels you apply to your collected data. Rather than creating a new pipeline for each set of labels you want to apply, you can reuse a single pipeline that [injects collector attributes](/docs/grafana-cloud/send-data/fleet-management/set-up/configuration-pipelines/pipeline-attribute-injection/) using the `argument.attributes.value["ATTRIBUTE_KEY"]` syntax.

#### Example: Add telemetry labels for owner and environment

For example, if you’ve assigned `owner` and `env` attributes to your collectors and want to export metrics with corresponding labels, relabel the telemetry in the pipeline and then roll it out to your collectors:


```alloy
discovery.relabel "example" {
  targets = prometheus.exporter.self.example.targets

  rule {
    source_labels = ["__address__"]
    target_label  = "owner"
    replacement   = argument.attributes.value["owner"]
  }

  rule {
    source_labels = ["__address__"]
    target_label  = "env"
    replacement   = argument.attributes.value["env"]
  }
}
```

### Customized pipelines by application workload or technology

Match configuration pipelines to your collectors based on the applications or technologies running on the host. Add attributes to each collector that correspond to the workloads or technologies it runs. If you have standard telemetry that needs to be collected from all hosts, such as server or database monitoring, create a single default configuration pipeline with a universal matching attribute that applies to every collector. If you remove an application from a host, remove the attribute from that host’s collector to disable the pipeline for that instance.

#### Example: Apply the Node.js integration to hosts running your application

Collect metrics from all hosts where your Node.js application is running.

1. Add the attribute `application=MYAPP` to each collector where the application is hosted. You can add a local attribute directly to the [`remotecfg` block](/docs/grafana-cloud/send-data/fleet-management/set-up/onboard-collectors/standalone-installations/#add-remotecfg-to-local-configurations) of the local configuration or add a remote attribute in the [Fleet Management application](/docs/grafana-cloud/send-data/fleet-management/set-up/onboard-collectors/standalone-installations/#add-remote-attributes) or with an [API request](/docs/grafana-cloud/send-data/fleet-management/api-reference/collector-api/).
2. Follow the instructions to create a new configuration pipeline using an [integration template](/docs/grafana-cloud/send-data/fleet-management/set-up/configuration-pipelines/integrations/) and select the [Node.js integration](/docs/grafana-cloud/monitor-infrastructure/integrations/integration-reference/integration-nodejs/).
3. Add the matching attribute `application=MYAPP` to the pipeline.
4. Activate the pipeline to begin collecting telemetry from your Node.js applications.
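The local-attribute option from step 1 can be sketched in the `remotecfg` block of the collector's local configuration. This is a minimal sketch: the endpoint URL, credentials, and file paths are placeholders, not values from this guide:

```alloy
remotecfg {
  // Placeholder Fleet Management endpoint and credentials
  url = "https://fleet-management-prod-001.grafana.net"
  basic_auth {
    username      = "123456"
    password_file = "/var/lib/alloy/fleet-token"
  }

  id             = constants.hostname
  poll_frequency = "60s"

  // Local attribute used to match the Node.js pipeline
  attributes = {"application" = "MYAPP"}
}
```

Because local attributes ship with the collector's own configuration, they are a good fit when the host itself knows which applications it runs; remote attributes are easier to change centrally later.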

#### Example: Apply the MySQL integration to every collector

If all of your machines are running MySQL, use a default configuration pipeline to collect metrics and logs from them.

1. Add the attribute `default=MYSQL` to every collector by selecting all collectors in the Fleet Management application and clicking the [bulk edit tool](/docs/grafana-cloud/send-data/fleet-management/set-up/onboard-collectors/standalone-installations/#add-remote-attributes) or by making a [`BulkUpdateCollectorsRequest`](/docs/grafana-cloud/send-data/fleet-management/api-reference/collector-api/#bulkupdatecollectorsrequest) to the Collector API.
2. Follow the instructions to create a new configuration pipeline using an [integration template](/docs/grafana-cloud/send-data/fleet-management/set-up/configuration-pipelines/integrations/) and select the [MySQL integration](/docs/grafana-cloud/monitor-infrastructure/integrations/integration-reference/integration-mysql/).
3. Review the configuration and set up any parameters required. For MySQL, the default configuration expects the connection string to be available in a file at `/var/lib/alloy/mysql-secret`.
4. Add the matching attribute `default=MYSQL` to the pipeline.
5. Activate the pipeline to begin collecting telemetry from your MySQL databases.
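The secret-file expectation in step 3 can be sketched with standard Alloy components. This is an illustrative fragment, not the integration template itself; the component labels are assumptions:

```alloy
// Read the MySQL connection string from the expected secret file
local.file "mysql_secret" {
  filename  = "/var/lib/alloy/mysql-secret"
  is_secret = true
}

// Use the connection string as the exporter's data source
prometheus.exporter.mysql "default" {
  data_source_name = local.file.mysql_secret.content
}
```

Marking the file as a secret prevents its contents from appearing in the Alloy UI or debug output.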

## Minimize risk by deploying configuration pipelines in stages

Configuration pipelines are assigned to collectors using matching attributes. Ensure stability in your production environments with staged releases that leverage these attributes.

In the Fleet Management interface, begin by [assigning attributes](/docs/grafana-cloud/send-data/fleet-management/set-up/onboard-collectors/standalone-installations/#add-remote-attributes) to your collectors based on their deployment characteristics, such as `env=PROD`, `test=GROUP-A`, or `deploy=BLUE`. You can also automate the assignment of remote attributes with calls to the [Collector API](/docs/grafana-cloud/send-data/fleet-management/api-reference/collector-api/).

Once your collectors are categorized, create a configuration pipeline and assign matching attributes, either using the [Fleet Management application](/docs/grafana-cloud/send-data/fleet-management/set-up/configuration-pipelines/integrations/#assign-the-configuration-pipeline) or the [Pipeline API](/docs/grafana-cloud/send-data/fleet-management/api-reference/pipeline-api/). Matching attributes are combined with an `AND` operator, so you can customize their application to your setup.

### Example: Release a new configuration pipeline

For example, you can use a gradual rollout to test a new configuration pipeline:

1. Add an `env` remote attribute with values `dev`, `staging`, `prod-eu`, or `prod-us` to each collector that should receive the pipeline.
2. While creating the new pipeline, add a matching attribute using a regular expression that matches your `dev` collectors: `env=~dev`.
3. When you’re satisfied with the pipeline’s performance, modify the matching attribute to include the staging environment: `env=~dev|staging`.
4. If the pipeline is ready for production, add `prod-eu` to the matching attribute: `env=~dev|staging|prod-eu`.
5. Confirm there are no issues and then add `prod-us` to the matching attribute so the pipeline is deployed across all your environments and production instances: `env=~dev|staging|prod-eu|prod-us`.

If the pipeline causes a problem at any point during the rollout, deactivate it with a click of the switch in the **Remote configuration** tab in the Fleet Management application.

### Example: Create a new version of a configuration pipeline

With the [Grafana Terraform provider](/docs/grafana-cloud/send-data/fleet-management/set-up/infrastructure-as-code/gitops/#terraform-and-gitops) or the [Pipeline API](/docs/grafana-cloud/send-data/fleet-management/set-up/infrastructure-as-code/gitops/#fleet-management-pipeline-api), it’s possible to integrate version-controlled configuration pipelines with Fleet Management. If GitOps is not part of your current observability setup, you can still maintain a version history when testing and rolling out new versions of existing configuration pipelines.

1. Create a copy of the current pipeline and name the copy with its version number (for example, `integration_linux_node_metrics_v1_3`).
2. Keep the original pipeline running everywhere while you deploy the new version to `dev` or `staging` environments using matching attributes.
3. If you’re satisfied with the new version, add the matching attributes to deploy it to the rest of your environments.
4. Deactivate the original pipeline by clicking the switch in the Fleet Management application or setting `enabled` to `false` in your API request.

If at any point the new version of the pipeline causes problems, you can deactivate it and reactivate the original pipeline.

> Note
> 
> Fleet Management offers a built-in audit trail for tracking changes to configuration pipelines. Using the [**Pipelines history**](/docs/grafana-cloud/send-data/fleet-management/manage-fleet/pipelines/view-pipeline-history/) feature, you can review changes to configurations and attributes, plus restore versions of existing pipelines.

### Other configuration deployment patterns

Matching attributes can be used for other types of configuration pipeline deployments:

- **A/B testing**. To conduct A/B testing of different configuration pipelines, assign remote attributes to collectors based on which version of the pipeline they should receive (for example, `test=GROUP-A` and `test=GROUP-B`) and then add the matching attribute to the corresponding pipeline. [Label the collected telemetry](#telemetry-labels-based-on-collector-attributes) by group as well and then evaluate the performance of each pipeline.
- **Canary deployments**. Implement a canary deployment by assigning a meaningful attribute to each group of collectors and then matching the new version of the pipeline to each group in succession. Once you’re satisfied with the pipeline’s performance, extend the matching attribute’s regular expression to include the next group of collectors, and so on. Refer to the [staged rollout example](#minimize-risk-by-deploying-configuration-pipelines-in-stages) for a sample regular expression.
- **Blue-green deployments**. You can also assign attributes based on blue-green environments, for example `deploy=BLUE` and `deploy=GREEN`, and then add matching attributes to control which configuration pipelines are assigned to each environment.

## Reduce costs with data on demand

High-verbosity telemetry, such as `info` and `debug` logs or continuous profiles, can become expensive if you’re collecting it all the time. But when there’s an incident, you might need these signals. With Fleet Management, you can create configuration pipelines to collect this telemetry but keep the pipelines disabled until you need them.

If you’re sampling traces from your collectors, you can also increase the sampling percentage remotely when it’s time to debug an issue.

### Example: Collect on-demand profiles to diagnose excessive resource usage

For example, you can use on-demand continuous profiles to find the cause of excessive resource usage:

1. Create a profiling configuration pipeline using the [Fleet Management application](/docs/grafana-cloud/send-data/fleet-management/set-up/configuration-pipelines/) or the [Pipeline API](/docs/grafana-cloud/send-data/fleet-management/api-reference/pipeline-api/), but leave the pipeline disabled by turning off the UI switch or setting the `enabled` key to `false` in the API call.
2. When you notice an issue with high resource consumption, enable the profiling pipeline and add matching attributes so it matches the correct collectors.
3. After finding the offending code, disable the profiling pipeline, remove the matching attributes, and leave the pipeline ready for the next time you need to troubleshoot an incident.

### Example: Automate data collection during an incident response

Consider automating higher-resolution data collection as part of your incident response:

1. Create configuration pipelines that collect the “must gather” data specified in your runbooks, and leave them disabled.
2. When an incident is declared, the process triggers an API call that enables the necessary pipelines and sets matching attributes.
3. Closing the incident triggers another API call to disable the pipelines and remove matching attributes.

### Example: Debug an issue with more tracing data

Increase your sampling percentage to get more tracing data when you need to debug an issue.

1. Create a configuration pipeline that collects traces and samples 10% of them using [probabilistic sampling](/docs/grafana-cloud/send-data/alloy/reference/components/otelcol/otelcol.processor.probabilistic_sampler/#otelcolprocessorprobabilistic_sampler).
   
   
   ```alloy
   otelcol.processor.probabilistic_sampler "default" {
     // Keep 10% of traces
     sampling_percentage = 10

     output {
       traces = [otelcol.processor.batch.default.input]
     }
   }
   ```
2. Match the pipeline to your collectors using attributes and activate it to begin receiving sampled traces.
3. If an issue occurs, return to the Fleet Management interface and edit the configuration pipeline to increase the sampling percentage to 100% so you can see all tracing data.
   
   
   ```alloy
   otelcol.processor.probabilistic_sampler "default" {
     // Keep 100% of traces
     sampling_percentage = 100

     output {
       traces = [otelcol.processor.batch.default.input]
     }
   }
   ```
4. When the issue is resolved, edit the pipeline again to return the sampling percentage to 10%.

## Maximize security with scalable Alloy components

You can enforce the principle of least privilege and reduce your attack surface by rotating credentials for your hosts with remote configuration in Fleet Management. Add the Alloy [`remote.vault`](/docs/grafana-cloud/send-data/alloy/reference/components/remote/remote.vault/) component to your configuration pipelines to retrieve and rotate secrets using the [Key/Value v2 secrets engine](https://developer.hashicorp.com/vault/docs/secrets/kv/kv-v2).
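A minimal sketch of this pattern follows, assuming a Vault server reachable from the host and a `VAULT_TOKEN` environment variable set where Alloy is running; the server URL, secret path, and key name are placeholders:

```alloy
// Fetch database credentials from Vault (KV v2)
remote.vault "db_creds" {
  server = "https://vault.example.com"
  path   = "secret/observability/mysql"

  auth.token {
    token = sys.env("VAULT_TOKEN")
  }
}

// Use the retrieved secret instead of a credential baked into the pipeline
prometheus.exporter.mysql "default" {
  data_source_name = remote.vault.db_creds.data["dsn"]
}
```

Because the pipeline references the secret indirectly, rotating the credential in Vault doesn’t require editing or redeploying the pipeline itself.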
