---
title: "Grafana Alloy collector reference | Grafana Cloud documentation"
description: "Details for configuring each Alloy collector instance"
---

# Grafana Alloy collector reference

Use this reference if you want to configure [Grafana Alloy](/docs/grafana-cloud/send-data/alloy/) instances without using the Kubernetes Monitoring configuration GUI or if you want to modify Alloy instances you have deployed.

Collectors are Grafana Alloy instances deployed by the Alloy Operator as Kubernetes workloads. This information covers collector options specific to the [Kubernetes Monitoring Helm chart](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/k8s-monitoring).

When you define a collector, Alloy Operator creates a Kubernetes workload as a DaemonSet, StatefulSet, or Deployment, with its own set of Pods running Alloy containers. Each collector uses a workload type determined by the presets you assign.

## General configuration

Collectors are defined as a map in the [values file](https://github.com/grafana/k8s-monitoring-helm/blob/main/charts/k8s-monitoring/values.yaml) of the Kubernetes Monitoring Helm chart. You choose the name for each collector and apply one or more presets that describe the deployment shape:

```yaml
collectors:
  metrics-collector: # You choose the name
    presets: [clustered, statefulset] # Deployment shape
    alloy: {} # Alloy container settings (resources, security context, …)
    controller: {} # Workload settings (replicas, node selectors, …)
    configReloader: {} # Config-reloader sidecar settings
  logs-collector:
    presets: [filesystem-log-reader, daemonset]
  events-collector:
    presets: [singleton]
```

Features are assigned to a collector using the `collector` field. If you define only a single collector, all features use it automatically. The following example shows the complete pattern. It defines three collectors: a metrics collector clustered and deployed as a StatefulSet, a logs collector deployed as a DaemonSet that reads log files from each node, and a receiver deployed as a DaemonSet for incoming application telemetry. Each feature references its collector by name.

```yaml
collectors:
  metrics-collector:
    presets: [clustered, statefulset] # Deploys as a StatefulSet
  logs-collector:
    presets: [filesystem-log-reader, daemonset] # Deploys as a DaemonSet, one per node
  receiver:
    presets: [daemonset] # Deploys as a DaemonSet, one per node

clusterMetrics:
  enabled: true
  collector: metrics-collector # References the collector defined above

podLogsViaLoki:
  enabled: true
  collector: logs-collector # References the collector defined above

applicationObservability:
  enabled: true
  collector: receiver # References the collector defined above
```

If you want to apply the same Alloy settings to every collector (for example, resource limits or environment variables), use the `collectorCommon` section instead of repeating them in each collector definition:

```yaml
collectorCommon:
  alloy: {}
```
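For example, the following sketch applies the same resource requests to every collector. The values are illustrative; tune them for your Cluster using the same `alloy.resources` schema shown in the scaling examples later in this page.

```yaml
collectorCommon:
  alloy:
    resources:
      requests:
        cpu: 100m # Applied to every collector's Alloy container
        memory: 128Mi
```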

## Presets

Presets define the deployment shape and capabilities of a collector. You can combine multiple presets on a single collector, and their effects stack.

| Preset                  | What it does                                                                         |
|-------------------------|--------------------------------------------------------------------------------------|
| `clustered`             | Enables Alloy clustering so replicas share scrape targets                            |
| `statefulset`           | Deploys as a StatefulSet                                                             |
| `daemonset`             | Deploys one instance per node                                                        |
| `deployment`            | Deploys as a standard Deployment                                                     |
| `singleton`             | Ensures only a single replica runs                                                   |
| `filesystem-log-reader` | Mounts the node’s `/var/log` directory for reading container log files               |
| `privileged`            | Runs the container as root with host PID access (needed for eBPF and Java profilers) |

## Typical collector configurations

The following examples show how to configure collectors for common use cases.

### Metrics collector

Use a metrics collector for scraping cluster metrics, host metrics, cost metrics, targets discovered through Pod annotations, and targets defined by Prometheus Operator ServiceMonitors and PodMonitors.

```yaml
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
```

### Logs collector

Use a logs collector for gathering Pod logs and Node logs from the filesystem.

```yaml
collectors:
  logs-collector:
    presets: [filesystem-log-reader, daemonset]
```

### Events collector

Use an events collector for gathering Cluster events and other data that must run as a single instance.

```yaml
collectors:
  events-collector:
    presets: [singleton]
```

### Application receiver

Use an application receiver for receiving telemetry data from instrumented applications. It deploys one instance per node so applications can send to a local endpoint. This block defines the collector itself.

```yaml
collectors:
  receiver:
    presets: [daemonset]
```

The following block is a separate top-level key that configures the [Application Observability](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/k8s-monitoring/charts/feature-application-observability) feature. When enabled, it exposes OTLP gRPC and HTTP ports on the receiver so instrumented applications can send traces, metrics, and logs. Both blocks go in the same values file.

```yaml
applicationObservability:
  enabled: true
  collector: receiver # References the receiver collector defined above
  receivers:
    otlp:
      grpc:
        enabled: true
        port: 4317 # OTLP gRPC endpoint
      http:
        enabled: true
        port: 4318 # OTLP HTTP endpoint
```

### Profiles collector

Use a profiles collector for gathering profiles using eBPF, Java, or pprof profilers. The `privileged` preset runs the container as root with host PID access, which eBPF and Java profilers require to inspect processes on the node.

```yaml
collectors:
  profiles-collector:
    presets: [privileged, daemonset]
```

### Client endpoint configuration

You can configure endpoints inside or outside the Cluster.

#### Inside the Cluster

Applications inside the Kubernetes Cluster use the [Kubernetes DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services) name to reference a particular receiver endpoint. For example:

```yaml
endpoint: http://grafana-k8s-monitoring-alloy[.mynamespace.cluster.local]:4318
```

#### Outside the Cluster

To expose the receiver to applications outside the Cluster (for example, [Frontend Observability](/docs/grafana-cloud/monitor-applications/frontend-observability/)), you can use different approaches depending on your setup. Load balancers are created by whatever controllers are installed on your Cluster. For the full list of options, refer to the [Alloy chart values](https://raw.githubusercontent.com/grafana/alloy/main/operations/helm/charts/alloy/values.yaml).

For example, to create a [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) on Amazon Elastic Kubernetes Service (Amazon EKS) when using the AWS Load Balancer Controller, use this example:

```yaml
collectors:
  receiver:
    presets: [daemonset]
    alloy:
      service:
        type: LoadBalancer
```

To create an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html), use this example:

```yaml
collectors:
  receiver:
    presets: [daemonset]
    alloy:
      ingress:
        enabled: true
        path: /
        faroPort: 12347
```

You can also create additional services and ingress objects as needed if the Alloy Helm chart options don’t fit your needs. Consult your Kubernetes vendor documentation for details.

### Istio/Service Mesh

Depending on your mesh configuration, you might need to do either of these:

- Explicitly include the Grafana monitoring namespace as a member.
- Declare the receiver as a backend of your application for traffic within the Cluster.

For traffic from outside the Cluster, it’s likely you need to set up an ingress gateway into your mesh. In any case, consult your mesh vendor for details.
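As a sketch of the first option for Istio with automatic sidecar injection, you can label the namespace where the Helm chart is installed. The namespace name here is an assumption; adjust it for your installation, and note that other meshes use different membership mechanisms.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring # Namespace where the Helm chart is installed (assumed name)
  labels:
    istio-injection: enabled # Opts the namespace into Istio sidecar injection
```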

## Troubleshooting

Here are some troubleshooting tips related to configuring collectors.

### Startup issues

Make sure your collector Pods are up and running. Use this command to show you a list of Pods and associated states, replacing `<namespace>` with the Kubernetes namespace where you installed the Helm chart:

```bash
kubectl get pods -n <namespace>
```

While you may have meta monitoring turned on (which exposes the Alloy Pod logs in Loki), this is not helpful when the logs collector itself is faulty.

To troubleshoot collector startup problems, inspect the Pod logs [using the method](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/) you would for any Kubernetes workload. Use the Pod name from the `NAME` column of `kubectl get pods` output (replace `<pod-name>` below). For example, to watch a logs collector:

```bash
kubectl logs -f --tail 100 <pod-name> -n <namespace>
```

### Alloy debugger

You can apply [standard Alloy troubleshooting strategies](/docs/grafana-cloud/send-data/alloy/troubleshoot/) to each collector Pod.

1. To access the Alloy UI on a collector Pod, forward the UI port to your local machine:
   
   ```bash
   kubectl port-forward <pod-name> 12345:12345 -n <namespace>
   ```
2. Open your browser to `http://localhost:12345`.

## Scaling

Follow these instructions for appropriate scaling.

### DaemonSets and Singleton instances

For collectors deployed as DaemonSets (using the `daemonset` preset), one Pod is deployed per Node. You cannot deploy more replicas with this type of controller.

For collectors with the `singleton` preset, only one Pod is deployed in the Cluster, and it must remain a single instance to avoid duplicate data.

To scale the individual Pods, increase the resource requests and limits. Refer to [Estimate Grafana Alloy resource usage](/docs/alloy/latest/introduction/estimate-resource-usage/) to learn how to tune those parameters.

For example, to increase the CPU and memory available to each Pod in a DaemonSet logs collector, set `requests` and `limits` under `alloy.resources`:

```yaml
collectors:
  logs-collector:
    presets: [filesystem-log-reader, daemonset]
    alloy:
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 512Mi
```

### StatefulSets

For StatefulSet collectors (using the `statefulset` preset), set the number of replicas. When combined with the `clustered` preset, Alloy automatically distributes scrape targets across all replicas.

```yaml
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    controller:
      replicas: 3
```

### Autoscaling

> Caution
> 
> Autoscalers can cause Cluster outages when not configured properly.

Autoscaling is disabled by default. You can configure either a Horizontal Pod Autoscaler (HPA) or a Vertical Pod Autoscaler (VPA) for each collector.

To enable autoscaling for a collector, add the appropriate configuration to the `controller` section of the collector. You can use an HPA for horizontal scaling or a VPA for vertical scaling, and different collectors can use different strategies. For an HPA, `minReplicas` and `maxReplicas` set the floor and ceiling for the replica count, and `targetCPUUtilizationPercentage` sets the threshold that triggers a scale-up. For a VPA, the autoscaler adjusts CPU and memory requests automatically based on observed usage, and `resourcePolicy` constrains the ranges the VPA can set.

```yaml
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    controller:
      autoscaling:
        horizontal:
          enabled: true
          minReplicas: 2
          maxReplicas: 10
          targetCPUUtilizationPercentage: 80

  logs-collector:
    presets: [filesystem-log-reader, daemonset]
    controller:
      autoscaling:
        vertical:
          enabled: true
          resourcePolicy:
            containerPolicies:
              - containerName: alloy
                minAllowed:
                  cpu: 50m
                  memory: 64Mi
                maxAllowed:
                  cpu: '2'
                  memory: 2Gi
```

## Values reference

Collectors are user-defined, so all keys are relative to `collectors.<name>`. The same schema applies to every collector. For additional keys not listed here (such as `alloy` and `controller` sub-keys), refer to the [generated collector values documentation](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/k8s-monitoring/docs/collectors).

### General

| Key                     | Type   | Default | Description                                                                                                                                                                                                 |
|-------------------------|--------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `presets`               | list   | `[]`    | The list of presets that set the deployment shape and capabilities. Multiple presets can be combined.                                                                                                       |
| `extraConfig`           | string | `""`    | Extra Alloy configuration to be added to the configuration file.                                                                                                                                            |
| `includeDestinations`   | list   | `[]`    | Include configuration components for these destinations. Configuration is already added for destinations used by enabled features on this collector. Useful when referencing destinations in `extraConfig`. |
| `annotations`           | list   | `[]`    | Annotations to add to the Alloy Custom Resource. Not added to the workload or Pod.                                                                                                                          |
| `labels`                | list   | `[]`    | Labels to add to the Alloy Custom Resource. Not added to the workload or Pod.                                                                                                                               |
| `liveDebugging.enabled` | bool   | `false` | Enable live debugging for the Alloy instance. Requires stability level to be set to “experimental”.                                                                                                         |
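As an illustration of `extraConfig` and `includeDestinations`, the following sketch appends a custom scrape job that forwards to an assumed destination named `grafana-cloud-metrics`. The scrape target and the `forward_to` component reference are placeholders; the actual component name depends on how the chart renders that destination, so check the generated configuration before relying on it.

```yaml
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    includeDestinations: [grafana-cloud-metrics] # Assumed destination name
    extraConfig: |-
      // Custom Alloy configuration appended to the generated file
      prometheus.scrape "custom_app" {
        targets    = [{ "__address__" = "my-app.default.svc:8080" }] // Hypothetical target
        forward_to = [prometheus.remote_write.grafana_cloud_metrics.receiver] // Placeholder component reference
      }
```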

### Logging

| Key              | Type   | Default    | Description                                       |
|------------------|--------|------------|---------------------------------------------------|
| `logging.format` | string | `"logfmt"` | Format to use for writing Alloy log lines.        |
| `logging.level`  | string | `"info"`   | Level at which Alloy log lines should be written. |
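For example, to raise the log verbosity on a single collector while keeping the default log format:

```yaml
collectors:
  metrics-collector:
    logging:
      level: debug # More verbose than the default "info"
      format: logfmt
```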

### Remote configuration

| Key                                 | Type   | Default | Description                                                                                                                                                                                 |
|-------------------------------------|--------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `remoteConfig.enabled`              | bool   | `false` | Enable fetching configuration from a remote config server.                                                                                                                                  |
| `remoteConfig.url`                  | string | `""`    | The URL of the remote config server.                                                                                                                                                        |
| `remoteConfig.urlFrom`              | string | `""`    | Raw config for accessing the URL. Lets you insert raw Alloy references so you can load the URL from any number of places, such as loading values from environment variables or config maps. |
| `remoteConfig.pollFrequency`        | string | `"5m"`  | The frequency at which to poll the remote config server for updates.                                                                                                                        |
| `remoteConfig.extraAttributes`      | object | `{}`    | Attributes to be added to this collector when requesting configuration.                                                                                                                     |
| `remoteConfig.proxyURL`             | string | `""`    | The proxy URL to use for the remote config server.                                                                                                                                          |
| `remoteConfig.proxyFromEnvironment` | bool   | `false` | Use the proxy URL indicated by environment variables.                                                                                                                                       |
| `remoteConfig.proxyConnectHeader`   | object | `{}`    | Specifies headers to send to proxies during CONNECT requests.                                                                                                                               |
| `remoteConfig.noProxy`              | string | `""`    | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying.                                                                                            |

### Remote configuration: authentication

| Key                              | Type   | Default      | Description                                                                              |
|----------------------------------|--------|--------------|------------------------------------------------------------------------------------------|
| `remoteConfig.auth.type`         | string | `"none"`     | The type of authentication to use for the remote config server.                          |
| `remoteConfig.auth.username`     | string | `""`         | The username to use for the remote config server.                                        |
| `remoteConfig.auth.usernameFrom` | string | `""`         | Raw config for accessing the username.                                                   |
| `remoteConfig.auth.usernameKey`  | string | `"username"` | The key for storing the username in the secret.                                          |
| `remoteConfig.auth.password`     | string | `""`         | The password to use for the remote config server.                                        |
| `remoteConfig.auth.passwordFrom` | string | `""`         | Raw config for accessing the password.                                                   |
| `remoteConfig.auth.passwordKey`  | string | `"password"` | The key for storing the password in the secret.                                          |
| `remoteConfig.secret.create`     | bool   | `true`       | Whether to create a secret for the remote config server.                                 |
| `remoteConfig.secret.embed`      | bool   | `false`      | If true, skip secret creation and embed the credentials directly into the configuration. |
| `remoteConfig.secret.name`       | string | `""`         | The name of the secret to create.                                                        |
| `remoteConfig.secret.namespace`  | string | `""`         | The namespace for the secret.                                                            |
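The following sketch combines the remote configuration and authentication keys. The URL, auth type, and credentials are placeholders; refer to [Grafana Fleet Management](/docs/grafana-cloud/send-data/fleet-management/) for the values that apply to your stack.

```yaml
collectors:
  metrics-collector:
    remoteConfig:
      enabled: true
      url: https://fleet-management.example.com # Placeholder URL
      pollFrequency: 10m
      auth:
        type: basic # Assumed auth type; the default is "none"
        username: my-username # Placeholder credentials
        password: my-token
```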

### Remote configuration: TLS

| Key                                   | Type   | Default | Description                                                  |
|---------------------------------------|--------|---------|--------------------------------------------------------------|
| `remoteConfig.tls.ca`                 | string | `""`    | The CA certificate for the server (as a string).             |
| `remoteConfig.tls.caFile`             | string | `""`    | The CA certificate for the server (as a path to a file).     |
| `remoteConfig.tls.caFrom`             | string | `""`    | Raw config for accessing the server CA certificate.          |
| `remoteConfig.tls.cert`               | string | `""`    | The client certificate for the server (as a string).         |
| `remoteConfig.tls.certFile`           | string | `""`    | The client certificate for the server (as a path to a file). |
| `remoteConfig.tls.certFrom`           | string | `""`    | Raw config for accessing the client certificate.             |
| `remoteConfig.tls.key`                | string | `""`    | The client key for the server (as a string).                 |
| `remoteConfig.tls.keyFile`            | string | `""`    | The client key for the server (as a path to a file).         |
| `remoteConfig.tls.keyFrom`            | string | `""`    | Raw config for accessing the client key.                     |
| `remoteConfig.tls.insecureSkipVerify` | bool   | `false` | Disables validation of the server certificate.               |

## Additional configuration sources

Each collector has the ability to specify additional configuration sources within its definition:

| Name                 | Associated values                 | Description                                                                                                                                                    |
|----------------------|-----------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Extra configuration  | `collectors.<name>.extraConfig`   | Additional configuration to be added to the configuration file. Use this for adding custom configuration, but do not use it to modify existing configuration.  |
| Remote configuration | `collectors.<name>.remoteConfig`  | Configuration for fetching remotely defined configuration. To configure, refer to [Grafana Fleet Management](/docs/grafana-cloud/send-data/fleet-management/). |
| Logging              | `collectors.<name>.logging`       | Configuration for [logging](/docs/grafana-cloud/send-data/alloy/reference/config-blocks/logging/).                                                             |
| Live debugging       | `collectors.<name>.liveDebugging` | Configuration for enabling the [Alloy Live Debugging feature](/docs/grafana-cloud/send-data/alloy/troubleshoot/debug/#live-debugging-page).                    |
| Common settings      | `collectorCommon.alloy`           | Settings that apply to all collectors.                                                                                                                         |
