Grafana Cloud

Grafana Alloy collector reference

Use this reference if you want to configure Grafana Alloy instances without using the Kubernetes Monitoring configuration GUI or if you want to modify Alloy instances you have deployed.

Collectors are Grafana Alloy instances deployed by the Alloy Operator as Kubernetes workloads. This information covers collector options specific to the Kubernetes Monitoring Helm chart.

When you define a collector, the Alloy Operator creates a Kubernetes workload as a DaemonSet, StatefulSet, or Deployment, with its own set of Pods running Alloy containers. The workload type of each collector is determined by the presets you assign.

General configuration

Collectors are defined as a map in the values file of the Kubernetes Monitoring Helm chart. You choose the name for each collector and apply one or more presets that describe the deployment shape:

YAML
collectors:
  metrics-collector: # You choose the name
    presets: [clustered, statefulset] # Deployment shape
    alloy: {} # Alloy container settings (resources, security context, …)
    controller: {} # Workload settings (replicas, node selectors, …)
    configReloader: {} # Config-reloader sidecar settings
  logs-collector:
    presets: [filesystem-log-reader, daemonset]
  events-collector:
    presets: [singleton]

Features are assigned to a collector using the collector field. If you define only a single collector, all features use it automatically. The following example shows the complete pattern. It defines three collectors: a metrics collector, clustered and deployed as a StatefulSet; a logs collector, deployed as a DaemonSet that reads log files from each node; and a receiver, deployed as a DaemonSet for incoming application telemetry. Each feature references its collector by name.

YAML
collectors:
  metrics-collector:
    presets: [clustered, statefulset] # Deploys as a StatefulSet
  logs-collector:
    presets: [filesystem-log-reader, daemonset] # Deploys as a DaemonSet, one per node
  receiver:
    presets: [daemonset] # Deploys as a DaemonSet, one per node

clusterMetrics:
  enabled: true
  collector: metrics-collector # References the collector defined above

podLogsViaLoki:
  enabled: true
  collector: logs-collector # References the collector defined above

applicationObservability:
  enabled: true
  collector: receiver # References the collector defined above

If you want to apply the same Alloy settings to every collector (for example, resource limits or environment variables), use the collectorCommon section instead of repeating them in each collector definition:

YAML
collectorCommon:
  alloy: {}
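For example, here is a sketch that gives every collector the same resource requests and an extra environment variable. The resources keys match those used elsewhere in this chart; the extraEnv key follows the underlying Grafana Alloy Helm chart, and the variable shown is illustrative:

YAML
collectorCommon:
  alloy:
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
    extraEnv: # Key from the Grafana Alloy Helm chart; verify against the chart values
      - name: DEPLOY_REGION # Illustrative variable name
        value: us-east-1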

Presets

Presets define the deployment shape and capabilities of a collector. You can combine multiple presets on a single collector, and their effects stack.

Preset | What it does
------ | ------------
clustered | Enables Alloy clustering so replicas share scrape targets
statefulset | Deploys as a StatefulSet
daemonset | Deploys one instance per node
deployment | Deploys as a standard Deployment
singleton | Ensures only a single replica runs
filesystem-log-reader | Mounts the node’s /var/log directory for reading container log files
privileged | Runs the container as root with host PID access (needed for eBPF and Java profilers)

Typical collector configurations

The following examples show how to configure collectors for common use cases.

Metrics collector

Use a metrics collector for scraping cluster metrics, host metrics, cost metrics, targets discovered through Pod annotations, and targets defined by Prometheus Operator ServiceMonitors and PodMonitors.

YAML
collectors:
  metrics-collector:
    presets: [clustered, statefulset]

Logs collector

Use a logs collector for gathering Pod logs and Node logs from the filesystem.

YAML
collectors:
  logs-collector:
    presets: [filesystem-log-reader, daemonset]

Events collector

Use an events collector for gathering Cluster events and other data that must run as a single instance.

YAML
collectors:
  events-collector:
    presets: [singleton]

Application receiver

Use an application receiver for receiving telemetry data from instrumented applications. It deploys one instance per node so applications can send to a local endpoint. This block defines the collector itself.

YAML
collectors:
  receiver:
    presets: [daemonset]

The following block is a separate top-level key that configures the Application Observability feature. When enabled, it exposes OTLP gRPC and HTTP ports on the receiver so instrumented applications can send traces, metrics, and logs. Both blocks go in the same values file.

YAML
applicationObservability:
  enabled: true
  collector: receiver # References the receiver collector defined above
  receivers:
    otlp:
      grpc:
        enabled: true
        port: 4317 # OTLP gRPC endpoint
      http:
        enabled: true
        port: 4318 # OTLP HTTP endpoint

Profiles collector

Use a profiles collector for gathering profiles using eBPF, Java, or pprof profilers. The privileged preset runs the container as root with host PID access, which eBPF and Java profilers require to inspect processes on the node.

YAML
collectors:
  profiles-collector:
    presets: [privileged, daemonset]

Client endpoint configuration

You can configure client endpoints for applications inside or outside the Cluster.

Inside the Cluster

Applications inside the Kubernetes Cluster use the Kubernetes DNS name of the receiver Service to reach a particular receiver endpoint. For example:

YAML
endpoint: http://grafana-k8s-monitoring-alloy[.mynamespace.svc.cluster.local]:4318

Outside the Cluster

To expose the receiver to applications outside the Cluster (for example, Frontend Observability), you can use different approaches depending on your setup. Load balancers are created by whatever controllers are installed on your Cluster. For the full list of options, refer to the Alloy chart values.

For example, to create a Network Load Balancer on Amazon Elastic Kubernetes Service (Amazon EKS) when using the AWS Load Balancer Controller, use this example:

YAML
collectors:
  receiver:
    presets: [daemonset]
    alloy:
      service:
        type: LoadBalancer

To create an Application Load Balancer, use this example:

YAML
collectors:
  receiver:
    presets: [daemonset]
    alloy:
      ingress:
        enabled: true
        path: /
        faroPort: 12347

You can also create additional Service and Ingress objects if the Alloy Helm chart options don’t fit your use case. Consult your Kubernetes vendor documentation for details.

Istio/Service Mesh

Depending on your mesh configuration, you might need to do either of these:

  • Explicitly include the Grafana monitoring namespace as a member.
  • Declare the receiver as a backend of your application for traffic within the Cluster.

For traffic from outside the Cluster, it’s likely you need to set up an ingress gateway into your mesh. In any case, consult your mesh vendor for details.

Troubleshooting

Here are some troubleshooting tips related to configuring collectors.

Startup issues

Make sure your collector Pods are up and running. Use this command to show you a list of Pods and associated states, replacing <namespace> with the Kubernetes namespace where you installed the Helm chart:

Bash
kubectl get pods -n <namespace>

While you may have meta monitoring turned on (which exposes the Alloy Pod logs in Loki), this is not helpful when the logs collector itself is faulty.

To troubleshoot collector startup problems, inspect the Pod logs using the method you would for any Kubernetes workload. Use the Pod name from the NAME column of kubectl get pods output (replace <pod-name> below). For example, to watch a logs collector:

Bash
kubectl logs -f --tail 100 <pod-name> -n <namespace>

Alloy debugger

You can apply standard Alloy troubleshooting strategies to each collector Pod.

  1. To access the Alloy UI on a collector Pod, forward the UI port to your local machine:

    Bash
    kubectl port-forward <pod-name> 12345:12345 -n <namespace>
  2. Open your browser to http://localhost:12345

Scaling

Use the following guidance to scale each type of collector appropriately.

DaemonSets and Singleton instances

For collectors deployed as DaemonSets (using the daemonset preset), one Pod is deployed per Node. You cannot deploy more replicas with this type of controller.

For collectors with the singleton preset, only one Pod is deployed in the Cluster, and it must remain a single instance to avoid duplicate data.

To scale the individual Pods, increase the resource requests and limits. Refer to Estimate Grafana Alloy resource usage to learn how to tune those parameters.

For example, to increase the CPU and memory available to each Pod in a DaemonSet logs collector, set requests and limits under alloy.resources:

YAML
collectors:
  logs-collector:
    presets: [filesystem-log-reader, daemonset]
    alloy:
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 512Mi

StatefulSets

For StatefulSet collectors (using the statefulset preset), set the number of replicas. When combined with the clustered preset, Alloy automatically distributes scrape targets across all replicas.

YAML
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    controller:
      replicas: 3

Autoscaling

Caution

Autoscalers can cause Cluster outages when not configured properly.

The Kubernetes Monitoring Helm chart does not enable autoscaling by default, but it allows for the configuration of either a Horizontal Pod Autoscaler (HPA) or a Vertical Pod Autoscaler (VPA) on each collector.

To enable autoscaling for a collector, add the appropriate configuration to the controller section of the collector. You can use an HPA for horizontal scaling or a VPA for vertical scaling, and different collectors can use different strategies. For an HPA, minReplicas and maxReplicas set the floor and ceiling for the replica count, and targetCPUUtilizationPercentage sets the threshold that triggers a scale-up. For a VPA, the autoscaler adjusts CPU and memory requests automatically based on observed usage, and resourcePolicy constrains the ranges the VPA can set.

YAML
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    controller:
      autoscaling:
        horizontal:
          enabled: true
          minReplicas: 2
          maxReplicas: 10
          targetCPUUtilizationPercentage: 80

  logs-collector:
    presets: [filesystem-log-reader, daemonset]
    controller:
      autoscaling:
        vertical:
          enabled: true
          resourcePolicy:
            containerPolicies:
              - containerName: alloy
                minAllowed:
                  cpu: 50m
                  memory: 64Mi
                maxAllowed:
                  cpu: '2'
                  memory: 2Gi

Values reference

Collectors are user-defined, so all keys are relative to collectors.<name>. The same schema applies to every collector. For additional keys not listed here (such as alloy and controller sub-keys), refer to the generated collector values documentation.

General

Key | Type | Default | Description
--- | ---- | ------- | -----------
presets | list | [] | The list of presets that set the deployment shape and capabilities. Multiple presets can be combined.
extraConfig | string | "" | Extra Alloy configuration to be added to the configuration file.
includeDestinations | list | [] | Include configuration components for these destinations. Configuration is already added for destinations used by enabled features on this collector. Useful when referencing destinations in extraConfig.
annotations | list | [] | Annotations to add to the Alloy Custom Resource. Not added to the workload or Pod.
labels | list | [] | Labels to add to the Alloy Custom Resource. Not added to the workload or Pod.
liveDebugging.enabled | bool | false | Enable live debugging for the Alloy instance. Requires stability level to be set to “experimental”.

Logging

Key | Type | Default | Description
--- | ---- | ------- | -----------
logging.format | string | "logfmt" | Format to use for writing Alloy log lines.
logging.level | string | "info" | Level at which Alloy log lines should be written.
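For example, to switch a collector to JSON-formatted, debug-level logging:

YAML
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    logging:
      format: json
      level: debug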

Remote configuration

Key | Type | Default | Description
--- | ---- | ------- | -----------
remoteConfig.enabled | bool | false | Enable fetching configuration from a remote config server.
remoteConfig.url | string | "" | The URL of the remote config server.
remoteConfig.urlFrom | string | "" | Raw config for accessing the URL. Lets you insert raw Alloy references so you can load the URL from any number of places, such as loading values from environment variables or config maps.
remoteConfig.pollFrequency | string | "5m" | The frequency at which to poll the remote config server for updates.
remoteConfig.extraAttributes | object | {} | Attributes to be added to this collector when requesting configuration.
remoteConfig.proxyURL | string | "" | The proxy URL to use for the remote config server.
remoteConfig.proxyFromEnvironment | bool | false | Use the proxy URL indicated by environment variables.
remoteConfig.proxyConnectHeader | object | {} | Specifies headers to send to proxies during CONNECT requests.
remoteConfig.noProxy | string | "" | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying.

Remote configuration: authentication

Key | Type | Default | Description
--- | ---- | ------- | -----------
remoteConfig.auth.type | string | "none" | The type of authentication to use for the remote config server.
remoteConfig.auth.username | string | "" | The username to use for the remote config server.
remoteConfig.auth.usernameFrom | string | "" | Raw config for accessing the username.
remoteConfig.auth.usernameKey | string | "username" | The key for storing the username in the secret.
remoteConfig.auth.password | string | "" | The password to use for the remote config server.
remoteConfig.auth.passwordFrom | string | "" | Raw config for accessing the password.
remoteConfig.auth.passwordKey | string | "password" | The key for storing the password in the secret.
remoteConfig.secret.create | bool | true | Whether to create a secret for the remote config server.
remoteConfig.secret.embed | bool | false | If true, skip secret creation and embed the credentials directly into the configuration.
remoteConfig.secret.name | string | "" | The name of the secret to create.
remoteConfig.secret.namespace | string | "" | The namespace for the secret.

Remote configuration: TLS

Key | Type | Default | Description
--- | ---- | ------- | -----------
remoteConfig.tls.ca | string | "" | The CA certificate for the server (as a string).
remoteConfig.tls.caFile | string | "" | The CA certificate for the server (as a path to a file).
remoteConfig.tls.caFrom | string | "" | Raw config for accessing the server CA certificate.
remoteConfig.tls.cert | string | "" | The client certificate for the server (as a string).
remoteConfig.tls.certFile | string | "" | The client certificate for the server (as a path to a file).
remoteConfig.tls.certFrom | string | "" | Raw config for accessing the client certificate.
remoteConfig.tls.key | string | "" | The client key for the server (as a string).
remoteConfig.tls.keyFile | string | "" | The client key for the server (as a path to a file).
remoteConfig.tls.keyFrom | string | "" | Raw config for accessing the client key.
remoteConfig.tls.insecureSkipVerify | bool | false | Disables validation of the server certificate.
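Putting these keys together, here is a sketch of a collector that fetches its configuration from a remote server with basic authentication. The URL and credentials are placeholders, and the auth type basic is an assumption; check the chart values documentation for the supported types:

YAML
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    remoteConfig:
      enabled: true
      url: https://fleet-management.example.com # Placeholder URL
      pollFrequency: 1m
      auth:
        type: basic # Assumed auth type; the default is "none"
        username: my-username # Placeholder credentials
        password: my-password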

Additional configuration sources

Each collector can specify additional configuration sources within its definition:

Name | Associated values | Description
---- | ----------------- | -----------
Extra configuration | collectors.<name>.extraConfig | Additional configuration to be added to the configuration file. Use this for adding custom configuration, but do not use it to modify existing configuration.
Remote configuration | collectors.<name>.remoteConfig | Configuration for fetching remotely defined configuration. To configure, refer to Grafana Fleet Management.
Logging | collectors.<name>.logging | Configuration for logging.
Live debugging | collectors.<name>.liveDebugging | Configuration for enabling the Alloy Live Debugging feature.
Common settings | collectorCommon.alloy | Settings that apply to all collectors.
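As a sketch of combining extraConfig with includeDestinations, the following appends a custom scrape component that forwards to a destination named prometheus. The destination name, the generated component it references (prometheus.remote_write.prometheus), and the scrape target are assumptions; they must match a destination defined elsewhere in your values file:

YAML
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    includeDestinations: [prometheus] # Makes that destination's components available to extraConfig
    extraConfig: |-
      // Hypothetical scrape target; adjust to your application
      prometheus.scrape "custom_app" {
        targets    = [{ __address__ = "my-app.default.svc:8080" }]
        forward_to = [prometheus.remote_write.prometheus.receiver]
      }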