Grafana Alloy collector reference
Use this reference if you want to configure Grafana Alloy instances without using the Kubernetes Monitoring configuration GUI or if you want to modify Alloy instances you have deployed.
Collectors are Grafana Alloy instances deployed by the Alloy Operator as Kubernetes workloads. This information covers collector options specific to the Kubernetes Monitoring Helm chart.
When you define a collector, the Alloy Operator creates a Kubernetes workload as a DaemonSet, StatefulSet, or Deployment, with its own set of Pods running Alloy containers. The workload type is determined by the presets you assign.
General configuration
Collectors are defined as a map in the values file of the Kubernetes Monitoring Helm chart. You choose the name for each collector and apply one or more presets that describe the deployment shape:
collectors:
  metrics-collector: # You choose the name
    presets: [clustered, statefulset] # Deployment shape
    alloy: {} # Alloy container settings (resources, security context, …)
    controller: {} # Workload settings (replicas, node selectors, …)
    configReloader: {} # Config-reloader sidecar settings
  logs-collector:
    presets: [filesystem-log-reader, daemonset]
  events-collector:
    presets: [singleton]

Features are assigned to a collector using the collector field. If you define only a single collector, all features use it automatically.
The following example shows the complete pattern. It defines three collectors: a metrics collector clustered and deployed as a StatefulSet, a logs collector deployed as a DaemonSet that reads log files from each node, and a receiver deployed as a DaemonSet for incoming application telemetry. Each feature references its collector by name.
collectors:
  metrics-collector:
    presets: [clustered, statefulset] # Deploys as a StatefulSet
  logs-collector:
    presets: [filesystem-log-reader, daemonset] # Deploys as a DaemonSet, one per node
  receiver:
    presets: [daemonset] # Deploys as a DaemonSet, one per node

clusterMetrics:
  enabled: true
  collector: metrics-collector # References the collector defined above
podLogsViaLoki:
  enabled: true
  collector: logs-collector # References the collector defined above
applicationObservability:
  enabled: true
  collector: receiver # References the collector defined above

If you want to apply the same Alloy settings to every collector (for example, resource limits or environment variables), use the collectorCommon section instead of repeating them in each collector definition:
collectorCommon:
  alloy: {}

Presets
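For example, collectorCommon can set resource requests and an environment variable once for all collectors. This is a minimal sketch: the resource values and the CLUSTER_REGION variable are illustrative, and the keys under alloy follow the Grafana Alloy Helm chart.

```yaml
collectorCommon:
  alloy:
    resources:
      requests:
        cpu: 100m      # Illustrative values; size these for your workload
        memory: 128Mi
    extraEnv:
      - name: CLUSTER_REGION   # Hypothetical variable, shown for illustration
        value: us-east-1
```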
Presets define the deployment shape and capabilities of a collector. You can combine multiple presets on a single collector, and their effects stack.
Typical collector configurations
The following examples show how to configure collectors for common use cases.
Metrics collector
Use a metrics collector for scraping cluster metrics, host metrics, cost metrics, targets discovered through Pod annotations, and targets defined by Prometheus Operator ServiceMonitors and PodMonitors.
collectors:
  metrics-collector:
    presets: [clustered, statefulset]

Logs collector
Use a logs collector for gathering Pod logs and Node logs from the filesystem.
collectors:
  logs-collector:
    presets: [filesystem-log-reader, daemonset]

Events collector
Use an events collector for gathering Cluster events and other data that must run as a single instance.
collectors:
  events-collector:
    presets: [singleton]

Application receiver
Use an application receiver for receiving telemetry data from instrumented applications. It deploys one instance per node so applications can send to a local endpoint. This block defines the collector itself.
collectors:
  receiver:
    presets: [daemonset]

The following block is a separate top-level key that configures the Application Observability feature. When enabled, it exposes OTLP gRPC and HTTP ports on the receiver so instrumented applications can send traces, metrics, and logs. Both blocks go in the same values file.
applicationObservability:
  enabled: true
  collector: receiver # References the receiver collector defined above
  receivers:
    otlp:
      grpc:
        enabled: true
        port: 4317 # OTLP gRPC endpoint
      http:
        enabled: true
        port: 4318 # OTLP HTTP endpoint

Profiles collector
Use a profiles collector for gathering profiles using eBPF, Java, or pprof profilers. The privileged preset runs the container as root with host PID access, which eBPF and Java profilers require to inspect processes on the node.
collectors:
  profiles-collector:
    presets: [privileged, daemonset]

Client endpoint configuration
You can configure endpoints inside or outside the Cluster.
Inside the Cluster
Applications inside the Kubernetes Cluster use the Kubernetes DNS (kube-dns) name to reference a particular receiver endpoint. For example:
endpoint: http://grafana-k8s-monitoring-alloy[.mynamespace.cluster.local]:4318

Outside the Cluster
To expose the receiver to applications outside the Cluster (for example, Frontend Observability), you can use different approaches depending on your setup. Load balancers are created by whatever controllers are installed on your Cluster. For the full list of options, refer to the Alloy chart values.
For example, to create a Network Load Balancer on Amazon Elastic Kubernetes Service (Amazon EKS) when using the AWS Load Balancer Controller, use this example:
collectors:
  receiver:
    presets: [daemonset]
    alloy:
      service:
        type: LoadBalancer

To create an Application Load Balancer, use this example:
collectors:
  receiver:
    presets: [daemonset]
    alloy:
      ingress:
        enabled: true
        path: /
        faroPort: 12347

You can also create additional Services and Ingress objects if the Alloy Helm chart options don’t fit your needs. Consult your Kubernetes vendor documentation for details.
Istio/Service Mesh
Depending on your mesh configuration, you might need to do either of these:
- Explicitly include the Grafana monitoring namespace as a member.
- Declare the receiver as a backend of your application for traffic within the Cluster.
For traffic from outside the Cluster, it’s likely you need to set up an ingress gateway into your mesh. In any case, consult your mesh vendor for details.
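As a hedged sketch, assuming Istio and that the chart is installed in a namespace named monitoring, an Istio Sidecar resource in an application namespace can allow egress traffic to the receiver. The namespace names here are illustrative, and the exact resources you need depend on your mesh setup:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: my-app        # Illustrative application namespace
spec:
  egress:
    - hosts:
        - "./*"            # Services in the application's own namespace
        - "monitoring/*"   # Services in the (assumed) monitoring namespace
```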
Troubleshooting
Here are some troubleshooting tips related to configuring collectors.
Startup issues
Make sure your collector Pods are up and running. Use this command to list the Pods and their states, replacing <namespace> with the Kubernetes namespace where you installed the Helm chart:
kubectl get pods -n <namespace>
Meta monitoring, if enabled, sends the Alloy Pod logs to Loki, but it does not help when the logs collector itself is faulty.
To troubleshoot collector startup problems, inspect the Pod logs using the method you would for any Kubernetes workload. Use the Pod name from the NAME column of kubectl get pods output (replace <pod-name> below). For example, to watch a logs collector:
kubectl logs -f --tail 100 <pod-name> -n <namespace>
Alloy debugger
You can apply standard Alloy troubleshooting strategies to each collector Pod.
To access the Alloy UI on a collector Pod, forward the UI port to your local machine:
kubectl port-forward <pod-name> 12345:12345

Open your browser to
http://localhost:12345
Scaling
Use the following guidance to scale collectors appropriately.
DaemonSets and Singleton instances
For collectors deployed as DaemonSets (using the daemonset preset), one Pod is deployed per Node.
You cannot deploy more replicas with this type of controller.
For collectors with the singleton preset, only one Pod is deployed in the Cluster, and it must remain a single instance to avoid duplicate data.
To scale the individual Pods, increase the resource requests and limits. Refer to Estimate Grafana Alloy resource usage to learn how to tune those parameters.
For example, to increase the CPU and memory available to each Pod in a DaemonSet logs collector, set requests and limits under alloy.resources:
collectors:
logs-collector:
presets: [filesystem-log-reader, daemonset]
alloy:
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512MiStatefulSets
For StatefulSet collectors (using the statefulset preset), set the number of replicas. When combined with the clustered preset, Alloy automatically distributes scrape targets across all replicas.
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    controller:
      replicas: 3

Autoscaling
Caution
Autoscalers can cause Cluster outages when not configured properly.
Alloy does not enable autoscaling by default, but you can configure either a Horizontal Pod Autoscaler (HPA) or a Vertical Pod Autoscaler (VPA).
To enable autoscaling for a collector, add the appropriate configuration to the controller section of the collector. You can use an HPA for horizontal scaling or a VPA for vertical scaling, and different collectors can use different strategies. For an HPA, minReplicas and maxReplicas set the floor and ceiling for the replica count, and targetCPUUtilizationPercentage sets the threshold that triggers a scale-up. For a VPA, the autoscaler adjusts CPU and memory requests automatically based on observed usage, and resourcePolicy constrains the ranges the VPA can set.
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    controller:
      autoscaling:
        horizontal:
          enabled: true
          minReplicas: 2
          maxReplicas: 10
          targetCPUUtilizationPercentage: 80
  logs-collector:
    presets: [filesystem-log-reader, daemonset]
    controller:
      autoscaling:
        vertical:
          enabled: true
          resourcePolicy:
            containerPolicies:
              - containerName: alloy
                minAllowed:
                  cpu: 50m
                  memory: 64Mi
                maxAllowed:
                  cpu: '2'
                  memory: 2Gi

Values reference
Collectors are user-defined, so all keys are relative to collectors.<name>. The same schema applies to every collector. For additional keys not listed here (such as alloy and controller sub-keys), refer to the generated collector values documentation.
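As an orientation aid, the per-collector schema can be sketched roughly as follows. This is an outline, not an authoritative listing: the logging and remoteConfig key names are assumptions inferred from the section headings below, so verify exact names and defaults in the generated collector values documentation.

```yaml
collectors:
  my-collector:              # User-defined collector name
    presets: []              # Deployment shape and capabilities
    alloy: {}                # Alloy container settings
    controller: {}           # Workload settings
    configReloader: {}       # Config-reloader sidecar settings
    logging: {}              # Assumed key, matching the Logging section
    remoteConfig:            # Assumed keys, matching the Remote configuration sections
      url: ""
      auth: {}               # Remote configuration: authentication
      tls: {}                # Remote configuration: TLS
```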
General
Logging
Remote configuration
Remote configuration: authentication
Remote configuration: TLS
Additional configuration sources
Each collector has the ability to specify additional configuration sources within its definition:
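For example, an extraConfig key (an assumption based on the chart's Alloy instance values; verify the exact key name in the values reference) can append raw Alloy configuration to a collector's generated configuration:

```yaml
collectors:
  metrics-collector:
    presets: [clustered, statefulset]
    extraConfig: |-
      // Extra Alloy configuration appended to the generated pipeline.
      // prometheus.exporter.self exposes the collector's own metrics.
      prometheus.exporter.self "alloy_self" { }
```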