Onboard collectors deployed in Kubernetes to Fleet Management
Learn how to register your collectors in Kubernetes with Grafana Fleet Management. If you’d like to use Fleet Management for on-premises collectors, refer to the On premises instructions.
Grafana Kubernetes Monitoring Helm chart
From v2.0.0 of the Kubernetes Monitoring Helm chart, support for Fleet Management is built in. You can enable Fleet Management while configuring your clusters in Grafana Cloud.
In your Grafana Cloud stack, click Connections > Collector > Configure in the left-side menu.
Select Kubernetes from the platform dropdown.
Follow the instructions to identify your cluster and select features.
Enter a unique cluster name. The cluster name is used to create the `collector_id`, which identifies each Alloy instance and is generated as follows:
- For Deployments: `<release-name>-<cluster name>-<namespace>-<pod name>`
- For StatefulSets: `<release-name>-<cluster name>-<namespace>-<pod name>`
- For DaemonSets: `<release-name>-<cluster name>-<namespace>-<workload>-<node name>`

Make sure to use a unique name for each cluster to avoid `collector_id` collisions in the Fleet Management application.
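For example, with a hypothetical release named `grafana-k8s-monitoring`, a cluster named `prod-us-east`, and collectors in the `monitoring` namespace, the generated IDs look like this (pod and node names are illustrative):

```
# Deployment or StatefulSet pod alloy-metrics-0
grafana-k8s-monitoring-prod-us-east-monitoring-alloy-metrics-0

# DaemonSet alloy-logs pod scheduled on node worker-1
grafana-k8s-monitoring-prod-us-east-monitoring-alloy-logs-worker-1
```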
Generate a new token or include an existing one, which the application automatically adds to the manifest. The token should have the following scopes:
- `fleet-management:read`
- `logs:write`
- `metrics:write`
- `metrics:read`
- `traces:write`
- `profiles:write`

If you skip this step, make sure to add your access policy token wherever you see `REPLACE_WITH_ACCESS_POLICY_TOKEN` in the copied manifest. Without the token, your cluster cannot connect to Fleet Management.

Before you copy the deployment code, make sure the Enable Remote Configuration switch is turned on.
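Whether the token was added automatically or by hand, you can confirm that no placeholder remains before deploying. A minimal check, assuming you saved the copied manifest locally as `values.yaml`:

```bash
# Should print nothing once every placeholder has been replaced with a real token
grep -n "REPLACE_WITH_ACCESS_POLICY_TOKEN" values.yaml
```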
Deploy the Kubernetes Monitoring Helm chart to your cluster.
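The Grafana Cloud UI shows the exact deployment command together with the generated values. As a rough sketch, assuming the hypothetical release name `grafana-k8s-monitoring` and a `values.yaml` file saved from the UI:

```bash
# Add the Grafana Helm repository and deploy the Kubernetes Monitoring chart
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install grafana-k8s-monitoring grafana/k8s-monitoring \
  --namespace monitoring --create-namespace \
  --values values.yaml
```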
Return to your Grafana Cloud stack and click Connections > Collector > Fleet Management to view your collector inventory.
Self-monitoring configuration pipelines
When you visit the Fleet Management interface in Grafana Cloud after registering a collector, a set of self-monitoring configuration pipelines is automatically created and assigned to registered collectors.
The internal telemetry collected by the self-monitoring pipelines powers the health dashboards and logs in the collector’s details view in the Fleet Management interface.
These pipelines, whose names begin with `self_monitoring_`, rely on environment variables to authenticate requests and set `collector_id` labels that match telemetry to collectors.
- If you copy the Helm manifest from Grafana Cloud, the environment variables are set for you.
- If you opt to create your own manifest, you must set the environment variables `GCLOUD_RW_API_KEY` and `GCLOUD_FM_COLLECTOR_ID` wherever the collector is running, as shown in the sketch after this list.
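If you manage the manifest yourself, here is a minimal sketch of how these variables can be wired into a pod spec, using the downward API and a hypothetical Secret named `gc-token` (the `my-cluster` prefix is also a placeholder):

```yaml
env:
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: GCLOUD_FM_COLLECTOR_ID
    # Must uniquely identify this collector across your fleet
    value: "my-cluster-$(NAMESPACE)-$(POD_NAME)"
  - name: GCLOUD_RW_API_KEY
    valueFrom:
      secretKeyRef:
        name: gc-token   # hypothetical Secret holding the access policy token
        key: token
```

The Grafana Alloy Helm chart section below shows the same wiring expressed as Helm chart values.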
As of v2.1 of the Kubernetes Monitoring Helm chart, a `self_monitoring_logs_kubernetes` pipeline is autogenerated the first time you register a collector and visit the Fleet Management interface.

If you registered collectors prior to the release of v2.1, you must delete all `self_monitoring_*` pipelines from the Remote configuration tab in Fleet Management to generate the logs pipeline. Once all autogenerated pipelines are deleted, Fleet Management recreates them, including the logs pipeline.
Inactive collectors
The `collector_id` for singleton Deployments includes a unique pod name that changes with each restart, resulting in a new collector instance in Fleet Management. The old collectors become inactive and remain in your inventory until you delete them. DaemonSets and StatefulSets persist their `collector_id` across restarts, so inactive collectors don't accumulate.
Versions
Fleet Management is not recommended for use with versions of the Kubernetes Monitoring Helm chart earlier than v2.0. If you have previously installed v1.x of the Kubernetes Monitoring Helm chart, you might need to remove legacy cluster roles and cluster role bindings for a clean upgrade.
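A minimal cleanup sketch, assuming the v1.x release was named `grafana-k8s-monitoring` (adjust the names to match your installation):

```bash
# List cluster-scoped objects left behind by the old release
kubectl get clusterroles,clusterrolebindings | grep grafana-k8s-monitoring

# Delete any leftovers reported above, for example:
kubectl delete clusterrole grafana-k8s-monitoring-alloy
kubectl delete clusterrolebinding grafana-k8s-monitoring-alloy
```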
Grafana Alloy Helm chart
You can also use the Alloy Helm chart to onboard and configure your collectors.
Create a Secret that holds an access policy token for Fleet Management:

```bash
kubectl create secret generic gc-token --namespace <NAMESPACE> --from-literal=token=<VALUE>
```

Replace `<NAMESPACE>` with the namespace for your Alloy installation and `<VALUE>` with your access policy token.
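To confirm the Secret holds the expected value, you can decode it:

```bash
kubectl get secret gc-token --namespace <NAMESPACE> -o jsonpath='{.data.token}' | base64 -d
```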
Create a ConfigMap from a file and add the following `remotecfg` block to the configuration:

```alloy
remotecfg {
  url = "<URL>"
  id  = sys.env("GCLOUD_FM_COLLECTOR_ID")
  attributes = { "platform" = "kubernetes" }

  basic_auth {
    username = "<USERNAME>"
    password = sys.env("GCLOUD_RW_API_KEY")
  }
}
```

Replace `<URL>` with the base URL of the Fleet Management service and `<USERNAME>` with your instance ID, both of which can be found on the API tab in the Fleet Management interface.
Update your Helm chart values as follows.
```yaml
gc:
  secret:
    name: gc-token
alloy:
  configMap:
    create: false
    name: alloy-config
    key: config.alloy
  extraEnv:
    - name: NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: GCLOUD_FM_COLLECTOR_ID
      value: "clusterName-$(NAMESPACE)-$(POD_NAME)"
    - name: GCLOUD_RW_API_KEY
      valueFrom:
        secretKeyRef:
          name: gc-token
          key: token
```
In addition to specifying the Alloy configuration file, this chart also sets the following environment variables:
- `GCLOUD_FM_COLLECTOR_ID` is set to the unique name `clusterName-$(NAMESPACE)-$(POD_NAME)`, which should match the `remotecfg` `id` argument value. This variable is reset to a new value each time the pod restarts, which causes a new collector to appear in the Inventory tab. You can delete old, unused collectors from your inventory.
- `GCLOUD_RW_API_KEY` is set to the secret you created in the first step.
- `NAMESPACE` is set to the Kubernetes namespace of the running pod.
- `POD_NAME` is set to the name of the running pod.
- `HOSTNAME` is set to the name of the Kubernetes node hosting the pod.

These five variables must be set so that the self-monitoring configuration pipelines are properly assigned.
Deploy the Helm chart.
```bash
helm upgrade --install --namespace <NAMESPACE> <RELEASE_NAME> grafana/alloy -f <VALUES_PATH>
```

Replace the following:
- `<NAMESPACE>`: The namespace you used for your Alloy installation.
- `<RELEASE_NAME>`: The name you used for your Alloy installation.
- `<VALUES_PATH>`: The path to the `values.yaml` file.
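Once the release is deployed, you can verify that the collector is running before checking the Inventory tab. The label selector below is an assumption based on common chart conventions:

```bash
# Confirm the Alloy pods are up
kubectl get pods --namespace <NAMESPACE>

# Filter the collector logs for remote configuration activity
kubectl logs --namespace <NAMESPACE> -l app.kubernetes.io/name=alloy | grep -i remotecfg
```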
Next steps
- Add attributes to your collectors for greater control over which configurations are applied and when.