Monitor Linux servers with Grafana Alloy
The Linux operating system generates a wide range of metrics and logs that you can use to monitor the health and performance of your hardware and operating system. With Alloy, you can collect your metrics and logs, forward them to a Grafana stack, and create dashboards to monitor your Linux servers.
This scenario demonstrates how to use Alloy to monitor Linux system metrics and logs using a complete example configuration. You’ll deploy a containerized monitoring stack that includes Alloy, Prometheus, Loki, and Grafana.
The alloy-scenarios repository contains complete examples of Alloy deployments.
Clone the repository and use the examples to understand how Alloy collects, processes, and exports telemetry signals.
Before you begin
Before you begin, ensure you have the following:
- Docker and Docker Compose installed
- Git for cloning the repository
- A Linux host or Linux running in a virtual machine
- Administrator privileges to run Docker commands
- Available ports: 3000 (Grafana), 9090 (Prometheus), 3100 (Loki), and 12345 (Alloy UI)
Clone and deploy the scenario
This scenario runs Alloy in a container alongside Grafana, Prometheus, and Loki, creating a self-contained monitoring stack. The Alloy container acts as a demonstration system to show monitoring capabilities.
In a production environment, you would typically install Alloy directly on each Linux server you want to monitor.
Follow these steps to clone the repository and deploy the monitoring scenario:
1. Clone the Alloy scenarios repository:

   git clone https://github.com/grafana/alloy-scenarios.git

2. Start Docker to deploy the Grafana stack:

   cd alloy-scenarios/linux
   docker compose up -d

3. Verify the status of the Docker containers:

   docker ps

4. (Optional) Stop Docker to shut down the Grafana stack when you finish exploring this scenario:

   docker compose down
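While the stack is running, you can also confirm that each service responds over HTTP. The following commands are an optional quick check, not part of the scenario itself; the readiness paths assume the default Grafana, Prometheus, Loki, and Alloy HTTP endpoints.

# Quick health checks while the stack is running (assumes default endpoints).
curl -s -o /dev/null -w "Alloy UI:   %{http_code}\n" http://localhost:12345/-/ready
curl -s -o /dev/null -w "Grafana:    %{http_code}\n" http://localhost:3000/api/health
curl -s -o /dev/null -w "Prometheus: %{http_code}\n" http://localhost:9090/-/ready
curl -s -o /dev/null -w "Loki:       %{http_code}\n" http://localhost:3100/ready

Each command should print 200 once the corresponding service is ready.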
Monitor and visualize your data
After deploying the monitoring stack, you can use the Alloy UI to monitor deployment health and Grafana to visualize your collected data.
Monitor Alloy
To monitor the health of your Alloy deployment, open your browser and go to http://localhost:12345.
For more information about the Alloy UI, refer to Debug Grafana Alloy.
Visualize your data
To explore metrics, open your browser and go to http://localhost:3000/explore/metrics.
To use the Grafana Logs Drilldown, open your browser and go to http://localhost:3000/a/grafana-lokiexplore-app.
To create a dashboard for visualizing metrics and logs:
- Open your browser and go to http://localhost:3000/dashboards.
- Download the JSON file for the preconfigured Linux node dashboard.
- Go to Dashboards > Import.
- Upload the JSON file.
- Select the Prometheus data source and click Import.
This community dashboard provides comprehensive system metrics including CPU, memory, disk, and network usage.
Understand the Alloy configuration
This scenario uses a config.alloy file to configure Alloy components for metrics and logging.
You can find this file in the cloned repository at alloy-scenarios/linux/.
The configuration demonstrates how to collect Linux system metrics and logs, then forward them to Prometheus and Loki for storage and visualization.
Configure metrics
The metrics configuration in this scenario requires four components that work together to collect, process, and forward system metrics. The components are configured in this order to create a data pipeline:
- prometheus.exporter.unix: Collects system metrics.
- discovery.relabel: Adds standard labels to metrics.
- prometheus.scrape: Scrapes metrics from the exporter.
- prometheus.remote_write: Sends metrics to Prometheus for storage.
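The following sketch summarizes how these components reference each other in this scenario. It only shows the wiring; the full configuration for each component appears in the sections below.

// Simplified view of the metrics pipeline wiring in this scenario.
prometheus.exporter.unix "integrations_node_exporter" {
  // collector configuration shown below
}

discovery.relabel "integrations_node_exporter" {
  targets = prometheus.exporter.unix.integrations_node_exporter.targets
  // instance and job rules shown below
}

prometheus.scrape "integrations_node_exporter" {
  targets    = discovery.relabel.integrations_node_exporter.output
  forward_to = [prometheus.remote_write.local.receiver]
}

prometheus.remote_write "local" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}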
prometheus.exporter.unix
The prometheus.exporter.unix component exposes hardware and Linux kernel metrics.
This component is the primary data source that collects system performance metrics from your Linux server.
The component configuration includes several important sections:
- disable_collectors: Disables specific collectors to reduce unnecessary overhead.
- enable_collectors: Enables the meminfo collector for memory information.
- filesystem: Configures filesystem monitoring options.
- netclass and netdev: Configure network interface monitoring.
prometheus.exporter.unix "integrations_node_exporter" {
disable_collectors = ["ipvs", "btrfs", "infiniband", "xfs", "zfs"]
enable_collectors = ["meminfo"]
filesystem {
fs_types_exclude = "^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|tmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$"
mount_points_exclude = "^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/)"
mount_timeout = "5s"
}
netclass {
ignored_devices = "^(veth.*|cali.*|[a-f0-9]{15})$"
}
netdev {
device_exclude = "^(veth.*|cali.*|[a-f0-9]{15})$"
}
}This component provides the prometheus.exporter.unix.integrations_node_exporter.targets output that feeds into the discovery.relabel component.
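If you want to monitor additional subsystems, you can extend the collector lists. The following variation is not part of this scenario: it assumes the node_exporter systemd collector name and that the Alloy process can read systemd state.

// Variation (not part of this scenario): also enable the systemd collector.
prometheus.exporter.unix "integrations_node_exporter" {
  disable_collectors = ["ipvs", "btrfs", "infiniband", "xfs", "zfs"]
  enable_collectors  = ["meminfo", "systemd"]
}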
discovery.relabel for instance and job labels
The first discovery.relabel component in this configuration replaces the instance and job labels from the node_exporter with standardized values.
This ensures consistent labeling across all metrics for easier querying and dashboard creation.
In this example, this component requires the following arguments:
- targets: The targets to relabel.
- target_label: The label to write. The rules set the instance and job labels.
- replacement: The value written to the target label. The rules set instance to constants.hostname and job to integrations/node_exporter.
discovery.relabel "integrations_node_exporter" {
targets = prometheus.exporter.unix.integrations_node_exporter.targets
rule {
target_label = "instance"
replacement = constants.hostname
}
rule {
target_label = "job"
replacement = "integrations/node_exporter"
}
}This component provides the discovery.relabel.integrations_node_exporter.output target list that feeds into the prometheus.scrape component.
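The rules in this scenario only set static replacements, so they don't need source_labels. If you also want to filter which targets are scraped, you can add a regex-based rule to the same block. The following variation is illustrative only and isn't part of this scenario.

// Illustrative variation (not part of this scenario): also filter targets by address.
discovery.relabel "integrations_node_exporter" {
  targets = prometheus.exporter.unix.integrations_node_exporter.targets

  // Keep only targets whose address matches localhost; all other targets are dropped.
  rule {
    source_labels = ["__address__"]
    regex         = "localhost.*"
    action        = "keep"
  }

  // ...the instance and job rules from this scenario would follow here...
}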
discovery.relabel for systemd journal logs
This discovery.relabel component defines the relabeling rules for the systemd journal logs.
In this example, this component requires the following arguments:
- targets: The targets to relabel. No targets are modified, so the targets argument is an empty array.
- source_labels: The list of labels to select for relabeling. The rules extract the systemd unit, boot ID, transport, and log priority.
- target_label: The label written to the target. The rules set the target labels to unit, boot_id, transport, and level.
discovery.relabel "logs_integrations_integrations_node_exporter_journal_scrape" {
targets = []
rule {
source_labels = ["__journal__systemd_unit"]
target_label = "unit"
}
rule {
source_labels = ["__journal__boot_id"]
target_label = "boot_id"
}
rule {
source_labels = ["__journal__transport"]
target_label = "transport"
}
rule {
source_labels = ["__journal_priority_keyword"]
target_label = "level"
}
}This component provides the discovery.relabel.logs_integrations_integrations_node_exporter_journal_scrape.rules relabeling rules that feed into the loki.source.journal component.
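You can map additional journal fields in the same way. The following rule is not part of this scenario; the __journal__hostname source label assumes the standard mapping of the systemd _HOSTNAME field, and the nodename label name is arbitrary.

// Illustrative variation (not part of this scenario): copy the journal's hostname field into a label.
discovery.relabel "logs_integrations_integrations_node_exporter_journal_scrape" {
  targets = []

  rule {
    source_labels = ["__journal__hostname"]
    target_label  = "nodename"
  }

  // ...the unit, boot_id, transport, and level rules from this scenario would follow here...
}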
prometheus.scrape
The prometheus.scrape component scrapes node_exporter metrics and forwards them to a receiver.
This component consumes the labeled targets from the discovery.relabel.integrations_node_exporter.output.
In this example, the component requires the following arguments:
- targets: The targets to scrape metrics from. Use the labeled targets from the discovery.relabel component.
- forward_to: The destination to forward metrics to. Send the scraped metrics to the prometheus.remote_write component.
- scrape_interval: How frequently to scrape the targets.
prometheus.scrape "integrations_node_exporter" {
scrape_interval = "15s"
targets = discovery.relabel.integrations_node_exporter.output
forward_to = [prometheus.remote_write.local.receiver]
}This component provides scraped metrics that feed into the prometheus.remote_write.local.receiver for storage in Prometheus.
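Because the stack you deployed earlier is already running this configuration, you can optionally confirm that the scrape produces metrics in Prometheus. The following command queries the local Prometheus HTTP API; it assumes the default query endpoint and the job label set by the relabeling rules.

# Check that the node_exporter job is up in the local Prometheus instance.
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=up{job="integrations/node_exporter"}'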
prometheus.remote_write
The prometheus.remote_write component sends metrics to a Prometheus server.
In this example, the component requires the following argument:
- url: Defines the full URL endpoint to send metrics to.
prometheus.remote_write "local" {
endpoint {
url = "http://prometheus:9090/api/v1/write"
}
}This component provides the prometheus.remote_write.local.receiver destination that receives metrics from the prometheus.scrape component.
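Outside this scenario, you would typically point this component at a remote Prometheus-compatible endpoint and add authentication. The following sketch is illustrative only; the URL and credentials are placeholders, and basic authentication is just one of the supported mechanisms.

// Illustrative sketch (not part of this scenario): write to a remote,
// authenticated Prometheus-compatible endpoint. URL and credentials are placeholders.
prometheus.remote_write "remote" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"

    basic_auth {
      username = "PROMETHEUS_USERNAME"
      password = "PROMETHEUS_PASSWORD"
    }
  }
}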
Configure logging
The logging configuration in this scenario collects logs from both systemd journal and standard log files. This dual approach ensures comprehensive log coverage for most Linux systems. The configuration requires four main components that work together to discover, collect, and forward logs to Loki:
- loki.source.journal: Collects logs from the systemd journal.
- local.file_match: Discovers standard log files using glob patterns.
- loki.source.file: Reads logs from discovered files.
- loki.write: Sends all collected logs to Loki for storage.
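The following sketch summarizes how the logging components reference each other. The component labels are shortened here for readability; the full configuration for each component appears in the sections below.

// Simplified view of the logging pipeline wiring in this scenario (labels shortened).
loki.source.journal "journal" {
  forward_to = [loki.write.local.receiver]
}

local.file_match "files" {
  path_targets = [{
    __address__ = "localhost",
    __path__    = "/var/log/{syslog,messages,*.log}",
  }]
}

loki.source.file "files" {
  targets    = local.file_match.files.targets
  forward_to = [loki.write.local.receiver]
}

loki.write "local" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}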
loki.source.journal
The loki.source.journal component collects logs from the systemd journal and forwards them to a Loki receiver.
This component consumes the relabeling rules from discovery.relabel.logs_integrations_integrations_node_exporter_journal_scrape.rules.
In this example, the component requires the following arguments:
- max_age: Only collect logs from the last 24 hours.
- relabel_rules: Relabeling rules to apply to log entries.
- forward_to: Send logs to the local Loki instance.
loki.source.journal "logs_integrations_integrations_node_exporter_journal_scrape" {
max_age = "24h0m0s"
relabel_rules = discovery.relabel.logs_integrations_integrations_node_exporter_journal_scrape.rules
forward_to = [loki.write.local.receiver]
}This component provides systemd journal log entries that feed into the loki.write.local.receiver for storage in Loki.
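You can also attach static labels to every journal entry with the labels argument. The following variation is not part of this scenario, and the label value is only an example.

// Variation (not part of this scenario): attach a static job label to all journal entries.
loki.source.journal "logs_integrations_integrations_node_exporter_journal_scrape" {
  max_age       = "24h0m0s"
  labels        = {job = "integrations/node_exporter"}
  relabel_rules = discovery.relabel.logs_integrations_integrations_node_exporter_journal_scrape.rules
  forward_to    = [loki.write.local.receiver]
}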
local.file_match
The local.file_match component discovers files on the local filesystem using glob patterns.
In this example, the component requires the following arguments:
- path_targets: Targets to expand:
  - __address__: Targets the localhost for log collection.
  - __path__: Collect standard system logs.
  - instance: Add an instance label with the hostname.
  - job: Add a job label for logs.
local.file_match "logs_integrations_integrations_node_exporter_direct_scrape" {
path_targets = [{
__address__ = "localhost",
__path__ = "/var/log/{syslog,messages,*.log}",
instance = constants.hostname,
job = "integrations/node_exporter",
}]
}This component provides the local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets file list that feeds into the loki.source.file component.
loki.source.file
The loki.source.file component reads log entries from files and forwards them to other Loki components.
This component consumes the file targets from local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets.
In this example, the component requires the following arguments:
- targets: The list of files to read logs from.
- forward_to: The list of receivers to send log entries to.
loki.source.file "logs_integrations_integrations_node_exporter_direct_scrape" {
targets = local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets
forward_to = [loki.write.local.receiver]
}This component provides file-based log entries that feed into the loki.write.local.receiver for storage in Loki.
loki.write
The loki.write component writes logs to a Loki destination.
In this example, the component requires the following argument:
- url: Defines the full URL endpoint in Loki to send logs to.
loki.write "local" {
endpoint {
url = "http://loki:3100/loki/api/v1/push"
}
}This component provides the loki.write.local.receiver destination that receives log entries from both loki.source.journal and loki.source.file components.
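After logs start flowing, you can optionally confirm that Loki is receiving them. The following command queries the local Loki instance in this stack; it assumes the default query_range endpoint and the job label set earlier in the configuration.

# Check that logs with the node_exporter job label have arrived in Loki.
curl -sG 'http://localhost:3100/loki/api/v1/query_range' \
  --data-urlencode 'query={job="integrations/node_exporter"}' \
  --data-urlencode 'limit=5'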
Configure livedebugging
The livedebugging feature streams real-time data from your components directly to the Alloy UI.
This capability helps you troubleshoot configuration issues and monitor component behavior in real-time.
livedebugging
livedebugging is disabled by default.
Enable it explicitly through the livedebugging configuration block to make debugging data visible in the Alloy UI.
You can use an empty configuration for this block and Alloy uses the default values.
livedebugging {}

For more information about using this feature for troubleshooting, refer to the Troubleshooting documentation.
Next steps
Now that you’ve successfully deployed and configured Alloy to monitor Linux systems, you can:
- Configure Alloy to collect metrics from applications
- Set up alerting rules in Grafana
- Explore advanced Alloy component configurations
- Deploy Alloy in production environments
- Monitor multiple Linux servers with a centralized configuration
For additional examples and configurations, refer to the alloy-scenarios repository.