# Get started with Alloy
This tutorial shows you how to configure Alloy to collect logs from your local machine, filter non-essential log lines, and send them to Loki, running in a local Grafana stack.
This process allows you to query and visualize the logs sent to Loki using the Grafana dashboard.
To follow this tutorial, you must have a basic understanding of Alloy and telemetry collection in general. You should also be familiar with Prometheus and PromQL, Loki and LogQL, and basic Grafana navigation. You don't need to know the Alloy configuration syntax.
## Prerequisites
This tutorial requires a Linux or macOS environment with Docker installed.
## Install Alloy and start the service
- **Linux**: Install and run Alloy on Linux.
- **macOS**: Install and run Alloy on macOS.
## Set up a local Grafana instance
To allow Alloy to write data to Loki running in the local Grafana stack, use the following Docker Compose file to set up a local Grafana instance alongside Loki and Prometheus, which are pre-configured as data sources.
```yaml
version: '3'
services:
  loki:
    image: grafana/loki:3.0.0
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
  prometheus:
    image: prom/prometheus:v2.47.0
    command:
      - --web.enable-remote-write-receiver
      - --config.file=/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  grafana:
    environment:
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    entrypoint:
      - sh
      - -euc
      - |
        mkdir -p /etc/grafana/provisioning/datasources
        cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
        apiVersion: 1
        datasources:
          - name: Loki
            type: loki
            access: proxy
            orgId: 1
            url: http://loki:3100
            basicAuth: false
            isDefault: false
            version: 1
            editable: false
          - name: Prometheus
            type: prometheus
            orgId: 1
            url: http://prometheus:9090
            basicAuth: false
            isDefault: true
            version: 1
            editable: false
        EOF
        /run.sh
    image: grafana/grafana:11.0.0
    ports:
      - "3000:3000"
```
Run the following command to start your Docker containers:

```shell
docker compose up
```

Open http://localhost:3000 in your browser to view the Grafana UI.
> **Note:** If you see the error `docker: 'compose' is not a docker command` when you start your Docker containers, use the command `docker-compose up` instead.
## Configure Alloy
Once the local Grafana instance is set up, the next step is to configure Alloy.
You use components in the `config.alloy` file to tell Alloy which logs you want to scrape, how you want to process that data, and where you want the data sent.
The examples run on a single host so that you can run them on your laptop or in a virtual machine.
You can try the examples using a `config.alloy` file and experiment with them yourself.
For the following steps, create a file called `config.alloy` in your current working directory.
If you have enabled the Alloy UI, you can "hot reload" a configuration from a file.
In a later step, you copy this file to where Alloy picks it up and reloads it without restarting the system service.
### First component: Log files
Paste this component into the top of the `config.alloy` file:
```alloy
local.file_match "local_files" {
    path_targets = [{"__path__" = "/var/log/*.log"}]
    sync_period  = "5s"
}
```
This component creates a `local.file_match` component named `local_files` with an attribute that tells Alloy which files to source, and to check for new files every 5 seconds.
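The `"/var/log/*.log"` value is an ordinary filesystem glob pattern. As a quick illustration of which files such a pattern selects, you can reproduce the matching in a shell with a throwaway directory (the `/tmp/glob-demo` path and file names below are made up for this sketch, not part of the tutorial):

```shell
# Create a scratch directory with a mix of file extensions
mkdir -p /tmp/glob-demo
touch /tmp/glob-demo/syslog.log /tmp/glob-demo/auth.log /tmp/glob-demo/kernel.txt

# Only files ending in .log match the *.log pattern; kernel.txt is ignored
ls /tmp/glob-demo/*.log
```

Alloy's path matching also supports extended glob syntax such as `**` for recursive matches, but for a simple pattern like this one it behaves like the shell glob shown here.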
### Second component: Scraping
Paste this component next in the `config.alloy` file:
```alloy
loki.source.file "log_scrape" {
    targets       = local.file_match.local_files.targets
    forward_to    = [loki.process.filter_logs.receiver]
    tail_from_end = true
}
```
This configuration creates a `loki.source.file` component named `log_scrape`, and shows the pipeline concept of Alloy in action. The `log_scrape` component does the following:

- It connects to the `local_files` component as its "source" or target.
- It forwards the logs it scrapes to the receiver of another component called `filter_logs`.
- It provides extra attributes and options to tail the log files from the end so you don't ingest the entire log file history.
### Third component: Filter non-essential logs
Filtering non-essential logs before sending them to a data source can help you manage log volumes and reduce costs. Filtering strategies differ between organizations because monitoring needs and setups vary.
The following example demonstrates filtering out or dropping logs before sending them to Loki.
Paste this component next in the `config.alloy` file:
```alloy
loki.process "filter_logs" {
    stage.drop {
        source              = ""
        expression          = ".*Connection closed by authenticating user root"
        drop_counter_reason = "noisy"
    }
    forward_to = [loki.write.grafana_loki.receiver]
}
```
`loki.process` is a component that allows you to transform, filter, parse, and enrich log data.
Within this component, you can define one or more processing stages to specify how you would like to process log entries before they're stored or forwarded.
- The `filter_logs` component receives scraped log entries from the `log_scrape` component and uses the `stage.drop` block to drop log entries based on specified criteria.
- The `source` parameter is an empty string. This tells Alloy to scrape logs from the default `log_scrape` component.
- The `expression` parameter contains the expression to drop from the logs. In this example, it's the log message `".*Connection closed by authenticating user root"`.
- You can include an optional string label `drop_counter_reason` to show the rationale for dropping log entries. You can use this label to categorize and count the drops to track and analyze the reasons for dropping logs.
- The `forward_to` parameter specifies where to send the processed logs. In this example, you send the processed logs to a component you create next called `grafana_loki`.
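To get a feel for which lines the `stage.drop` expression matches, you can try the same regular expression with `grep -E`. The two log lines below are hypothetical sshd messages invented for this sketch, not output from the tutorial environment:

```shell
# Two sample log lines; only the first matches the drop expression
printf '%s\n' \
  'sshd[802]: Connection closed by authenticating user root 203.0.113.9 port 54321 [preauth]' \
  'sshd[802]: Accepted publickey for alice from 203.0.113.10 port 50500' \
  > /tmp/drop-demo.log

# Count lines matching the same regular expression used in stage.drop
grep -cE '.*Connection closed by authenticating user root' /tmp/drop-demo.log
# prints: 1
```

One of the two lines matches, so Alloy would drop it and forward only the other line to Loki.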
Check out the `loki.process` documentation for more comprehensive information on processing logs.
### Fourth component: Write logs to Loki
Paste this component last in your configuration file:
```alloy
loki.write "grafana_loki" {
    endpoint {
        url = "http://localhost:3100/loki/api/v1/push"

        // basic_auth {
        //     username = "admin"
        //     password = "admin"
        // }
    }
}
```
This last component creates a `loki.write` component named `grafana_loki` that points to `http://localhost:3100/loki/api/v1/push`.
This completes the simple configuration pipeline.
> **Tip:** The `basic_auth` block is commented out because the local `docker compose` stack doesn't require it. It's included in this example to show how you can configure authorization for other environments. For further authorization options, refer to the `loki.write` component reference.
With this configuration, Alloy connects directly to the Loki instance running in the Docker container.
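Under the hood, `loki.write` delivers log entries to Loki's push API as JSON over HTTP. If you're curious what such a request looks like, you can hand-craft a minimal payload; the `source` label and the log line below are made-up values for illustration:

```shell
# A minimal Loki push payload: one stream with one [timestamp-in-ns, line] pair
cat > /tmp/push.json <<'EOF'
{
  "streams": [
    {
      "stream": { "source": "manual-test" },
      "values": [ [ "1700000000000000000", "hello from the push API" ] ]
    }
  ]
}
EOF

# With the local stack running, you could send it with:
#   curl -H "Content-Type: application/json" -X POST \
#     -d @/tmp/push.json http://localhost:3100/loki/api/v1/push

echo "wrote $(wc -c < /tmp/push.json) bytes"
```

In this tutorial you never need to build this payload yourself; Alloy does it for you on every write.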
## Reload the configuration
1. Copy your local `config.alloy` file into the default configuration file location.

    macOS:

    ```shell
    sudo cp config.alloy $(brew --prefix)/etc/alloy/config.alloy
    ```

    Linux:

    ```shell
    sudo cp config.alloy /etc/alloy/config.alloy
    ```

2. Call the `/-/reload` endpoint to tell Alloy to reload the configuration file without a system service restart.

    ```shell
    curl -X POST http://localhost:12345/-/reload
    ```
> **Tip:** This step uses the Alloy UI on `localhost` port `12345`. If you chose to run Alloy in a Docker container, make sure you use the `--server.http.listen-addr=0.0.0.0:12345` argument. If you don't use this argument, the debugging UI won't be available outside of the Docker container.

Optional: You can restart the Alloy system service to load the configuration file instead.

macOS:

```shell
brew services restart alloy
```

Linux:

```shell
sudo systemctl reload alloy
```
## Inspect your configuration in the Alloy UI
Open http://localhost:12345 and click the Graph tab at the top. The graph should look similar to the following:
![Your configuration in the Alloy UI](/media/docs/alloy/tutorial/Inspect-your-config-in-the-Alloy-UI-image.png)
The UI shows a visual representation of the pipeline you built with your Alloy component configuration. You can see that the components are healthy, and you're ready to go.
## Log in to Grafana and explore Loki logs
Open http://localhost:3000/explore to access the Explore feature in Grafana. Select Loki as the data source and click the Label Browser button to select a file that Alloy has sent to Loki.
Here you can see that logs are flowing through to Loki as expected, and the end-to-end configuration was successful.
![Logs reported by Alloy in Grafana](/media/docs/alloy/tutorial/loki-logs.png)
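Instead of the Label Browser, you can also type a LogQL query directly into the query field. A minimal example might look like the following, assuming Alloy attached a `filename` label to the scraped files (the label name and path here are illustrative, check the Label Browser for the labels actually present in your data):

```
{filename="/var/log/syslog.log"} |= "error"
```

This selects the log stream for that file and keeps only lines containing the string "error".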
## Conclusion
Congratulations, you have installed and configured Alloy, and sent logs from your local host to a Grafana stack. In the following tutorials, you learn more about configuration concepts and metrics.