
Process logs with Grafana Alloy

This tutorial assumes you are familiar with setting up and connecting components. It covers using loki.source.api to receive logs over HTTP, processing and filtering them, and sending them to Loki.

Before you begin

To complete this tutorial, you need:

  • An Alloy installation.
  • A running Loki and Grafana instance: the configuration in this tutorial writes to Loki at localhost:3100, and you explore the results in Grafana at localhost:3000.

Receive and process logs over HTTP

The loki.source.api component can receive logs over HTTP. This is useful for receiving logs from other Alloy instances or collectors, or directly from applications that can send logs over HTTP, and processing them centrally.
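For example, another Alloy instance can forward its logs to this listener, because loki.source.api exposes a Loki-compatible push endpoint. The following is a minimal sketch of the sending side; it assumes the listener from this tutorial is reachable at 127.0.0.1:9999.

alloy
// Hypothetical sending instance: push logs to the loki.source.api
// listener configured later in this tutorial.
loki.write "to_listener" {
    endpoint {
        // loki.source.api serves a Loki-compatible push endpoint.
        url = "http://127.0.0.1:9999/loki/api/v1/push"
    }
}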

Set up the loki.source.api component

Your pipeline is going to look like this:

An example logs pipeline

Start by setting up the loki.source.api component:

alloy
loki.source.api "listener" {
    http {
        listen_address = "127.0.0.1"
        listen_port    = 9999
    }

    labels = { source = "api" }

    forward_to = [loki.process.process_logs.receiver]
}

This is a simple configuration: the loki.source.api component listens on 127.0.0.1:9999, attaches a source="api" label to each received log entry, and forwards the entries to the exported receiver of the loki.process.process_logs component.

Process and write logs

Configure the loki.process and loki.write components

Now that you have set up the loki.source.api component, you can configure the loki.process and loki.write components.

alloy
// Let's send and process more logs!

loki.source.api "listener" {
    http {
        listen_address = "127.0.0.1"
        listen_port    = 9999
    }

    labels = { "source" = "api" }

    forward_to = [loki.process.process_logs.receiver]
}

loki.process "process_logs" {

    // Stage 1: parse the log line as JSON, extracting the "log" field as-is
    // and the "timestamp" field as "ts" into the shared extracted map.
    stage.json {
        expressions = {
            log = "",
            ts  = "timestamp",
        }
    }

    // Stage 2: set the entry's timestamp from the extracted "ts" value,
    // parsed in RFC3339 format.
    stage.timestamp {
        source = "ts"
        format = "RFC3339"
    }

    // Stage 3: parse the extracted "log" field as JSON, extracting
    // "is_secret", "level", and "message" (stored as "log_line").
    stage.json {
        source = "log"

        expressions = {
            is_secret = "",
            level     = "",
            log_line  = "message",
        }
    }

    // Stage 4: drop any entry whose extracted "is_secret" value is "true".
    stage.drop {
        source = "is_secret"
        value  = "true"
    }

    // Stage 5: promote the extracted "level" value to a label.
    stage.labels {
        values = {
            level = "",
        }
    }

    // Stage 6: use the extracted "log_line" value as the final log line.
    stage.output {
        source = "log_line"
    }

    // This stage adds a static source="demo-api" label to the log line,
    // replacing the source="api" label set by loki.source.api
    stage.static_labels {
        values = {
            source = "demo-api",
        }
    }

    forward_to = [loki.write.local_loki.receiver]
}

loki.write "local_loki" {
    endpoint {
        url = "http://localhost:3100/loki/api/v1/push"
    }
}
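To trace an entry through these stages, consider a payload like the one you send later in this tutorial (the timestamp here is just an illustrative value):

json
{"log": {"is_secret": "false", "level": "debug", "message": "This is a debug message!"}, "timestamp": "2024-01-01T12:00:00Z"}

Stage 1 extracts the inner log object and the timestamp, Stage 2 sets the entry's timestamp from the extracted "ts" value, Stage 3 pulls "is_secret", "level", and "message" out of the inner object, Stage 4 keeps the entry because "is_secret" isn't "true", Stage 5 adds a level="debug" label, Stage 6 replaces the log line with just "This is a debug message!", and the final stage attaches the source="demo-api" label. The result is then pushed to the local Loki instance by loki.write.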

Put it all together

Now that you have all of the pieces, you can run Alloy and send some logs to it. Modify config.alloy with the configuration from the previous example and start Alloy with:

bash
<BINARY_FILE_PATH> run config.alloy

Replace the following:

  • <BINARY_FILE_PATH>: The path to the Alloy binary.

Try executing the following command, which inserts the current timestamp into the log entry:

bash
curl localhost:9999/loki/api/v1/raw -XPOST -H "Content-Type: application/json" -d '{"log": {"is_secret": "false", "level": "debug", "message": "This is a debug message!"}, "timestamp":  "'"$(date -u +"%Y-%m-%dT%H:%M:%SZ")"'"}'
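To check that the drop stage works, you can also send an entry marked as secret. With the configuration above, it should be dropped and never reach Loki:

bash
curl localhost:9999/loki/api/v1/raw -XPOST -H "Content-Type: application/json" -d '{"log": {"is_secret": "true", "level": "debug", "message": "This is a secret message!"}, "timestamp": "'"$(date -u +"%Y-%m-%dT%H:%M:%SZ")"'"}'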

Now that you have sent some logs, it's time to see how they look in Grafana. Navigate to http://localhost:3000/explore and switch the data source to Loki. Try querying for {source="demo-api"} and see if you can find the logs you sent.

Try playing around with the values of "level", "message", "timestamp", and "is_secret" and see how the logs change. You can also add more stages to the loki.process component to extract more values from the logs or attach more labels, as sketched below.
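For example, if your applications also emitted a "service" field (a hypothetical field, used here only for illustration), you could extract it and promote it to a label with two more stages:

alloy
// Hypothetical extra stages: extract a "service" field from the inner
// JSON payload and promote it to a label.
stage.json {
    source = "log"

    expressions = {
        service = "",
    }
}

stage.labels {
    values = {
        service = "",
    }
}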

Example Loki Logs

Exercise

Since you are already using Docker, and Docker exports container logs, you can send those logs to Loki. Refer to the discovery.docker and loki.source.docker documentation for more information. One possible starting point is sketched below.
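A minimal sketch of such a pipeline, assuming the Docker daemon's default socket at unix:///var/run/docker.sock and reusing the loki.process component from earlier (you will likely want different stages for your container logs):

alloy
// Discover containers from the local Docker daemon.
discovery.docker "local" {
    host = "unix:///var/run/docker.sock"
}

// Tail the logs of the discovered containers.
loki.source.docker "containers" {
    host       = "unix:///var/run/docker.sock"
    targets    = discovery.docker.local.targets
    forward_to = [loki.process.process_logs.receiver]
}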

To ensure proper timestamps and other labels, make sure you use a loki.process component to process the logs before sending them to Loki.

Although you haven't used it before, you can use a discovery.relabel component to attach the container name as a label to the logs. Refer to the discovery.relabel documentation for more information. The discovery.relabel component is very similar to the prometheus.relabel component, but it relabels discovered targets rather than metrics. A sketch of such a rule follows.
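A minimal sketch, assuming the discovery.docker component from the previous sketch; the regex strips the leading slash that Docker prepends to container names:

alloy
// Copy each discovered container's name into a "container" label.
discovery.relabel "docker_logs" {
    targets = discovery.docker.local.targets

    rule {
        source_labels = ["__meta_docker_container_name"]
        regex         = "/(.*)"
        target_label  = "container"
    }
}

To use the rewritten targets, point the targets argument of loki.source.docker at discovery.relabel.docker_logs.output instead of discovery.docker.local.targets.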