Collect Kubernetes logs and forward them to Loki

You can configure Alloy to collect logs and forward them to a Loki database.

To collect Kubernetes logs, you:

  1. Configure a loki.write component to deliver logs.
  2. Set up collection components for system logs, Pod logs, or Kubernetes Cluster Events.

Components used

This topic uses the following components:

  • loki.write
  • loki.source.file
  • local.file_match
  • discovery.kubernetes
  • discovery.relabel
  • loki.source.kubernetes
  • loki.process
  • loki.source.kubernetes_events

Before you begin

  • Ensure that you’re familiar with log labeling when working with Loki.
  • Identify where you want to write collected logs. You can write logs to Loki endpoints such as Grafana Loki, Grafana Cloud, or Grafana Enterprise Logs.
  • Ensure that you’re familiar with the concept of Components in Alloy.

Configure logs delivery

Before components can collect logs, you must have a component responsible for writing those logs somewhere.

The loki.write component delivers logs to a Loki endpoint. After you define a loki.write component, other Alloy components can forward logs to it.

To configure a loki.write component for logs delivery, complete the following steps:

  1. Add the following loki.write component to your configuration file.

    Alloy
    loki.write "<LABEL>" {
      endpoint {
        url = "<LOKI_URL>"
      }
    }

    Replace the following:

    • <LABEL>: The label for the component, such as default. The label you use must be unique across all loki.write components in the same configuration file.
    • <LOKI_URL>: The full URL of the Loki endpoint where you send logs, such as https://logs-us-central1.grafana.net/loki/api/v1/push.
  2. If your endpoint requires basic authentication, paste the following inside the endpoint block.

    Alloy
    basic_auth {
      username = "<USERNAME>"
      password = "<PASSWORD>"
    }

    Replace the following:

    • <USERNAME>: The basic authentication username.
    • <PASSWORD>: The basic authentication password or API key.
  3. If you have more than one endpoint to write logs to, repeat the endpoint block for additional endpoints.

The following example configures loki.write with two endpoints, one of which uses basic authentication, and pairs it with a loki.source.file component. The loki.source.file component collects logs from the file system on the Alloy container.

Alloy
loki.write "default" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push"
  }

  endpoint {
    url = "https://logs-us-central1.grafana.net/loki/api/v1/push"

    // Authenticate to this endpoint with basic authentication.
    basic_auth {
      username = "<USERNAME>"
      password = "<PASSWORD>"
    }
  }
}

loki.source.file "example" {
  // Collect logs from the specified local files.
  targets = [
    {__path__ = "/tmp/foo.txt", "color" = "pink"},
    {__path__ = "/tmp/bar.txt", "color" = "blue"},
    {__path__ = "/tmp/baz.txt", "color" = "grey"},
  ]

  forward_to = [loki.write.default.receiver]
}

Replace the following:

  • <USERNAME>: The remote write username.
  • <PASSWORD>: The remote write password.
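
If you prefer not to hard-code credentials in the configuration file, you can read them from environment variables with the sys.env function from the Alloy standard library. The following is a minimal sketch; the variable names LOKI_USERNAME and LOKI_PASSWORD are illustrative and assume those variables are set in the Alloy container's environment.

Alloy
loki.write "default" {
  endpoint {
    url = "https://logs-us-central1.grafana.net/loki/api/v1/push"

    basic_auth {
      // LOKI_USERNAME and LOKI_PASSWORD are assumed environment
      // variable names; substitute the names you actually set.
      username = sys.env("LOKI_USERNAME")
      password = sys.env("LOKI_PASSWORD")
    }
  }
}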

For more information on configuring logs delivery, refer to loki.write.

Collect logs from Kubernetes

You can configure Alloy to collect all kinds of logs from Kubernetes:

  1. System logs
  2. Pod logs
  3. Kubernetes Events

Because Alloy uses a component architecture, you can complete one or all of the following sections. In every case, complete the Configure logs delivery section first, then proceed to the log types you want to collect.

System logs

To collect system logs, use the following components:

  • local.file_match
  • loki.source.file
  • loki.write

The following example collects system logs:

Alloy
// local.file_match discovers files on the local filesystem using glob patterns and the doublestar library. It returns an array of file paths.
local.file_match "node_logs" {
  path_targets = [{
      // Monitor syslog to collect node logs.
      __path__  = "/var/log/syslog",
      job       = "node/syslog",
      node_name = sys.env("HOSTNAME"),
      cluster   = "<CLUSTER_NAME>",
  }]
}

// loki.source.file reads log entries from files and forwards them to other loki.* components.
// You can specify multiple loki.source.file components by giving them different labels.
loki.source.file "node_logs" {
  targets    = local.file_match.node_logs.targets
  forward_to = [loki.write.<WRITE_COMPONENT_NAME>.receiver]
}

Replace the following values:

  • <CLUSTER_NAME>: The label for this specific Kubernetes cluster, such as production or us-east-1.
  • <WRITE_COMPONENT_NAME>: The name of your loki.write component, such as default.

Pod logs

Tip

You can also get Pod logs through the log files on each node, but that approach requires system privileges.
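
A minimal sketch of that file-based approach follows. It assumes Alloy runs as a DaemonSet with the node's /var/log/pods directory mounted into the Alloy container; the component labels are illustrative. The Kubernetes API approach shown later in this section is usually preferable because it doesn't need this host mount.

Alloy
// Discover the per-Pod log files that the kubelet writes on this node.
// Assumes /var/log/pods is mounted into the Alloy container.
local.file_match "pod_log_files" {
  path_targets = [{
    __path__ = "/var/log/pods/*/*/*.log",
  }]
}

// Tail the discovered files and forward the entries to loki.write.
loki.source.file "pod_log_files" {
  targets    = local.file_match.pod_log_files.targets
  forward_to = [loki.write.default.receiver]
}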

Note

When you deploy Alloy as a DaemonSet, ensure that you configure discovery to only collect logs from the same node.

Use the following components to collect Pod logs:

  • discovery.kubernetes
  • discovery.relabel
  • loki.source.kubernetes
  • loki.process
  • loki.write

The following example uses the Kubernetes API to collect logs, which doesn’t require system privileges:

Alloy
// discovery.kubernetes allows you to find scrape targets from Kubernetes resources.
// It watches cluster state and ensures targets are continually synced with what is currently running in your cluster.
discovery.kubernetes "pod" {
  role = "pod"
  // Restrict to pods on the node to reduce cpu & memory usage
  selectors {
    role = "pod"
    field = "spec.nodeName=" + coalesce(sys.env("HOSTNAME"), constants.hostname)
  }
}

// discovery.relabel rewrites the label set of the input targets by applying one or more relabeling rules.
// If no rules are defined, then the input targets are exported as-is.
discovery.relabel "pod_logs" {
  targets = discovery.kubernetes.pod.targets

  // Label creation - "namespace" field from "__meta_kubernetes_namespace"
  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    action = "replace"
    target_label = "namespace"
  }

  // Label creation - "pod" field from "__meta_kubernetes_pod_name"
  rule {
    source_labels = ["__meta_kubernetes_pod_name"]
    action = "replace"
    target_label = "pod"
  }

  // Label creation - "container" field from "__meta_kubernetes_pod_container_name"
  rule {
    source_labels = ["__meta_kubernetes_pod_container_name"]
    action = "replace"
    target_label = "container"
  }

  // Label creation -  "app" field from "__meta_kubernetes_pod_label_app_kubernetes_io_name"
  rule {
    source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
    action = "replace"
    target_label = "app"
  }

  // Label creation -  "job" field from "__meta_kubernetes_namespace" and "__meta_kubernetes_pod_container_name"
  // Concatenate values __meta_kubernetes_namespace/__meta_kubernetes_pod_container_name
  rule {
    source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_name"]
    action = "replace"
    target_label = "job"
    separator = "/"
    replacement = "$1"
  }

  // Label creation - "__path__" field from "__meta_kubernetes_pod_uid" and "__meta_kubernetes_pod_container_name"
  // Concatenate values __meta_kubernetes_pod_uid/__meta_kubernetes_pod_container_name.log
  rule {
    source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
    action = "replace"
    target_label = "__path__"
    separator = "/"
    replacement = "/var/log/pods/*$1/*.log"
  }

  // Label creation -  "container_runtime" field from "__meta_kubernetes_pod_container_id"
  rule {
    source_labels = ["__meta_kubernetes_pod_container_id"]
    action = "replace"
    target_label = "container_runtime"
    regex = `^(\S+):\/\/.+$`
    replacement = "$1"
  }
}

// loki.source.kubernetes tails logs from Kubernetes containers using the Kubernetes API.
loki.source.kubernetes "pod_logs" {
  targets    = discovery.relabel.pod_logs.output
  forward_to = [loki.process.pod_logs.receiver]
}

// loki.process receives log entries from other Loki components, applies one or more processing stages,
// and forwards the results to the list of receivers in the component's arguments.
loki.process "pod_logs" {
  stage.static_labels {
      values = {
        cluster = "<CLUSTER_NAME>",
      }
  }

  forward_to = [loki.write.<WRITE_COMPONENT_NAME>.receiver]
}

Replace the following values:

  • <CLUSTER_NAME>: The label for this specific Kubernetes cluster, such as production or us-east-1.
  • <WRITE_COMPONENT_NAME>: The name of your loki.write component, such as default.

Tip

Use raw strings delimited by backticks for regex values. Raw strings don’t process escape sequences, so patterns like \S and \d work without double escaping. Double-quoted strings require escaping, for example regex = "\\S" instead of regex = "\S". If you forget to escape, you get unknown escape sequence errors.
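
For example, the following two regex attributes match the same pattern:

Alloy
// Raw string: backslashes pass through unchanged.
regex = `^(\S+)://.+$`

// Double-quoted string: every backslash must be doubled.
regex = "^(\\S+)://.+$"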

Note

Refer to the discovery.kubernetes documentation for more information about limiting collection to Pods on the same node.

Kubernetes Cluster Events

Use the following components to collect Kubernetes cluster events:

  • loki.source.kubernetes_events
  • loki.process
  • loki.write

The following example collects Kubernetes cluster events:

Alloy
// loki.source.kubernetes_events tails events from the Kubernetes API and converts them
// into log lines to forward to other Loki components.
loki.source.kubernetes_events "cluster_events" {
  job_name   = "integrations/kubernetes/eventhandler"
  log_format = "logfmt"
  forward_to = [
    loki.process.cluster_events.receiver,
  ]
}

// loki.process receives log entries from other loki components, applies one or more processing stages,
// and forwards the results to the list of receivers in the component's arguments.
loki.process "cluster_events" {
  forward_to = [loki.write.<WRITE_COMPONENT_NAME>.receiver]

  stage.static_labels {
    values = {
      cluster = "<CLUSTER_NAME>",
    }
  }

  stage.labels {
    values = {
      kubernetes_cluster_events = "job",
    }
  }
}

Replace the following values:

  • <CLUSTER_NAME>: The label for this specific Kubernetes cluster, such as production or us-east-1.
  • <WRITE_COMPONENT_NAME>: The name of your loki.write component, such as default.
