Collect logs with Promtail or Grafana Agent

Collect logs with either Promtail or the Grafana Agent and push them to Grafana Cloud

Loki is the main server, responsible for storing logs and processing queries. Grafana Cloud includes Loki, so you don’t need to install Loki yourself; you only need to configure some settings within Grafana Cloud so that logs are aggregated and stored correctly. This log storage is what powers both visualization and querying.

Promtail is the agent, developed by the Loki team, with releases that correspond to Loki releases. The Grafana Agent includes Promtail capabilities.

Collecting logs into Grafana Cloud is a two-step process. You must:

  • Configure Grafana Cloud
  • Install and configure a way to send logs; these are the options covered on this page:
    • Install Promtail and send logs from a single node
    • Install Promtail and send logs from a Kubernetes cluster
    • Configure the Grafana Agent to send logs from a single node
    • Configure the Grafana Agent to send logs from a Kubernetes cluster

Step 1. Configure Grafana Cloud to receive the logs

To begin, create a Grafana Cloud API key with the MetricsPublisher role. Save the key; you will need it in a later step.

Open Grafana Cloud. In the side menu, from Settings (the gear icon), select Data Sources.

On the Configuration page that opens, make sure you are on the Data Sources tab, then click Add data source.

From the list of options, select Loki. Keep this page open in a browser tab.

Open a different browser tab and open Grafana Cloud. In this tab:

  1. In the side menu, from Onboarding (the lightning bolt icon), select Walkthrough.
  2. Find and select Loki, scroll down and click Next step.
  3. Click Send Logs. In the Loki box, click Details.
  4. Find the listed settings for your organization.

Go back to the previous tab, where you were configuring the Loki data source. Enter the Name, URL, and User values you found, enter the API key you created earlier as the password, and check the Basic Auth box.

Save the data source. Before moving on to Step 2, you can optionally check your credentials from the command line, as shown below.
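This is a minimal sanity check, assuming your stack uses the logs-prod-us-central1.grafana.net URL that appears in the examples later on this page; substitute the URL you found above if yours differs.

# Query the Loki labels endpoint with basic auth. A JSON response listing
# label names (even an empty list) means the user number and API key are valid.
curl -u "<User>:<Your Grafana.com API Key>" https://logs-prod-us-central1.grafana.net/loki/api/v1/labels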

Step 2. Option 1. Install and configure Promtail to send logs

There are two options listed here for configuring Promtail:

  • Install and send logs from a single node
  • Install and send logs from a Kubernetes cluster

Option A. Install Promtail and send logs from a single node

Before you begin, you must create a configuration file. Our example is a YAML file on Linux called config.yaml, saved in /etc/promtail/. Use these contents, replacing <Your Grafana.com API Key> with your newly-created API key from Step 1 and <User> with the user number you found while creating the Loki data source in Grafana Cloud.

server:
  http_listen_port: 0
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: https://<User>:<Your Grafana.com API Key>@logs-prod-us-central1.grafana.net/loki/api/v1/push

scrape_configs:
- job_name: system
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs
      __path__: /var/log/*.log

Loki is open source, and so is its agent, Promtail. You can open the releases page of the official Loki GitHub repo to find and download the latest Promtail binary for your system architecture, but it is easier to install and run it with Docker, like this:

docker run --name promtail --volume "/etc/promtail:/etc/promtail" --volume "/var/log:/var/log" grafana/promtail:latest -config.file=/etc/promtail/config.yaml
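After the container starts, you can confirm that Promtail found its targets and is not reporting authentication errors by tailing its own output:

# Follow Promtail's output; look for the varlogs job being scraped and for
# any errors while pushing to the Grafana Cloud Loki endpoint.
docker logs -f promtail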

Option B. Install Promtail and send logs from a Kubernetes cluster

You won’t create a YAML file with this method. Instead, you insert your newly-created API key from Step 1 directly into the command. In the command, replace <Your Grafana.com API Key> with that key and <User> with the user number you found while creating the Loki data source in Grafana Cloud.

curl -fsS https://raw.githubusercontent.com/grafana/loki/master/tools/promtail.sh | sh -s <User> <Your Grafana.com API Key> logs-prod-us-central1.grafana.net default | kubectl apply --namespace=default -f -
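The script applies Promtail resources in the namespace you passed (default here). As a quick check that the pods are running and shipping logs, you can list them and tail one; the grep pattern assumes the generated resources include promtail in their names:

# List the Promtail pods, then follow the output of one of them.
kubectl get pods --namespace=default | grep promtail
kubectl logs --namespace=default -f <one-of-the-promtail-pods>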

Step 2. Option 2. Configure the Grafana Agent to send logs

This option covers configuring the Grafana Agent to send logs and assumes you are collecting logs from a Linux host.

There are two options listed here for configuring the Grafana Agent:

  • Install and send logs from a single node
  • Install and send logs from a Kubernetes cluster

Option A. Install the agent on a single node and send logs

The contents of this section come from the Gathering logs from a Linux host using the Grafana Agent quickstart.

Because your Linux machine is already running the agent, you can configure it to send logs alongside whatever metrics it already sends by modifying the agent configuration YAML file.

The agent configuration is stored in /etc/grafana-agent.yaml. Open the file and add this new section below the Prometheus section (if it exists) and the Integrations section (created when you installed an integration). The new section should start at the root level of indentation (all the way at the left margin of the file).

Use these contents, replacing <Your Grafana.com API Key> with your newly-created API key from Step 1 and <User> with the user number you found while creating the Loki data source in Grafana Cloud. The URL in our sample is for most US-based customers; yours may differ, so use the URL you found while configuring Grafana Cloud in the previous section.

loki:
  configs:
  - name: default
    positions:
      filename: /tmp/positions.yaml
    scrape_configs:
      - job_name: varlogs
        static_configs:
          - targets: [localhost]
            labels:
              job: varlogs
              __path__: /var/log/*log
    clients:
      - url: https://logs-prod-us-central1.grafana.net/loki/api/v1/push
        basic_auth:
          username: <User>
          password: <Your Grafana.com API Key>

This example scrapes all files in /var/log whose names end in log and sends their contents to Grafana Cloud. The entries are labeled with varlogs as both the job label and the job_name.

NOTE: Read Loki label best practices to learn how to use labels effectively for the best experience.
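For instance, assuming a host you want to identify as webserver-01, you could attach an extra low-cardinality host label to the varlogs job by adding it alongside the existing labels:

      - job_name: varlogs
        static_configs:
          - targets: [localhost]
            labels:
              job: varlogs
              host: webserver-01   # hypothetical extra label, shown as an example
              __path__: /var/log/*log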

You can add additional entries for logs in other locations or with other filenames. For example, here is one for dmesg, which you would add as another job under the scrape_configs: section, before the clients: section.

      - job_name: dmesg
        static_configs:
          - targets: [localhost]
            labels:
              job: dmesg
              __path__: /var/log/dmesg

Here is another example, which scrapes logs for a Minecraft server whose logs are stored in a subdirectory of the home directory of a dedicated minecraft user.

      - job_name: minecraftlog
        static_configs:
          - targets: [localhost]
            labels:
              job: minecraft
              __path__: /home/MCuser/minecraft/logs/latest.log

Any time you change the agent configuration, you must restart the agent for the new configuration to take effect:

sudo systemctl restart grafana-agent.service
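After the restart, it is worth confirming that the agent came back up cleanly and is not reporting errors. These are standard systemd commands, not specific to the agent:

# Check that the service restarted without errors.
sudo systemctl status grafana-agent.service

# Follow the agent's own log output and watch for errors pushing to Loki.
sudo journalctl -u grafana-agent.service -f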

Option B. Install the agent and send logs from a Kubernetes cluster

You won’t create a YAML file with this method. Instead, you insert your newly-created API key from Step 1 directly into the command. In the command, replace <Your Grafana.com API Key> with that key and <User> with the user number you found while creating the Loki data source in Grafana Cloud.

NAMESPACE="default" /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/grafana/agent/release/production/kubernetes/install-loki.sh)" | kubectl apply -f -
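You can check that the agent pods created by the manifest are running; the exact resource names come from the generated manifest, so adjust the grep pattern if grafana-agent does not match anything:

kubectl get pods --namespace=default | grep grafana-agent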

Querying logs and creating panels

Once you have Promtail or the Grafana Agent up and running on your log source, give it some time to start collecting logs. You will then be able to query logs and create panels inside dashboards using Loki as a data source.

Logs are queried using LogQL, which you can use both in Explore and when creating dashboard panels.
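For example, the configurations above label log entries with job: varlogs, so two simple queries you could try in Explore against the Loki data source you created in Step 1 are:

{job="varlogs"}
{job="varlogs"} |= "error"

The first query returns every line scraped from /var/log; the second keeps only the lines that contain the string "error".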