Grafana Cloud

Configure Application Load Balancer logs

Send AWS Application Load Balancer access logs from Amazon S3 to AWS observability in Grafana Cloud Provider using the Lambda Promtail function.

For label reference and other S3-based AWS logs, refer to Logs with Lambda. For EventBridge templates, SQS, and relabeling, refer to the Lambda Promtail client documentation.

Before you begin

Before configuring Logs with Lambda, make sure you have:

  • A Grafana Cloud account with a Grafana Cloud stack.
  • The Cloud Provider AWS Writer app plugin role or Grafana Cloud Admin basic role.
  • An AWS account with permissions to create Lambda functions, IAM roles, and either Amazon CloudWatch subscription filters or S3 notifications in the same AWS Region as your log source.
  • The AWS CLI configured if you plan to use Terraform.
  • The Grafana Cloud Configuration details values for Loki (Grafana Cloud Loki write address and username, and a Grafana Cloud access policy token), available on the Application Load Balancer Logs page in Grafana Cloud Provider.

Before you configure AWS Application Load Balancer (ALB) logs with Lambda Promtail, navigate to the Application Load Balancer Logs page in Grafana Cloud Provider, which includes the steps and key values you need.

To navigate to the Application Load Balancer Logs page:

  1. Open your Grafana Cloud portal.
  2. Expand Observability > Cloud provider > AWS in the main menu of your Grafana Cloud stack.
  3. Click the Configuration tab.
  4. Click the Application Load Balancer Logs tile.

Then continue with CloudFormation or Terraform.

Configure with CloudFormation

The provided CloudFormation template can get you started quickly and requires little additional configuration. CloudFormation is a great option for AWS-native infrastructure definitions.

Before you begin

Before configuring logs using CloudFormation, make sure you have:

  • ALB access logging enabled and writing objects to an S3 bucket in the same AWS region where Lambda Promtail runs.
  • A separate S3 bucket in that region to store the Lambda Promtail zip artifact.

Upload Lambda Promtail

Before launching the CloudFormation stack, upload the Lambda Promtail compressed binary file to an AWS S3 bucket in the same region as the S3 bucket that contains your ALB logs.

To upload the Lambda Promtail compressed binary to your S3 bucket:

  1. On the Application Load Balancer Logs page in Grafana Cloud Provider, click Use CloudFormation.

  2. Copy the Lambda Promtail upload command.

  3. Replace YOUR-BUCKET-NAME and YOUR-REGION-NAME in the command, then run it against the artifact bucket in your region.

    This command uploads the compressed Lambda Promtail build to an S3 bucket in the same AWS region where the Lambda function runs and where your ALB access logs are stored.
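The exact command appears on the Application Load Balancer Logs page. As a sketch, assuming the same grafanalabs-cf-templates source path the Terraform example on this page uses, and with placeholder bucket and region names, it looks like this:

```shell
# Copy the published lambda-promtail build into your artifact bucket.
# YOUR-BUCKET-NAME and YOUR-REGION-NAME are placeholders; the source path
# is an assumption based on the Terraform snippet elsewhere on this page.
aws s3 cp \
  s3://grafanalabs-cf-templates/lambda-promtail/lambda-promtail.zip \
  s3://YOUR-BUCKET-NAME/lambda-promtail.zip \
  --region YOUR-REGION-NAME
```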

Create a Grafana Cloud access policy token

Create a Grafana Cloud access policy token with the required permissions, so Lambda Promtail can authenticate to Grafana Cloud.

To generate a Grafana Cloud access policy token with the required permissions:

  1. On the Application Load Balancer Logs page in Grafana Cloud Provider, in the API token name field, enter a name for the token.

  2. Select an Expiration date.

    We recommend you select a specific expiration date and do not set the Expiration date to No expiry, as this can create a security vulnerability.

  3. Click Create token.

  4. Copy the generated key and store it securely.

Enable Amazon EventBridge on the access log bucket

The Lambda Promtail client must run when new log objects appear. If you use EventBridge (recommended for many CloudFormation layouts because it avoids circular dependencies between the bucket notification and the Lambda resource), enable S3 Event Notifications to EventBridge on the access logs bucket.
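One way to enable this, sketched with the AWS CLI and a placeholder bucket name, is to set an EventBridge notification configuration on the bucket; an empty EventBridgeConfiguration object turns on delivery of all S3 events to EventBridge:

```shell
# Enable delivery of S3 events from the access logs bucket to EventBridge.
# my-access-logs-bucket is a placeholder; replace it with your bucket name.
aws s3api put-bucket-notification-configuration \
  --bucket my-access-logs-bucket \
  --notification-configuration '{"EventBridgeConfiguration": {}}'
```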

Launch the CloudFormation stack

After you have uploaded the Lambda Promtail binary file to your AWS S3 bucket, generated a Grafana Cloud access policy token, and enabled Amazon EventBridge on the access log bucket, you can launch and customize the CloudFormation stack for Application Load Balancer logs.

To launch and customize the CloudFormation stack for Application Load Balancer logs:

  1. Click Launch stack to open the AWS CloudFormation console.
  2. Complete the stack parameters, including the S3 bucket that contains ALB access logs so the function can read new objects.

Configure with Terraform

The provided Terraform sample code snippets can get you started quickly and require little additional configuration. Terraform is a great option for repeatable infrastructure-as-code deployments with support for arrays of log groups, buckets, and network settings.

To configure AWS Application Load Balancer logs with Lambda using Terraform, on the Application Load Balancer Logs page in Grafana Cloud Provider, click Use Terraform.

Create a Grafana Cloud access policy token

Create a Grafana Cloud access policy token with the required permissions so Lambda Promtail can authenticate to Grafana Cloud.

To generate a Grafana Cloud access policy token with the required permissions:

  1. On the Application Load Balancer Logs page in Grafana Cloud Provider, in the API token name field, enter a name for the token.

  2. Select an Expiration date.

    We recommend you select a specific expiration date and do not set the Expiration date to No expiry, as this can create a security vulnerability.

  3. Click Create token.

  4. Copy the generated key and store it securely.

Terraform setup

Configure the AWS CLI for the region where the ALB logs bucket, artifact bucket, and Lambda function reside.
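For example, assuming the default AWS CLI profile, the region can be set with:

```shell
# Point the default AWS CLI profile at the region that holds the buckets
# and the Lambda function (us-west-2 here is only an example).
aws configure set region us-west-2
```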

  1. Copy the following Terraform snippet into a main.tf file (or merge into your module):

    hcl
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_s3_object_copy" "lambda_promtail_zipfile" {
      bucket = var.s3_bucket
      key    = var.s3_key
      source = "grafanalabs-cf-templates/lambda-promtail/lambda-promtail.zip"
    }
    
    resource "aws_iam_role" "lambda_promtail_role" {
      name = "GrafanaLabsALBLogsIntegration"
    
      assume_role_policy = jsonencode({
        "Version" : "2012-10-17",
        "Statement" : [
          {
            "Action" : "sts:AssumeRole",
            "Principal" : {
              "Service" : "lambda.amazonaws.com"
            },
            "Effect" : "Allow"
          }
        ]
      })
    }
    
    resource "aws_iam_role_policy" "lambda_promtail_policy_alb_logs" {
      name = "alb-logs"
      role = aws_iam_role.lambda_promtail_role.name
      policy = jsonencode({
        "Statement" : [
          {
            "Action" : [
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
            ],
            "Effect" : "Allow",
            "Resource" : "arn:aws:logs:*:*:*"
          },
          {
            "Action" : [
              "s3:GetObject"
            ],
            "Effect" : "Allow",
            "Resource" : format("arn:aws:s3:::%s/*", var.access_logs_s3_bucket)
          }
        ]
      })
    }
    
    resource "aws_lambda_function" "lambda_promtail" {
      function_name = "GrafanaCloudLambdaPromtail"
      role          = aws_iam_role.lambda_promtail_role.arn
    
      timeout     = 60
      memory_size = 128
    
      handler   = "main"
      runtime   = "provided.al2023"
      s3_bucket = var.s3_bucket
      s3_key    = var.s3_key
    
      environment {
        variables = {
          WRITE_ADDRESS = var.write_address
          USERNAME      = var.username
          PASSWORD      = var.password
          BATCH_SIZE    = var.batch_size
          EXTRA_LABELS  = var.extra_labels
        }
      }
    
      depends_on = [
        aws_s3_object_copy.lambda_promtail_zipfile,
        aws_iam_role_policy.lambda_promtail_policy_alb_logs
      ]
    }
    
    resource "aws_lambda_function_event_invoke_config" "lambda_promtail_invoke_config" {
      function_name          = aws_lambda_function.lambda_promtail.function_name
      maximum_retry_attempts = 2
    }
    
    resource "aws_lambda_permission" "lambda_promtail_allow_s3" {
      statement_id  = "lambda-promtail-allow-s3"
      action        = "lambda:InvokeFunction"
      function_name = aws_lambda_function.lambda_promtail.function_name
      principal     = "s3.amazonaws.com"
      # Restrict invocation to notifications from the access logs bucket.
      source_arn    = format("arn:aws:s3:::%s", var.access_logs_s3_bucket)
    }
    
    resource "aws_s3_bucket_notification" "bucket_notification" {
      bucket = var.access_logs_s3_bucket
    
      lambda_function {
        lambda_function_arn = aws_lambda_function.lambda_promtail.arn
        events              = ["s3:ObjectCreated:*"]
      }
    
      depends_on = [
        aws_lambda_permission.lambda_promtail_allow_s3
      ]
    }
    
    output "lambda_arn" {
      value       = aws_lambda_function.lambda_promtail.arn
      description = "ARN of the Lambda function that runs lambda-promtail."
    }

    Note

    Putting aws_s3_bucket_notification and the Lambda function in the same Terraform apply can fail on first run because of AWS ordering rules. If that happens, apply again, or use Amazon EventBridge for S3 events as described in the Lambda Promtail client documentation.

  2. Copy the following Terraform snippet into variables.tf:

    hcl
    variable "aws_region" {
      type        = string
      description = "AWS Region for all resources in this example."
      default     = "us-west-2"
    }
    
    variable "write_address" {
      type        = string
      description = "Grafana Cloud Loki push URL."
      default     = ""
    }
    
    variable "username" {
      type        = string
      description = "Basic auth username for Grafana Cloud Loki."
      default     = ""
    }
    
    variable "password" {
      type        = string
      description = "Basic auth password for Grafana Cloud Loki (Grafana.com API key)."
      sensitive   = true
      default     = ""
    }
    
    variable "s3_bucket" {
      type        = string
      description = "Bucket that holds lambda-promtail.zip."
      default     = ""
    }
    
    variable "s3_key" {
      type        = string
      description = "Object key for lambda-promtail.zip."
      default     = "lambda-promtail.zip"
    }
    
    variable "extra_labels" {
      type        = string
      description = "Comma-separated pairs: name1,value1,name2,value2,..."
      default     = ""
    }
    
    variable "batch_size" {
      type        = string
      description = "Flush threshold in bytes."
      default     = ""
    }
    
    variable "access_logs_s3_bucket" {
      type        = string
      description = "S3 bucket where ALB access logs are written."
      default     = ""
    }
  3. On the Application Load Balancer Logs page in Grafana Cloud Provider, copy the Grafana Cloud Loki write address, the Grafana Cloud Loki username, and your Grafana Cloud access policy token into a Terraform variables file (for example grafana.auto.tfvars).

    Warning

    Do not commit secrets to version control.

    hcl
    write_address         = "https://logs-prod-...grafana.net/loki/api/v1/push" // Grafana Cloud Loki write address
    username              = "LOKI_USERNAME"                                     // Grafana Cloud Loki username
    password              = "ACCESS_POLICY_TOKEN"                               // Grafana Cloud access policy token
    s3_bucket             = "my-lambda-artifacts"
    access_logs_s3_bucket = "my-access-logs-bucket"
  4. Update access_logs_s3_bucket and s3_bucket to the correct bucket names.

  5. Initialize and apply using the following commands:

    Bash
    terraform init
    terraform apply

For VPC configuration and larger examples, refer to the example main.tf in the Lambda Promtail repository.
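As the note above mentions, wiring the S3 bucket notification and the Lambda function in a single apply can hit AWS ordering rules. A hedged sketch of the EventBridge alternative, assuming the resource and variable names from the main.tf snippet above (refer to the Lambda Promtail client documentation for the exact event shape it expects), replaces the direct Lambda target with an EventBridge rule:

```hcl
# Sketch of the EventBridge alternative: send S3 events to EventBridge and
# let a rule invoke lambda-promtail, instead of pointing the bucket
# notification directly at the function. Names assume the main.tf above.
resource "aws_s3_bucket_notification" "eventbridge_notification" {
  bucket      = var.access_logs_s3_bucket
  eventbridge = true
}

resource "aws_cloudwatch_event_rule" "alb_log_objects" {
  name = "lambda-promtail-alb-log-objects"
  event_pattern = jsonencode({
    source        = ["aws.s3"]
    "detail-type" = ["Object Created"]
    detail = {
      bucket = { name = [var.access_logs_s3_bucket] }
    }
  })
}

resource "aws_cloudwatch_event_target" "invoke_lambda_promtail" {
  rule = aws_cloudwatch_event_rule.alb_log_objects.name
  arn  = aws_lambda_function.lambda_promtail.arn
}

resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "lambda-promtail-allow-eventbridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda_promtail.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.alb_log_objects.arn
}
```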

Explore logs

After you have connected your AWS Application Load Balancer logs to Grafana Cloud using Lambda Promtail, you can start exploring.

To explore your logs in Grafana Cloud:

  1. On the Application Load Balancer Logs page in Grafana Cloud Provider, click Go to Explore.

  2. In Explore, use a LogQL selector such as:

    {__aws_log_type=~"s3_lb"}

Labels for this workflow

Load balancer access logs include the following labels: __aws_s3_log_lb and __aws_s3_log_lb_owner. Extra labels use the ExtraLabels CloudFormation parameter or the EXTRA_LABELS environment variable. For details, refer to Labels in Logs with Lambda.
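For example, to narrow the query to a single load balancer by name, the labels above can be combined in a selector; my-alb is a placeholder for your load balancer name:

```logql
{__aws_log_type="s3_lb", __aws_s3_log_lb="my-alb"}
```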