
Configure Logs with Firehose

Logs with Firehose uses Amazon Data Firehose and a minimal set of supporting infrastructure to deliver logs to the ingestion pipeline within Grafana Cloud.

Before you begin

You need the following information to complete configuration successfully:

  • Target endpoint URL: The endpoint for your Grafana Cloud stack
  • Loki User: The numeric value of the User field

To obtain these items, perform the following steps:

  1. Navigate to your Grafana Cloud portal.
  2. Select your Grafana Cloud stack.
  3. Locate the Loki tile, and click Details.
  4. Copy and save the values in the URL field (which is the target endpoint) and in the User field for use in future steps.
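
For example, the copied values might look like the following. These are illustrative placeholders, not real stack values; keeping them in shell variables is optional but makes the later command-line steps easier to follow:

Bash
# Illustrative placeholders; use the URL and User values from your own Loki tile
export LOKI_URL="https://logs-prod3.grafana.net"
export LOKI_USER="123456"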

Set up required authentication

When Grafana Cloud receives logs from AWS, the access policy and its associated authentication token enable Grafana to:

  • Authenticate the request.
  • Determine which customer the data is coming from so Grafana can store it appropriately in Loki.

To create an access policy for your organization in the Grafana Cloud portal, follow the steps in Create an access policy for an organization or multiple stacks, noting the following:

  1. In step 8, in the Scopes section, select Write for logs to allow writing logs to your account.
  2. After you create the policy, click Add token to generate a token to authenticate the provider with the Cloud Provider API.
  3. Give your token an appropriate name and select an Expiration date. We recommend you select a specific expiration date and do not set the Expiration date to No expiry, as this can create a security vulnerability.
  4. Click Copy to clipboard and save the new token. In a future step, you replace LOGS_WRITE_TOKEN with this token.
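
If you plan to run the later command-line steps from a shell, you can keep the token in an environment variable. This is only an illustrative convenience, not a requirement:

Bash
# Placeholder; paste the token you copied from the Cloud portal
export LOGS_WRITE_TOKEN="<your-logs-write-token>"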

Set up your AWS account

You must create the following infrastructure in your AWS account to send logs to Grafana Cloud:

  • Authentication components
  • A component to route logs into the delivery stream
  • The Data Firehose delivery stream
Diagram: infrastructure created in the AWS account

You can create this infrastructure with Terraform or CloudFormation.

Set up with Terraform

  1. Download the Terraform snippet file.

  2. Open the snippet file, and complete the sections labeled with FILLME as shown in the following example:

    terraform
    provider "aws" {
      // FILLME: AWS region
      region = "us-east-2"

      // FILLME: local AWS profile to use
      profile = "test-profile"
    }
  3. Run terraform apply, including the required variables as shown in the following example:

    Bash
    terraform apply \
        -var="fallback_bucket_name=<Name for an s3 bucket, to save data in case of failures>" \
        -var="firehose_stream_name=<Kinesis stream name>" \
        -var="target_endpoint=<Target AWS Logs endpoint provided by Grafana>" \
        -var="logs_instance_id=<Loki User>" \
        -var="logs_write_token=<Token you created for Data Firehose>"
    • fallback_bucket_name: The name of an S3 bucket where logs can be stored in case the delivery fails
    • firehose_stream_name: A meaningful name for the Data Firehose stream
    • target_endpoint: The target endpoint URL for your Grafana Cloud instance. Add the prefix aws-. For example, if your Loki URL is https://logs-prod3.grafana.net, then your Logs with Firehose URL will be https://aws-logs-prod3.grafana.net/aws-logs/api/v1/push.
    • logs_instance_id: The numeric value of the User field of the Loki data source
    • logs_write_token: The token you created for Data Firehose
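
The following is a hypothetical filled-in invocation: the bucket and stream names are illustrative, the endpoint reuses the example URL from above, and the token is read from the LOGS_WRITE_TOKEN variable exported earlier:

Bash
terraform apply \
    -var="fallback_bucket_name=my-aws-logs-fallback" \
    -var="firehose_stream_name=grafana-aws-logs" \
    -var="target_endpoint=https://aws-logs-prod3.grafana.net/aws-logs/api/v1/push" \
    -var="logs_instance_id=123456" \
    -var="logs_write_token=$LOGS_WRITE_TOKEN"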

Set up with CloudFormation

  1. Download the CloudFormation snippet file.

  2. Run the following aws cloudformation create-stack command, including the required variables:

    Bash
    aws cloudformation create-stack --stack-name grafana-aws-logs-firehose --template-body file://./aws-logs-firehose.yaml --capabilities CAPABILITY_NAMED_IAM \
    --parameters \
        ParameterKey=FallbackS3BucketName,ParameterValue=aws-logs-fallback \
        ParameterKey=FirehoseStreamName,ParameterValue=grafana-aws-logs \
        ParameterKey=TargetEndpoint,ParameterValue=TARGET_ENDPOINT \
        ParameterKey=LogsInstanceID,ParameterValue=LOKI_USER \
        ParameterKey=LogsWriteToken,ParameterValue=LOGS_WRITE_TOKEN \
        ParameterKey=SubscriptionFilter,ParameterValue=subscriptionFilter
    • FallbackS3BucketName: The name of an S3 bucket where logs can be stored in case the delivery fails
    • FirehoseStreamName: A meaningful name for the Data Firehose stream
    • TargetEndpoint: The target endpoint URL for your Grafana Cloud instance. Add the prefix aws-. For example, if your Loki URL is https://logs-prod3.grafana.net, then your Logs with Firehose URL will be https://aws-logs-prod3.grafana.net/aws-logs/api/v1/push.
    • LogsInstanceID: The numeric value of the User field of the Loki data source
    • LogsWriteToken: The token you created for Data Firehose
    • SubscriptionFilter: The name of the subscription filter from which to pull logs

The CloudFormation template is designed to ship logs from a single Amazon CloudWatch LogGroup. Since a single Amazon Data Firehose Delivery Stream can be used for sending logs from many CloudWatch LogGroups, we suggest you edit the template and define as many Subscription Filters as needed, referencing the same delivery stream.
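
For example, a sketch with two subscription filters sharing one delivery stream might look like the following. The FirehoseStream and CloudWatchLogsRole resource names and the log group names are illustrative; match them to the names used in the downloaded template:

yaml
# Hypothetical sketch: two CloudWatch log groups feeding the same delivery stream
SubscriptionFilterServiceA:
  Type: AWS::Logs::SubscriptionFilter
  Properties:
    DestinationArn: !GetAtt FirehoseStream.Arn
    FilterPattern: ""
    LogGroupName: "/aws/lambda/service-a"
    RoleArn: !GetAtt CloudWatchLogsRole.Arn

SubscriptionFilterServiceB:
  Type: AWS::Logs::SubscriptionFilter
  Properties:
    DestinationArn: !GetAtt FirehoseStream.Arn
    FilterPattern: ""
    LogGroupName: "/ecs/service-b"
    RoleArn: !GetAtt CloudWatchLogsRole.Arn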

Set up CloudWatch subscription filter

The CloudWatch subscription filter:

  • Reads logs from a selected CloudWatch log group.
  • Optionally filters the logs.
  • Sends the logs to the Data Firehose stream.

Configure with AWS CLI

Use the following example to create a subscription filter with the AWS CLI:

Bash
aws logs put-subscription-filter \
  --log-group-name "<log group name to send logs from>" \
  --filter-name "<Name of the subscription filter>" \
  --filter-pattern "<Optional filter expression>" \
  --destination-arn "<ARN of the Kinesis firehose stream created above>" \
  --role-arn "<ARN of the IAM role created for sending logs above>"
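
A filled-in call might look like the following; the log group, filter name, and ARNs are hypothetical placeholders:

Bash
aws logs put-subscription-filter \
  --log-group-name "/aws/lambda/example-function" \
  --filter-name "grafana-aws-logs-filter" \
  --filter-pattern "" \
  --destination-arn "arn:aws:firehose:us-east-2:111122223333:deliverystream/grafana-aws-logs" \
  --role-arn "arn:aws:iam::111122223333:role/CloudWatchLogsToFirehose"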

Configure with Terraform

Use the following example to configure an AWS subscription filter with Terraform, and include the required variables:

terraform
resource "aws_cloudwatch_log_subscription_filter" "filter" {
  name            = "filter_name"
  role_arn        = aws_iam_role.logs.arn
  log_group_name  = "/aws/lambda/example_lambda_name"
  filter_pattern  = "" // Optional: Filter expression
  destination_arn = aws_kinesis_firehose_delivery_stream.main.arn
  distribution    = "ByLogStream"
}
  • name: A meaningful name for the subscription filter
  • role_arn: The ARN of the IAM role created in the previous step for sending logs
  • log_group_name: The log group from which the logs should be sent
  • filter_pattern: An optional filter expression
  • destination_arn: The ARN of the Firehose delivery stream created in the previous snippet

Configure with CloudFormation

Use the following example to configure an AWS subscription filter with CloudFormation, and include the required variables:

yaml
SubscriptionFilter:
  Type: AWS::Logs::SubscriptionFilter
  Properties:
    DestinationArn: "<Firehose delivery stream ARN>"
    FilterPattern: ""
    LogGroupName: "/aws/lambda/test-lambda"
    RoleArn: "<IAM Role for sending logs ARN, created in the steps above>"
  • DestinationArn: The ARN of the Data Firehose delivery stream, created in the previous steps
  • FilterPattern: An optional filter expression
  • LogGroupName: The log group from which the logs should be sent
  • RoleArn: The ARN of the IAM role for sending logs, created in the previous steps

Set static labels in Terraform

Setting static labels offers you an additional way to filter and group your logs in Grafana Cloud. Set static labels using the X-Amz-Firehose-Common-Attributes header: add common_attributes sub-blocks to the request_configuration block of the Amazon Data Firehose delivery stream configuration, as in the following example:

tf
http_endpoint_configuration {
  url        = var.target_endpoint
  name       = "Grafana AWS Logs Destination"
  access_key = format("%s:%s", var.logs_instance_id, var.logs_write_token)

  // Buffer incoming data to the specified size, in MBs, before delivering it to the destination
  buffering_size = 1

  // Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination
  //
  // Setting to 1 minute to keep a low enough latency between log production and actual time they are processed in Loki
  buffering_interval = 60

  role_arn       = aws_iam_role.firehose.arn
  s3_backup_mode = "FailedDataOnly"

  request_configuration {
    content_encoding = "GZIP"

    common_attributes {
      name  = "lbl_testname"
      value = "testvalue"
    }
    common_attributes {
      // static label names must be prefixed with `lbl_`
      name  = "lbl_testname2"
      // label names and values must be compatible with the Prometheus data model specification
      value = "testvalue2"
    }
  }
  ...
}

Label names must be prefixed with lbl_ and the label names and values must be compatible with the Prometheus data model specification.

When you query in Grafana for the static labels, do not include the lbl_ prefix, as in the following example query:

{job=~"cloud/aws/.+", label1="value1", label2="value2"}
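
For instance, using the example common_attributes from the Terraform block above (lbl_testname and lbl_testname2), the equivalent query would be:

{job=~"cloud/aws/.+", testname="testvalue", testname2="testvalue2"}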