---
title: "Configure Logs with Firehose | Grafana Cloud documentation"
description: "Configuration steps for AWS Logs with Firehose"
---

# Configure logs with Amazon Data Firehose

Sending AWS logs with [Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) requires minimal infrastructure to deliver logs to the ingestion pipeline within Grafana Cloud.

## Before you begin

You need the following information to complete configuration successfully:

- **Grafana Cloud AWS Logs Ingest URL:** The URL you configure in Data Firehose to send logs to your Grafana Cloud instance.
- **Loki User:** The numeric value of the **User** field of your Loki data source.

To obtain these items, perform the following steps:

1. Navigate to your [Grafana Cloud portal](/docs/grafana-cloud/account-management/cloud-portal/).
2. Select your Grafana Cloud stack.
3. Locate the Loki tile, and click **Details**.
   
   ![Loki details button](/media/docs/grafana-cloud/aws/screenshot-loki-details-button.png)
4. Copy and save the value in the **URL** field (which can be used to derive the Grafana Cloud AWS Logs Ingest URL).
   
   ![Loki data source settings](/media/docs/grafana-cloud/aws/screenshot-datasource-settings.png)
5. Derive your Grafana Cloud AWS Logs Ingest URL from your Loki URL using the following steps:
   
   1. Prepend `aws-` to the hostname of the value in the URL field. The hostname of this value is the Loki cell ID.
   2. Append the path `/aws-logs/api/v1/push` to the value in the **URL** field, as in the following URL template: `https://aws-<LOKI_CELL_ID>.grafana.net/aws-logs/api/v1/push`
      
      - For example, if your Loki URL is `https://logs-prod3.grafana.net`, then your Loki cell ID is `logs-prod3` and your Grafana Cloud AWS Logs Ingest URL is `https://aws-logs-prod3.grafana.net/aws-logs/api/v1/push`.
6. Copy and save the value in the **User** field, which is the value you need for the `logs_instance_id`.
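The derivation in step 5 can be sketched as a small shell function. This is an illustrative sketch: the function and variable names are not part of any Grafana tooling.

```shell
# Derive the Grafana Cloud AWS Logs Ingest URL from a Loki URL.
# Sketch only; the function name is illustrative.
derive_ingest_url() {
  local loki_url="$1"
  local host="${loki_url#https://}"   # e.g. logs-prod3.grafana.net
  echo "https://aws-${host}/aws-logs/api/v1/push"
}

derive_ingest_url "https://logs-prod3.grafana.net"
# → https://aws-logs-prod3.grafana.net/aws-logs/api/v1/push
```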

> Note
> 
> If this endpoint does not work, your Grafana Cloud Provider API endpoint may be using the new URL format. For more information on Grafana Cloud URLs, refer to [Determine Grafana Cloud URLs based on region](/docs/grafana-cloud/security-and-account-management/region-url-formats/).

### Obtain the ingest URL and Instance ID programmatically

You need an access policy token with the `stacks:read` scope to obtain the Grafana Cloud AWS Logs Ingest URL and Instance ID programmatically.

To obtain the Grafana Cloud AWS Logs Ingest URL and Loki Username/Instance ID programmatically, enter the following commands in bash:


```bash
STACK_INFO=$(curl -s -H "Authorization: Bearer $ACCESS_POLICY_TOKEN" \
  "https://grafana.com/api/instances/$STACK_SLUG")

read LOKI_URL LOGS_USER <<< "$(echo "$STACK_INFO" | jq -r '[.hlInstanceUrl, (.hlInstanceId | tostring)] | @tsv')"
LOKI_HOST=${LOKI_URL#https://}
DOMAIN=${LOKI_HOST#*.}
LOKI_CELL_ID=${LOKI_HOST%%.*}
echo "Grafana Cloud AWS Logs Ingest URL: https://aws-${LOKI_CELL_ID}.${DOMAIN}/aws-logs/api/v1/push"
echo "Loki instance ID: ${LOGS_USER}"
```


## Set up required authentication

When Grafana Cloud receives logs from AWS, the access policy and its associated authentication token enable Grafana to:

- Authenticate the request.
- Determine which customer the data is coming from so Grafana can store it appropriately in Loki.

To create an access policy for your organization in the Grafana Cloud portal, refer to the [Create an access policy for an organization or multiple stacks steps](/docs/grafana-cloud/security-and-account-management/authentication-and-permissions/access-policies/create-access-policies/#create-an-access-policy-for-an-organization-or-multiple-stacks).

1. In step 8, in the **Scopes** section, select **Write** for logs to allow writing logs to your account.
2. After you create the policy, click **Add token** to generate a token to authenticate the provider with the Cloud Provider API.
3. Give your token an appropriate name and select an **Expiration date**. We recommend you select a specific expiration date and **do not** set the **Expiration date** to **No expiry**, as this can create a security vulnerability.
4. Click **Copy to clipboard** and save the new token. In a future step, you replace `LOGS_WRITE_TOKEN` with this token.
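The Loki user and this token are combined into a single `<user>:<token>` credential that Data Firehose presents to the ingest endpoint (the `access_key` in the Terraform configuration later in this guide). A minimal sketch with placeholder values:

```shell
# Placeholder values; substitute your own Loki user and write token.
LOKI_USER="123456"
LOGS_WRITE_TOKEN="example-token"

# Data Firehose authenticates with "<user>:<token>" as its access key.
ACCESS_KEY="${LOKI_USER}:${LOGS_WRITE_TOKEN}"
echo "$ACCESS_KEY"   # → 123456:example-token
```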

### Set up on AWS account

You must create some infrastructure in your AWS account for sending the logs to Grafana Cloud:

- Authentication components
- A component to route logs into the delivery stream
- The Data Firehose delivery stream

![Infrastructure in AWS account](/media/docs/grafana-cloud/aws/diagram-aws-firehose-customer-resources.png)

You can create this infrastructure with Terraform or CloudFormation.

#### Set up with Terraform

1. Download the Terraform [snippet file](https://storage.googleapis.com/cloud-onboarding/aws-logs-beta/onboarding.tf).
2. Open the snippet file, and complete the sections labeled with `FILLME` as shown in the following example:
   
   
   ```terraform
   provider "aws" {
     // FILLME: AWS region
     region = "us-east-2"

     // FILLME: local AWS profile to use
     profile = "test-profile"
   }
   ```
3. Run `terraform apply`, including the required variables as shown in the following example:
   
   
   ```bash
   terraform apply \
       -var="fallback_bucket_name=<Name for an S3 bucket, to save data in case of failures>" \
       -var="firehose_stream_name=<Kinesis stream name>" \
       -var="target_endpoint=<Target AWS Logs endpoint provided by Grafana>" \
       -var="logs_instance_id=<Loki User>" \
       -var="logs_write_token=<Token created for Data Firehose>"
   ```
   
   - `fallback_bucket_name`: The name of an S3 bucket where logs can be stored in case the delivery fails
   - `firehose_stream_name`: Enter a meaningful name for the Data Firehose stream
   - `target_endpoint`: The target endpoint URL for your Grafana Cloud instance. Add the prefix `aws-`. For example, if your Loki URL is `https://logs-prod3.grafana.net`, then your Logs with Firehose URL will be `https://aws-logs-prod3.grafana.net/aws-logs/api/v1/push`.
   - `logs_instance_id`: The numeric value of the **User** field of the Loki data source
   - `logs_write_token`: The token you created for Data Firehose

#### Set up with CloudFormation

1. Download the CloudFormation [snippet file](https://grafanalabs-cf-templates.s3.us-east-2.amazonaws.com/aws-logs/aws-logs-firehose.yaml).
2. Run the following `aws cloudformation create-stack`, including the required variables:
   
   
   ```bash
   aws cloudformation create-stack --stack-name grafana-aws-logs-firehose --template-body file://./aws-logs-firehose.yaml --capabilities CAPABILITY_NAMED_IAM \
   --parameters \
       ParameterKey=FallbackS3BucketName,ParameterValue=aws-logs-fallback \
       ParameterKey=FirehoseStreamName,ParameterValue=grafana-aws-logs \
       ParameterKey=TargetEndpoint,ParameterValue=TARGET_ENDPOINT \
       ParameterKey=LogsInstanceID,ParameterValue=LOKI_USER \
       ParameterKey=LogsWriteToken,ParameterValue=LOGS_WRITE_TOKEN \
       ParameterKey=SubscriptionFilter,ParameterValue=subscriptionFilter
   ```
   
   - `FallbackS3BucketName`: The name of an S3 bucket where logs can be stored in case the delivery fails
   - `FirehoseStreamName`: Enter a meaningful name for the Data Firehose stream
   - `TargetEndpoint`: The target endpoint URL for your Grafana Cloud instance. Add the prefix `aws-`. For example, if your Loki URL is `https://logs-prod3.grafana.net`, then your Logs with Firehose URL will be `https://aws-logs-prod3.grafana.net/aws-logs/api/v1/push`.
   - `LogsInstanceID`: The numeric value of the **User** field of the Loki data source
   - `LogsWriteToken`: The token you created for Data Firehose
   - `SubscriptionFilter`: The name of the subscription filter to pull logs from

The CloudFormation template is designed to ship logs from a single Amazon CloudWatch LogGroup. Since a single Amazon Data Firehose Delivery Stream can be used for sending logs from many CloudWatch LogGroups, we suggest you edit the template and define as many Subscription Filters as needed, referencing the same delivery stream.
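For instance, the edited template could declare two subscription filters that reference the same delivery stream. The resource names, log group names, and ARN placeholders below are illustrative, not part of the downloaded template:

```yaml
SubscriptionFilterAppOne:
  Type: AWS::Logs::SubscriptionFilter
  Properties:
    DestinationArn: "<Firehose delivery stream ARN>"
    FilterPattern: ""
    LogGroupName: "/aws/lambda/app-one"
    RoleArn: "<IAM role ARN for sending logs>"

SubscriptionFilterAppTwo:
  Type: AWS::Logs::SubscriptionFilter
  Properties:
    DestinationArn: "<Firehose delivery stream ARN>"
    FilterPattern: ""
    LogGroupName: "/aws/lambda/app-two"
    RoleArn: "<IAM role ARN for sending logs>"
```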

## Set up CloudWatch subscription filter

The CloudWatch subscription filter:

- Reads logs from a selected CloudWatch log group.
- Optionally filters the logs.
- Sends the logs to the Data Firehose stream.

### Configure with AWS CLI

Use the following example to create a subscription filter with the AWS CLI:


```bash
aws logs put-subscription-filter \
  --log-group-name "<log group name to send logs from>" \
  --filter-name "<Name of the subscription filter>" \
  --filter-pattern "<Optional filter expression>" \
  --destination-arn "<ARN of the Kinesis firehose stream created above>" \
  --role-arn "<ARN of the IAM role created for sending logs above>"
```

### Configure with Terraform

Use the following example to configure an AWS subscription filter with Terraform, and include the required variables:


```terraform
resource "aws_cloudwatch_log_subscription_filter" "filter" {
  name            = "filter_name"
  role_arn        = aws_iam_role.logs.arn
  log_group_name  = "/aws/lambda/example_lambda_name"
  filter_pattern  = "" // Optional: Filter expression
  destination_arn = aws_kinesis_firehose_delivery_stream.main.arn
  distribution    = "ByLogStream"
}
```

- `name`: Enter a meaningful name for the subscription filter
- `role_arn`: ARN of the IAM role created in previous step for sending logs
- `log_group_name`: The log group where the logs should be sent from
- `filter_pattern`: An optional [filter expression](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html)
- `destination_arn`: The ARN of the Data Firehose delivery stream created in the previous snippet

### Configure with CloudFormation

Use the following example to configure an AWS subscription filter with CloudFormation, and include the required variables:


```yaml
SubscriptionFilter:
  Type: AWS::Logs::SubscriptionFilter
  Properties:
    DestinationArn: "<Firehose delivery stream ARN>"
    FilterPattern: ""
    LogGroupName: "/aws/lambda/test-lambda"
    RoleArn: "<IAM Role for sending logs ARN, created in the steps above>"
```

- `FilterPattern`: An optional [filter expression](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html)
- `LogGroupName`: The log group the logs should be sent from
- `DestinationArn`: ARN of the Data Firehose delivery stream, created in the previous steps
- `RoleArn`: ARN of the IAM role for sending logs, created in the previous steps

## Set static labels in Terraform

Setting static labels offers you an additional way to filter and group your logs in Grafana Cloud. Set static labels using the [`X-Amz-Firehose-Common-Attributes`](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html) header. Configure `common_attributes` sub-blocks inside the `request_configuration` block of the Amazon Data Firehose delivery stream configuration, as in the following example:


```tf
http_endpoint_configuration {
  url        = var.target_endpoint
  name       = "Grafana AWS Logs Destination"
  access_key = format("%s:%s", var.logs_instance_id, var.logs_write_token)

  // Buffer incoming data to the specified size, in MBs, before delivering it to the destination
  buffering_size = 1

  // Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination
  //
  // Setting to 1 minute to keep a low enough latency between log production and actual time they are processed in Loki
  buffering_interval = 60

  role_arn       = aws_iam_role.firehose.arn
  s3_backup_mode = "FailedDataOnly"

  request_configuration {
    content_encoding = "GZIP"

    common_attributes {
      name  = "lbl_testname"
      value = "testvalue"
    }
    common_attributes {
      name  = "lbl_testname2" // static label names must be prefixed with `lbl_`
      value = "testvalue2"    // static label names and values must be compatible with the Prometheus data model specification
    }
  }
  // ...
}
```

Label names **must** be prefixed with `lbl_` and the label names and values **must** be compatible with the [Prometheus data model specification](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
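As a quick sanity check before configuring attributes, you can validate candidate names against the Prometheus label name pattern (`[a-zA-Z_][a-zA-Z0-9_]*`) with the required `lbl_` prefix. A minimal sketch; the function name is illustrative:

```shell
# Succeeds if the attribute name is `lbl_` plus a valid Prometheus label name.
valid_static_label() {
  [[ "$1" =~ ^lbl_[a-zA-Z_][a-zA-Z0-9_]*$ ]]
}

valid_static_label "lbl_testname" && echo "ok"        # prefixed and valid
valid_static_label "testname"     || echo "rejected"  # missing lbl_ prefix
```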

When you query in Grafana for the static labels, do not include the `lbl_` prefix, as in the following example query:


```none
{job=~"cloud/aws/.+", label1="value1", label2="value2"}
```
