---
title: "Snowflake integration | Grafana Cloud documentation"
description: "Learn about Snowflake Grafana Cloud integration."
---

# Snowflake integration for Grafana Cloud

Snowflake is a cloud data platform that is designed to connect businesses globally, across any type or scale of data and many different workloads, and unlock seamless data collaboration. The Snowflake integration uses Grafana Alloy to collect metrics for monitoring a Snowflake account, including aspects such as credit usage, storage usage, and login success rates. Accompanying dashboards are provided to visualize these metrics.

This integration supports metrics provided by v0.0.1 of the Snowflake exporter, which is integrated into Grafana Alloy.

This integration includes 6 useful alerts and 2 pre-built dashboards to help monitor and visualize Snowflake metrics.

## Before you begin

To scrape Snowflake metrics, you must use a user configured with the `ACCOUNTADMIN` role, or a custom role that has access to the `SNOWFLAKE.ACCOUNT_USAGE` schema. [See the Snowflake documentation](https://docs.snowflake.com/en/sql-reference/account-usage.html#enabling-snowflake-database-usage-for-other-roles) for instructions on enabling other roles to access the `SNOWFLAKE.ACCOUNT_USAGE` schema.

## Install Snowflake integration for Grafana Cloud

1. In your Grafana Cloud stack, click **Connections** in the left-hand menu.
2. Find **Snowflake** and click its tile to open the integration.
3. Review the prerequisites in the **Configuration Details** tab and set up Grafana Alloy to send Snowflake metrics to your Grafana Cloud instance.
4. Click **Install** to add this integration’s pre-built dashboards and alerts to your Grafana Cloud instance, then start monitoring your Snowflake setup.

## Configuration snippets for Grafana Alloy

### Simple mode

These snippets configure Grafana Alloy to collect metrics from a single Snowflake account using the default settings.

First, **manually** copy and append the following snippets into your Alloy configuration file.

### Integrations snippets


```alloy
prometheus.exporter.snowflake "integrations_snowflake" {
	account_name = "SNOWFLAKE_ACCOUNT"
	username     = "SNOWFLAKE_USERNAME"
	password     = "SNOWFLAKE_PASSWORD"
	warehouse    = "SNOWFLAKE_WAREHOUSE"
}

discovery.relabel "integrations_snowflake" {
	targets = prometheus.exporter.snowflake.integrations_snowflake.targets

	rule {
		target_label = "instance"
		replacement  = constants.hostname
	}

	rule {
		target_label = "job"
		replacement  = "integrations/snowflake"
	}
}

prometheus.scrape "integrations_snowflake" {
	targets         = discovery.relabel.integrations_snowflake.output
	forward_to      = [prometheus.remote_write.metrics_service.receiver]
	job_name        = "integrations/snowflake"
	scrape_interval = "30m0s"
	scrape_timeout  = "1m0s"
}
```

### Advanced mode

The following snippets provide examples to guide you through the configuration process.

To instruct Grafana Alloy to scrape your Snowflake instances, **manually** copy and append the snippets to your Alloy configuration file, then follow the instructions below.

### Advanced integrations snippets


```alloy
prometheus.exporter.snowflake "integrations_snowflake" {
	account_name = "SNOWFLAKE_ACCOUNT"
	username     = "SNOWFLAKE_USERNAME"
	password     = "SNOWFLAKE_PASSWORD"
	warehouse    = "SNOWFLAKE_WAREHOUSE"
}

discovery.relabel "integrations_snowflake" {
	targets = prometheus.exporter.snowflake.integrations_snowflake.targets

	rule {
		target_label = "instance"
		replacement  = constants.hostname
	}

	rule {
		target_label = "job"
		replacement  = "integrations/snowflake"
	}
}

prometheus.scrape "integrations_snowflake" {
	targets         = discovery.relabel.integrations_snowflake.output
	forward_to      = [prometheus.remote_write.metrics_service.receiver]
	job_name        = "integrations/snowflake"
	scrape_interval = "30m0s"
	scrape_timeout  = "1m0s"
}
```

This integration uses the [prometheus.exporter.snowflake](/docs/alloy/latest/reference/components/prometheus.exporter.snowflake/) component to generate metrics from a Snowflake instance.

You must provide account details and credentials to scrape Snowflake, including the `account_name` (in the form `[organization]-[account]`, for example `aaaaaaa-bb12345`), `username`, some form of authentication, `warehouse`, and `role` if the user is not configured with the `ACCOUNTADMIN` role. For password authentication, include a `password`. For RSA key-pair authentication, a `private_key_path` is required, and a `private_key_password` is also required if the key is encrypted.
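
As an illustration, the exporter block above could be adapted for RSA key-pair authentication with a custom role. The role name and key path below are placeholders, not values prescribed by this integration:

```alloy
// Sketch: RSA key-pair authentication with a custom role instead of a password.
// MONITORING_ROLE and the key path are placeholders; private_key_password is
// only needed when the key file is encrypted.
prometheus.exporter.snowflake "integrations_snowflake" {
	account_name         = "aaaaaaa-bb12345"
	username             = "SNOWFLAKE_USERNAME"
	role                 = "MONITORING_ROLE"
	warehouse            = "SNOWFLAKE_WAREHOUSE"
	private_key_path     = "/etc/alloy/snowflake_rsa_key.p8"
	private_key_password = "KEY_PASSPHRASE"
}
```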

The optional flag `exclude_deleted_tables = true` will exclude tables that have been deleted when querying Snowflake, which can dramatically improve processing time for larger environments. As a result, panels tracking table failsafe and time travel data may under-report, as they will only show the amount of data deleted from active tables.
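
For example, the flag sits alongside the other exporter arguments; everything below mirrors the snippet above except for the added line:

```alloy
// Sketch: skip deleted tables when querying ACCOUNT_USAGE, trading completeness
// of the failsafe and time travel panels for faster processing.
prometheus.exporter.snowflake "integrations_snowflake" {
	account_name           = "SNOWFLAKE_ACCOUNT"
	username               = "SNOWFLAKE_USERNAME"
	password               = "SNOWFLAKE_PASSWORD"
	warehouse              = "SNOWFLAKE_WAREHOUSE"
	exclude_deleted_tables = true
}
```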

For the full array of configuration options, refer to the [prometheus.exporter.snowflake](/docs/alloy/latest/reference/components/prometheus.exporter.snowflake/) component reference documentation.

This exporter must be linked with a [discovery.relabel](/docs/alloy/latest/reference/components/discovery.relabel/) component to apply the necessary relabelings.

For each Snowflake instance to be monitored, you must create a pair of these components.
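
A second account could be monitored with another such pair, for example (the `_eu` names and credentials below are placeholders):

```alloy
// Sketch: an additional exporter/relabel pair for a second Snowflake account.
prometheus.exporter.snowflake "integrations_snowflake_eu" {
	account_name = "SNOWFLAKE_ACCOUNT_EU"
	username     = "SNOWFLAKE_USERNAME"
	password     = "SNOWFLAKE_PASSWORD"
	warehouse    = "SNOWFLAKE_WAREHOUSE"
}

discovery.relabel "integrations_snowflake_eu" {
	targets = prometheus.exporter.snowflake.integrations_snowflake_eu.targets

	rule {
		target_label = "instance"
		replacement  = "snowflake-eu"
	}

	rule {
		target_label = "job"
		replacement  = "integrations/snowflake"
	}
}
```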

Configure the following properties within each `discovery.relabel` component:

- `instance` label: `constants.hostname` sets the `instance` label to your Grafana Alloy server hostname. If that is not suitable, change it to a value that uniquely identifies this Snowflake instance.
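
For instance, to pin the label to a fixed name rather than the Alloy hostname, the relabel rule could be changed as follows (`prod-snowflake` is a placeholder):

```alloy
// Sketch: hard-code the instance label inside the discovery.relabel component.
rule {
	target_label = "instance"
	replacement  = "prod-snowflake"
}
```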

You can then scrape them by including the output of each `discovery.relabel` component in the `targets` argument of the [prometheus.scrape](/docs/alloy/latest/reference/components/prometheus.scrape/) component.
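
With more than one pair, the outputs can be combined using Alloy's `concat` standard-library function; the `_eu` component referenced here is a hypothetical second pair:

```alloy
// Sketch: scrape two relabel outputs with a single scrape component.
prometheus.scrape "integrations_snowflake" {
	targets = concat(
		discovery.relabel.integrations_snowflake.output,
		discovery.relabel.integrations_snowflake_eu.output,
	)
	forward_to      = [prometheus.remote_write.metrics_service.receiver]
	job_name        = "integrations/snowflake"
	scrape_interval = "30m0s"
	scrape_timeout  = "1m0s"
}
```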

By default, the `scrape_interval` is set to 30 minutes due to Snowflake’s large metric bucket time frames, but this interval may be reduced if desired.
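
For example, to scrape twice as often, only the interval line inside the `prometheus.scrape` block needs to change:

```alloy
// Sketch: reduce the scrape interval from the default 30 minutes to 15.
scrape_interval = "15m0s"
```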

## Dashboards

The Snowflake integration installs the following dashboards in your Grafana Cloud instance to help monitor your system.

- Snowflake data ownership
- Snowflake overview

**Snowflake overview dashboard (1/3).**

**Snowflake overview dashboard (2/3).**

**Snowflake overview dashboard (3/3).**

## Alerts

The Snowflake integration includes the following useful alerts:


| Alert                                   | Description                                                                 |
|-----------------------------------------|-----------------------------------------------------------------------------|
| SnowflakeWarnHighLoginFailures          | Warning: Large login failure rate.                                          |
| SnowflakeWarnHighComputeCreditUsage     | Warning: Compute credit usage is within 20% of the configured limit.        |
| SnowflakeCriticalHighComputeCreditUsage | Critical: Compute credit usage is over the configured limit.                |
| SnowflakeWarnHighServiceCreditUsage     | Warning: Cloud services credit usage is within 20% of the configured limit. |
| SnowflakeCriticalHighServiceCreditUsage | Critical: Cloud services credit usage is over the configured limit.         |
| SnowflakeDown                           | Warning: Snowflake exporter failed to scrape.                               |

## Metrics

The most important metrics provided by the Snowflake integration, which are used on the pre-built dashboards and Prometheus alerts, are as follows:

- `snowflake_auto_clustering_credits`
- `snowflake_failed_login_rate`
- `snowflake_failsafe_bytes`
- `snowflake_login_rate`
- `snowflake_stage_bytes`
- `snowflake_storage_bytes`
- `snowflake_successful_login_rate`
- `snowflake_table_active_bytes`
- `snowflake_table_clone_bytes`
- `snowflake_table_deleted_tables`
- `snowflake_table_failsafe_bytes`
- `snowflake_table_time_travel_bytes`
- `snowflake_up`
- `snowflake_used_cloud_services_credits`
- `snowflake_used_compute_credits`
- `snowflake_warehouse_blocked_queries`
- `snowflake_warehouse_executed_queries`
- `snowflake_warehouse_overloaded_queue_size`
- `snowflake_warehouse_provisioning_queue_size`
- `snowflake_warehouse_used_cloud_service_credits`
- `snowflake_warehouse_used_compute_credits`
- `up`

## Changelog


```md
# 1.0.1 - January 2025

* Update integration instructions to provide detail on configuring optional RSA key-pair authentication in the Snowflake exporter

# 1.0.0 - October 2024

* Adds a panel to track deleted tables
* Updates documentation for new exporter option to exclude deleted tables
* Bump version to 1.0.0

# 0.0.2 - September 2023

- New Filter Metrics option for configuring the Grafana Agent, which saves on metrics cost by dropping any metric not used by this integration. Beware that anything custom built using metrics that are not on the snippet will stop working.
- New hostname relabel option, which applies the instance name you write on the text box to the Grafana Agent configuration snippets, making it easier and less error prone to configure this mandatory label.

# 0.0.1 - January 2023

- Initial release
```

## Cost

By connecting your Snowflake instance to Grafana Cloud, you might incur charges. To view information on the number of active series that your Grafana Cloud account uses for metrics included in each Cloud tier, see [Active series and dpm usage](/docs/grafana-cloud/fundamentals/active-series-and-dpm/) and [Cloud tier pricing](/products/cloud/pricing/).
