---
title: "Migrate a Kube-Prometheus Helm stack to Grafana Cloud | Grafana Cloud documentation"
description: "How to migrate a Kube-Prometheus Helm stack to send metrics to Grafana Cloud"
---

# Migrate a Kube-Prometheus Helm stack to Grafana Cloud

With the following instructions, you set up the Kube-Prometheus stack in your Kubernetes Cluster, then configure it to send its core set of metrics to Grafana Cloud for long-term storage, querying, visualization, and alerting. You can also migrate the stack’s core assets (recording rules and alerting rules) to Grafana Cloud. This takes advantage of Grafana Cloud’s scalability, availability, and performance, and reduces the load on your local Prometheus instances.

> Note
> 
> Consider sending metrics to Grafana Cloud using [Grafana Alloy](/alloy/introduction/), an open source OpenTelemetry collector with built-in Prometheus pipelines and support for metrics, logs, traces, and profiles. To get started with Alloy and Grafana Cloud, refer to [configuration for Kubernetes Monitoring](/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/config-k8s-helmchart/). Kubernetes Monitoring bundles a set of preconfigured Kubernetes manifests to deploy Alloy into your Clusters.

Migrate a Kube-Prometheus Helm stack and send metrics to Grafana Cloud with these steps:

- Install the [Kube-Prometheus](https://github.com/prometheus-operator/kube-prometheus) stack [Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) into a Kubernetes Cluster using the [Helm](https://helm.sh/) package manager.
- Configure your local Prometheus instance to send metrics to Grafana Cloud using [`remote_write`](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write).

Optionally, you can complete any of these steps:

- Import the Kube-Prometheus recording and alerting rules into your Cloud Prometheus instance.
- Limit which metrics you send from your local Cluster to reduce your active series usage.
- Turn off local stack components such as Grafana and Alertmanager.

## Before you begin

Before you begin, have the following available:

- A Kubernetes Cluster with [role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (RBAC) enabled
- A Grafana Cloud **Pro** account or trial. To create an account, refer to [Grafana Cloud](/pricing/). You can use a free tier account with these instructions if you meet the conditions detailed on the web page. Otherwise, a Cloud Pro account is necessary to import more dashboards, rules, and metrics from Kube-Prometheus.
- The `kubectl` command-line tool installed on your local machine, configured to connect to your Cluster. For more about installing `kubectl`, refer to [the official documentation](https://kubernetes.io/docs/tasks/tools/).
- The `helm` Kubernetes package manager installed on your local machine. To install Helm, refer to [Installing Helm](https://helm.sh/docs/intro/install/).

## Install the Kube-Prometheus stack into your Cluster

Use Helm to install the Kube-Prometheus stack into your Kubernetes Cluster. The Kube-Prometheus stack installs the following observability components:

- [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)
- Highly available [Prometheus](https://prometheus.io/) (with 1 replica by default)
- Highly available [Alertmanager](https://github.com/prometheus/alertmanager) (with 1 replica by default)
- [Prometheus `node_exporter`](https://github.com/prometheus/node_exporter)
- [Prometheus Adapter for Kubernetes Metrics APIs](https://github.com/DirectXMan12/k8s-prometheus-adapter)
- [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
- [Grafana](/)

The Kube-Prometheus stack scrapes several endpoints in your Cluster by default, such as:

- `cadvisor`, `kubelet`, and `node-exporter` `/metrics` endpoints on Kubernetes Nodes
- Kubernetes API server metrics endpoint
- `kube-state-metrics` endpoints

To get a full list of configured scrape targets, refer to the Kube-Prometheus Helm chart’s [`values.yaml`](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml). To find scrape targets, search for `serviceMonitor` objects. Configuration of the Kube-Prometheus stack’s scrape targets is beyond the scope of these instructions. To learn more, refer to the `ServiceMonitor` spec in the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator) GitHub repository.
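
For orientation, a `ServiceMonitor` object has roughly the following shape. The names and label selector in this sketch are hypothetical; the `release` label must match the label selector configured on the Prometheus Operator’s Prometheus resource:

```none
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app          # hypothetical name
  labels:
    release: foo             # must match the Operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: example-app       # selects the Kubernetes Service to scrape
  endpoints:
  - port: http-metrics       # named port on the Service exposing /metrics
    interval: 30s
```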

The Kube-Prometheus stack also provisions several monitoring [mixins](https://github.com/monitoring-mixins/docs). A mixin is a collection of prebuilt Grafana dashboards, Prometheus recording rules, and Prometheus alerting rules. In particular, the stack includes:

- The [kubernetes-mixin](https://github.com/kubernetes-monitoring/kubernetes-mixin), which includes several useful dashboards and alerts for monitoring Kubernetes Clusters and their workloads
- The [Node Mixin](https://github.com/prometheus/node_exporter/tree/master/docs/node-mixin), which does the same for `node_exporter` metrics
- The [Prometheus Mixin](https://github.com/prometheus/prometheus/tree/main/documentation/prometheus-mixin)

Mixins are written in [Jsonnet](https://jsonnet.org/), a data templating language. They generate JSON dashboard files and rules YAML files. Configuration and modification of the underlying mixins goes beyond the scope of these instructions. Mixins are imported as-is into Grafana Cloud. To learn more, refer to:

- [Generate config files](https://github.com/monitoring-mixins/docs#generate-config-files)
- [Grizzly](https://github.com/grafana/grizzly), a tool for working with Jsonnet-defined assets against the Grafana Cloud API.
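
Although mixin internals are out of scope here, it helps to know their general shape: a mixin is a Jsonnet object that conventionally exposes fields such as `grafanaDashboards`, `prometheusRules`, and `prometheusAlerts`, which tooling then renders into JSON and YAML. The field contents below are illustrative, not taken from a real mixin:

```none
// Illustrative mixin skeleton, not a real mixin
{
  grafanaDashboards+: {
    'example.json': { title: 'Example Dashboard', panels: [] },
  },
  prometheusRules+: {
    groups: [{ name: 'example.rules', rules: [] }],
  },
  prometheusAlerts+: {
    groups: [{ name: 'example.alerts', rules: [] }],
  },
}
```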

To install the Kube-Prometheus stack into your Cluster:

1. Add the `prometheus-community` Helm repository and update Helm:
   
   ```none
   helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
   helm repo update
   ```
2. Install the `kube-prometheus-stack` chart using the following Helm command, replacing `foo` with your desired release name:
   
   ```none
   helm install foo prometheus-community/kube-prometheus-stack
   ```
   
   > Note
   > 
   > Note that this command installs the Kube-Prometheus stack into the `default` Namespace. To modify this, use a `values.yaml` file to override the defaults or pass in a `--set` flag. To learn more, refer to [Values Files](https://helm.sh/docs/chart_template_guide/values_files/).
   
   After Helm has finished installing the chart, you should see the following:
   
   ```none
   NAME: foo
   LAST DEPLOYED: Fri Jun 25 15:30:30 2021
   NAMESPACE: default
   STATUS: deployed
   REVISION: 1
   NOTES:
   kube-prometheus-stack has been installed. Check its status by running:
     kubectl --namespace default get pods -l "release=foo"
   
   Refer to https://github.com/prometheus-operator/kube-prometheus for instructions on how to create and configure Alertmanager and Prometheus instances using the Operator.
   ```
3. Use `kubectl` to inspect what is installed in the Cluster:
   
   ```none
   kubectl get pod
   ```
   
   ```none
   alertmanager-foo-kube-prometheus-stack-alertmanager-0   2/2     Running   0          7m3s
   foo-grafana-8547c9db6-vp8pf                             2/2     Running   0          7m6s
   foo-kube-prometheus-stack-operator-6888bf88f9-26c42     1/1     Running   0          7m6s
   foo-kube-state-metrics-76fbc7d6ff-vj872                 1/1     Running   0          7m6s
   foo-prometheus-node-exporter-8qbrz                      1/1     Running   0          7m6s
   foo-prometheus-node-exporter-d4dk4                      1/1     Running   0          7m6s
   foo-prometheus-node-exporter-xplv4                      1/1     Running   0          7m6s
   prometheus-foo-kube-prometheus-stack-prometheus-0       2/2     Running   1          7m3s
   ```
   
   This example shows Alertmanager, Grafana, Prometheus Operator, kube-state-metrics, node-exporter, and Prometheus running in the Cluster. In addition to these Pods, the stack installs several [Kubernetes custom resource definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CRDs).
4. To see the Kubernetes custom resources, run `kubectl get crd`.
5. To access your Prometheus instance, use the `kubectl port-forward` command to forward a local port into the Cluster:
   
   ```none
   kubectl port-forward svc/foo-kube-prometheus-stack-prometheus 9090
   ```
   
   Replace `foo-kube-prometheus-stack-prometheus` with the appropriate service name.
6. Enter `http://localhost:9090` in your browser.
   
   You should see the Prometheus web interface. Click **Status**, then **Targets** to see a list of preconfigured scrape targets. You can use a similar procedure to access the Grafana and Alertmanager web interfaces.
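
For example, with the release name `foo`, commands along these lines forward the Grafana and Alertmanager web interfaces to local ports; confirm the exact service names and ports in your Cluster with `kubectl get svc`:

```none
# Forward local port 3000 to the Grafana service
kubectl port-forward svc/foo-grafana 3000:80

# Forward local port 9093 to the Alertmanager service
kubectl port-forward svc/foo-kube-prometheus-stack-alertmanager 9093
```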

## Send metrics to Grafana Cloud

Configure Prometheus to send scraped metrics to Grafana Cloud.

> Warning
> 
> When you send your Kubernetes Prometheus metrics to Grafana Cloud using `remote_write`, this can result in a significant increase in your active series usage and monthly bill. To estimate the number of series you will be sending, go to the Prometheus web UI in your Cluster. Click **Status**, then **TSDB Status** to see your Prometheus instance’s statistics. **Number of series** describes the rough number of active series you’ll be sending to Grafana Cloud. In a later step, you can configure Prometheus to drop many of these to control your active series usage. Since you are only billed at the 95th percentile of active series usage, temporary spikes should not result in any cost increase. To learn more, refer to [95th percentile billing](/docs/grafana-cloud/cost-management-and-billing/understand-your-invoice/metrics-invoice/#95th-percentile-billing).
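
In addition to the **TSDB Status** page, you can estimate usage from the Prometheus expression browser. As a sketch, the following queries report the current head-block series count and an approximate per-job breakdown; both metrics are exposed by Prometheus itself:

```none
# Total series currently in the local TSDB head block
prometheus_tsdb_head_series

# Approximate active series per scrape job (samples ingested in the last scrape)
sum by (job) (scrape_samples_scraped)
```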

Configure Prometheus using the [remoteWrite](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L2938) configuration section of the Helm chart’s `values.yaml` file. Then update the release using `helm upgrade`.

To send metrics to Grafana Cloud:

1. Create a Kubernetes Secret to store your Grafana Cloud Prometheus username and password.
   
   To find your username, navigate to your stack in the Cloud portal, and click **Details** next to the Prometheus panel.
   
   Your password corresponds to a Cloud Access Policy token that you can generate by clicking on **Generate now** in this same panel. To create a Cloud Access Policy, refer to [Create a Grafana Cloud Access Policy](/docs/grafana-cloud/security-and-account-management/authentication-and-permissions/access-policies/create-access-policies/).
   
   You can create a Secret by using a manifest file or create it directly using `kubectl`. In these instructions, you create it directly using `kubectl`. To learn more about Kubernetes Secrets, consult [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/).
   
   Run the following command to create a Secret called `kubepromsecret`:
   
   ```none
   kubectl create secret generic kubepromsecret \
     --from-literal=username=<your_grafana_cloud_prometheus_username>\
     --from-literal=password='<your_grafana_cloud_access_policy_token>'\
     -n default
   ```
   
   If you deployed your monitoring stack in a namespace other than `default`, change the `-n default` flag to the appropriate namespace in the above command. To learn more about this command, refer to [Managing Secrets using kubectl](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/).
2. Create a Helm values file named `values.yaml` in an editor, and paste in the following snippet, which defines Prometheus’ `remote_write` configuration. In a later step, you apply the new configuration to the Kube-Prometheus release with `helm upgrade`.
   
   ```none
   prometheus:
     prometheusSpec:
       remoteWrite:
       - url: "<Your Cloud Prometheus instance remote_write endpoint>"
         basicAuth:
           username:
             name: kubepromsecret
             key: username
           password:
             name: kubepromsecret
             key: password
       replicaExternalLabelName: "__replica__"
       externalLabels: {cluster: "test"}
   ```
   
   The Helm values file lets you set configuration variables that are passed in to Helm’s chart templates. To see the default values file for Kube-Prometheus stack, refer to [`values.yaml`](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml).
   
   The snippet:
   
   - Sets the `remote_write` URL and `basic_auth` username and password using the Secret created in the previous step
   - Configures two additional parameters: `replicaExternalLabelName` and `externalLabels`
   
   Replace `test` with an appropriate name for your Kubernetes Cluster. Prometheus adds the `cluster: test` and `__replica__: prometheus-foo-kube-prometheus-stack-prometheus-0` labels to any samples sent to Grafana Cloud.
   
   When you configure these parameters, you enable automatic metric deduplication in Grafana Cloud. This means you can create additional Prometheus instances in a high-availability configuration without storing duplicate samples in your Grafana Cloud Prometheus instance. To learn more, refer to [Sending data from multiple high-availability Prometheus instances](/docs/grafana-cloud/send-data/metrics/metrics-prometheus/#sending-data-from-multiple-high-availability-prometheus-instances).
   
   If you are sending data from multiple Kubernetes Clusters, set the `cluster` external label to identify the source Cluster. This takes advantage of multi-Cluster support in many of the Kube-Prometheus dashboards, recording rules, and alerting rules.
3. Save and close the file.
4. Apply the changes with `helm upgrade`:
   
   ```none
   helm upgrade -f values.yaml your_release_name prometheus-community/kube-prometheus-stack
   ```
   
   Replace `your_release_name` with the name of the release you used to install Kube-Prometheus. You can get a list of installed releases using `helm list`.
5. After the changes have been applied, use `kubectl port-forward` to access the Prometheus UI:
   
   ```none
   kubectl port-forward svc/foo-kube-prometheus-stack-prometheus 9090
   ```
6. Navigate to `http://localhost:9090` in your browser, and then click **Status** and **Configuration**. Verify that the `remote_write` block you appended above has propagated to your running Prometheus instance.
7. Log in to your managed Grafana instance to begin querying your Cluster data. You can use the **Billing/Usage** dashboard to inspect incoming data rates in the last five minutes to confirm the flow of data to Grafana Cloud.
   
   For more about the difference between Active Series and DPM, refer to [Active series and DPM for billing calculations](/docs/grafana-cloud/cost-management-and-billing/understand-your-invoice/metrics-invoice/).
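
The Secret from step 1 can also be managed declaratively. A minimal manifest sketch follows; `stringData` lets you supply the values without base64-encoding them by hand, and you apply the file with `kubectl apply -f secret.yaml`. Avoid committing a file containing real credentials to version control:

```none
apiVersion: v1
kind: Secret
metadata:
  name: kubepromsecret
  namespace: default
type: Opaque
stringData:
  username: <your_grafana_cloud_prometheus_username>
  password: <your_grafana_cloud_access_policy_token>
```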

## Import Dashboards

Now that you are sending metrics to Grafana Cloud and have configured the appropriate external labels, you can import the prebuilt Kube-Prometheus dashboards from your local Grafana instance into your hosted Grafana instance.

> Note
> 
> To enable multi-Cluster support for Kube-Prometheus dashboards, refer to [Enable multi-Cluster support](/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/config-other-methods/helm-operator-migration/multi_cluster/).

These steps use Grafana’s [HTTP API](/docs/grafana-cloud/developer-resources/api-reference/http-api/) to bulk export and import dashboards, which you can also do using Grafana’s Web UI. You use a lightweight bash script to perform the dump and load. Note that the script does not preserve folder hierarchy. It naively downloads all dashboards from a source Grafana instance and uploads them to a target Grafana instance.
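
The script is built around standard Grafana HTTP API endpoints: `/api/search` lists dashboards, and `/api/dashboards/uid/<uid>` fetches one by its unique ID. As a rough sketch, a manual export of a single dashboard looks like this; the variable names mirror those at the top of the script, and `<dashboard_uid>` is a placeholder:

```none
# List dashboards in the source instance
curl -s -H "Authorization: Bearer $SOURCE_GRAFANA_API_KEY" \
  "$SOURCE_GRAFANA_ENDPOINT/api/search?type=dash-db"

# Fetch one dashboard by its uid and save it locally
curl -s -H "Authorization: Bearer $SOURCE_GRAFANA_API_KEY" \
  "$SOURCE_GRAFANA_ENDPOINT/api/dashboards/uid/<dashboard_uid>" > temp_dir/dashboard.json
```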

To import dashboards:

01. Navigate to [Exporting and importing dashboards to hosted Grafana using the HTTP API](/docs/grafana-cloud/introduction/find-and-use-dashboards/#exporting-and-importing-dashboards-to-managed-grafana-using-the-http-api), and save the bash script into a file called `dash_migrate.sh`.
02. Create a temporary directory called `temp_dir`:
    
    ```none
    mkdir temp_dir
    ```
03. Make the script executable:
    
    ```none
    chmod +x dash_migrate.sh
    ```
04. Forward a local port to the Grafana service running in your Cluster.
    
    ```none
    kubectl port-forward svc/foo-grafana 8080:80
    ```
    
    Replace `foo-grafana` with the name of the Grafana service. You can find this using `kubectl get svc`.
05. With a port forwarded, to log in to your Grafana instance, visit `http://localhost:8080` and enter `admin` as the username and the value configured for the `adminPassword` parameter.
    
    If you did not modify this value, you can find the default in the [`values.yaml` file](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L628).
06. To create an API key, click the cog in the left-hand navigation menu, and then click **API keys**.
07. Note the API key and local Grafana URL, and complete the variables at the top of the bash script with the appropriate values:
    
    ```none
    SOURCE_GRAFANA_ENDPOINT='http://localhost:8080'
    SOURCE_GRAFANA_API_KEY='your_api_key_here'
    . . .
    ```
08. Repeat this process for your hosted Grafana instance.
    
    To access the instance, navigate to your Cloud portal. Click **Details** next to your stack, then click **Log In** in the Grafana card. Ensure the API key has the **Admin** role. After noting the endpoint URL and API key, modify the remaining values in the bash script:
    
    ```none
    . . .
    DEST_GRAFANA_API_KEY='your_hosted_grafana_api_key_here'
    DEST_GRAFANA_ENDPOINT='https://your_stack_name.grafana.net'
    TEMP_DIR=temp_dir
    ```
09. Save and close the file.
10. Run the script:
    
    ```none
    ./dash_migrate.sh -ei
    ```
    
    The `-e` flag exports all dashboards from the source Grafana and saves them in `temp_dir`, and the `-i` flag imports the dashboards in `temp_dir` into the destination Grafana instance.
11. Navigate to your managed Grafana instance, and click **Dashboards** in the left-hand navigation menu, then **Manage**. From here, you can access the default Kube-Prometheus dashboards that you just imported.

The following open-source tools can help you manage dashboards with Grafana using the HTTP API:

- [Grizzly](https://github.com/grafana/grizzly): Also allows you to work directly with the [Jsonnet source](https://github.com/prometheus-operator/kube-prometheus) used to generate the Kube-Prometheus stack configuration, as well as the generated JSON dashboard files.
- Grafana [Terraform provider](https://registry.terraform.io/providers/grafana/grafana/latest/docs)

Note that these instructions use the Helm version of the Kube-Prometheus stack, which templates manifest files generated from the underlying [Kube-Prometheus](https://github.com/prometheus-operator/kube-prometheus) project.

## Disable local components (optional)

After importing the Kube-Prometheus dashboards to Grafana Cloud, you might want to shut down some of the stack’s components locally. In this section, you turn off the following Kube-Prometheus components:

- Alertmanager, given that Grafana Cloud provisions a hosted Alertmanager instance integrated into the Grafana UI
- Grafana

To disable Alertmanager and Grafana:

1. Add the following to your `values.yaml` Helm configuration file:
   
   ```none
   grafana:
     enabled: false
   alertmanager:
     enabled: false
   ```
2. Apply the changes with `helm upgrade`:
   
   ```none
   helm upgrade -f values.yaml your_release_name prometheus-community/kube-prometheus-stack
   ```

Refer to [Disable local Prometheus rules evaluation](/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/config-other-methods/helm-operator-migration/import_rules/#disable-local-prometheus-rules-evaluation) to learn how to disable recording and alerting rule evaluation.

## Next steps

Your Cluster-local Prometheus instance continues to evaluate alerting rules and recording rules. You can optionally migrate these by following the steps in [Import recording and alerting rules](/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/config-other-methods/helm-operator-migration/import_rules/).

By default, Kube Prometheus scrapes almost every available endpoint in your Cluster, which sends tens of thousands (possibly hundreds of thousands) of active series to Grafana Cloud.

To configure Prometheus to send only the metrics referenced in the dashboards you just uploaded, refer to [Reduce your Prometheus active series usage](/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/config-other-methods/helm-operator-migration/reduce_usage/). You lose long-term retention for the dropped series; however, they remain available locally for the retention period configured on your local Prometheus instance.
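
As a preview of that approach, you can drop series at the `remote_write` stage with a keep-style relabel rule in your `values.yaml`. The allowlist regex below is purely illustrative; derive the real list from the linked guide:

```none
prometheus:
  prometheusSpec:
    remoteWrite:
    - url: "<Your Cloud Prometheus instance remote_write endpoint>"
      basicAuth:
        username:
          name: kubepromsecret
          key: username
        password:
          name: kubepromsecret
          key: password
      writeRelabelConfigs:
      - sourceLabels: [__name__]
        regex: "kube_pod_info|node_cpu_seconds_total|up"  # illustrative allowlist
        action: keep
```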
