
Migrate a Kube-Prometheus Helm stack to Grafana Cloud

In this guide, you’ll learn how to:

  • Install the Kube-Prometheus stack Helm chart into a Kubernetes (K8s) cluster using the Helm package manager.
  • Configure your local Prometheus instance to ship metrics to Grafana Cloud using remote_write.
  • Import the Kube-Prometheus Grafana Dashboards into your managed Grafana instance.
  • Import the Kube-Prometheus recording and alerting rules into your Cloud Prometheus instance (optional).
  • Limit which metrics you ship from your local cluster to reduce your active series usage (optional).
  • Turn off local stack components like Grafana and Alertmanager (optional).
  • Enable multi-cluster support for the Kube-Prometheus rules and dashboards (optional).

By following the steps in this guide, you’ll set up the Kube-Prometheus stack in your Kubernetes cluster and configure it to ship its core set of metrics to Grafana Cloud for long-term storage, querying, visualization, and alerting. You’ll also migrate the stack’s core assets (dashboards, recording rules, and alerting rules) to Grafana Cloud to take advantage of its scalability, availability, and performance, and to reduce load on your local Prometheus instances.

Note: You might also want to ship metrics to Grafana Cloud using Grafana Agent, a lightweight telemetry collector based on Prometheus that only performs the scraping and remote_write functions. To get started with Grafana Agent and Grafana Cloud, see Kubernetes Monitoring. Kubernetes Monitoring bundles a set of prebuilt dashboards and preconfigured Kubernetes manifests to deploy Agent into your cluster(s). You can find additional deployment manifests for Grafana Agent in its GitHub repository. See the Agent documentation for more information.

Before you begin

Before you begin, you should have the following available:

  • A Kubernetes cluster with role-based access control (RBAC) enabled.
  • A Grafana Cloud Pro account or trial. To create an account, see Grafana Cloud. You can use a Free tier account with this guide if you import 10 or fewer dashboards, 100 or fewer rules, and keep your metrics usage under 10,000 active series; beyond those limits, you’ll need a Pro tier account.
  • The kubectl command-line tool installed on your local machine, configured to connect to your cluster. You can read more about installing kubectl in the official documentation.
  • The helm Kubernetes package manager installed on your local machine. To learn how to install Helm, see Installing Helm.

Install the Kube-Prometheus stack into your cluster

In this section, you’ll use Helm to install the Kube-Prometheus stack into your K8s cluster.

The Kube-Prometheus stack installs the following observability components:

  • Prometheus Operator
  • Prometheus
  • Alertmanager
  • Grafana
  • kube-state-metrics
  • node-exporter

In addition, Helm and Kube-Prometheus preconfigure these components to scrape several endpoints in your cluster by default, like the cAdvisor, kubelet, and node-exporter /metrics endpoints on Kubernetes Nodes, the Kubernetes API server metrics endpoint, and kube-state-metrics endpoints, among others. To see a full list of configured scrape targets, see the Kube-Prometheus Helm chart’s values.yaml. You can find scrape targets by searching for serviceMonitor objects. Configuring the Kube-Prometheus stack’s scrape targets goes beyond the scope of this guide, but to learn more, see the ServiceMonitor spec in the Prometheus Operator GitHub repo.
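
For reference, the Prometheus Operator discovers scrape targets through ServiceMonitor resources. The following is a minimal sketch of one; the name, labels, and port are hypothetical, and the release: foo label assumes the chart’s default behavior of only selecting ServiceMonitors labeled with the Helm release name:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: example-app
      labels:
        # The chart's Prometheus selects ServiceMonitors carrying the
        # Helm release label by default (release name assumed to be foo)
        release: foo
    spec:
      # Select the Kubernetes Service(s) to scrape by label
      selector:
        matchLabels:
          app: example-app
      endpoints:
        # Scrape the Service port named "metrics" every 30 seconds
        - port: metrics
          interval: 30s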

The Kube-Prometheus stack also provisions several monitoring mixins. A mixin is a collection of prebuilt Grafana dashboards, Prometheus recording rules, and Prometheus alerting rules.

Mixins are written in Jsonnet, a data templating language, and generate JSON dashboard files and rules YAML files. Configuring and modifying the underlying mixins goes beyond the scope of this guide; they are imported as-is into Grafana Cloud. To learn more, see Generate config files from the monitoring-mixins repo and Grizzly, a tool for working with Jsonnet-defined assets against the Grafana Cloud API. Note that Grizzly is currently in alpha.

To install the Kube-Prometheus stack into your cluster:

  1. Add the prometheus-community Helm repo and update Helm:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    
  2. Install the kube-prometheus-stack chart using the following Helm command, replacing foo with your desired release name:

    helm install foo prometheus-community/kube-prometheus-stack
    

    Note that this command installs the Kube-Prometheus stack into the default Namespace. To modify this, use a values.yaml file to override the defaults or pass in a --set flag. To learn more, see Values Files from the Helm docs.
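
    For example, to install the release into a separate Namespace rather than default (the monitoring Namespace name below is just an illustration):

    helm install foo prometheus-community/kube-prometheus-stack \
      --namespace monitoring \
      --create-namespace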

    Once Helm has finished installing the chart, you should see the following:

    NAME: foo
    LAST DEPLOYED: Fri Jun 25 15:30:30 2021
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    NOTES:
    kube-prometheus-stack has been installed. Check its status by running:
      kubectl --namespace default get pods -l "release=foo"
    
    Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
    
  3. Use kubectl to inspect what’s been installed into the cluster:

    kubectl get pod
    
    alertmanager-foo-kube-prometheus-stack-alertmanager-0   2/2     Running   0          7m3s
    foo-grafana-8547c9db6-vp8pf                             2/2     Running   0          7m6s
    foo-kube-prometheus-stack-operator-6888bf88f9-26c42     1/1     Running   0          7m6s
    foo-kube-state-metrics-76fbc7d6ff-vj872                 1/1     Running   0          7m6s
    foo-prometheus-node-exporter-8qbrz                      1/1     Running   0          7m6s
    foo-prometheus-node-exporter-d4dk4                      1/1     Running   0          7m6s
    foo-prometheus-node-exporter-xplv4                      1/1     Running   0          7m6s
    prometheus-foo-kube-prometheus-stack-prometheus-0       2/2     Running   1          7m3s
    

    This example shows Alertmanager, Grafana, Prometheus Operator, kube-state-metrics, node-exporter, and Prometheus running in the cluster. In addition to these Pods, the stack installs several Kubernetes custom resource definitions (CRDs).

  4. To see the Kubernetes custom resources, run kubectl get crd.
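
    The exact set varies by chart version, but it typically includes Prometheus Operator CRDs such as:

    alertmanagers.monitoring.coreos.com
    podmonitors.monitoring.coreos.com
    prometheuses.monitoring.coreos.com
    prometheusrules.monitoring.coreos.com
    servicemonitors.monitoring.coreos.com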

  5. To access your Prometheus instance, use the kubectl port-forward command to forward a local port into the cluster:

    kubectl port-forward svc/foo-kube-prometheus-stack-prometheus 9090
    

    Replace foo-kube-prometheus-stack-prometheus with the appropriate service name.

  6. Enter http://localhost:9090 in your browser.

    You should see the Prometheus web interface. Click Status and then Targets to see a list of pre-configured scrape targets.

    You can use a similar procedure to access the Grafana and Alertmanager web interfaces.
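
    For example, assuming the chart’s default Service names for a release named foo (verify the exact names with kubectl get svc):

    # Grafana listens on Service port 80; forward it to local port 3000
    kubectl port-forward svc/foo-grafana 3000:80

    # Alertmanager listens on port 9093
    kubectl port-forward svc/foo-kube-prometheus-stack-alertmanager 9093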

Now that you’ve installed the stack in your cluster, you can begin shipping scraped metrics to Grafana Cloud.

Ship metrics to Grafana Cloud

In this section, you’ll configure Prometheus to ship scraped metrics to Grafana Cloud.

Active Series Warning: Shipping your Kubernetes Prometheus metrics to Grafana Cloud using remote_write can result in a significant increase in your active series usage and monthly bill. To estimate the number of series you’ll ship, go to the Prometheus web UI in your cluster, then click Status and TSDB Status to see your Prometheus instance’s stats. Number of series describes the rough number of active series you’ll be shipping to Grafana Cloud. In a later step, you’ll configure Prometheus to drop many of these to control your active series usage. Since you are only billed at the 95th percentile of active series usage, temporary spikes should not result in any cost increase. To learn more, see 95th percentile billing.
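
You can also query this figure directly in the Prometheus expression browser; prometheus_tsdb_head_series is a built-in Prometheus metric that reports the current number of series in the head block:

    prometheus_tsdb_head_series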

You’ll configure Prometheus using the remoteWrite configuration section of the Helm chart’s values.yaml file. You’ll then update the release using helm upgrade.

To ship metrics to Grafana Cloud:

  1. Create a Kubernetes Secret to store your Grafana Cloud Prometheus username and password.

    You can find your username by navigating to your stack in the Cloud Portal and clicking Details next to the Prometheus panel.

    Your password corresponds to an API key that you can generate by clicking on Generate now in this same panel. To learn how to create a Grafana Cloud API key, see Create a Grafana Cloud API key.

    You can create a Secret by using a manifest file or create it directly using kubectl. In this guide you’ll create it directly using kubectl. To learn more about Kubernetes Secrets, consult Secrets from the Kubernetes docs.

    Run the following command to create a Secret called kubepromsecret:

    kubectl create secret generic kubepromsecret \
      --from-literal=username=<your_grafana_cloud_prometheus_username> \
      --from-literal=password='<your_grafana_cloud_API_key>' \
      -n default
    

    If you deployed your monitoring stack in a namespace other than default, change the -n default flag to the appropriate namespace in the above command. To learn more about this command, see Managing Secrets using kubectl from the official Kubernetes docs.
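
    Alternatively, if you prefer the manifest-file approach mentioned above, an equivalent Secret might look like the following sketch; apply the file with kubectl apply -f:

    apiVersion: v1
    kind: Secret
    metadata:
      name: kubepromsecret
      namespace: default
    # stringData lets you supply the values unencoded; Kubernetes
    # base64-encodes them into the Secret's data field on creation
    stringData:
      username: <your_grafana_cloud_prometheus_username>
      password: <your_grafana_cloud_API_key>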

  2. Create a Helm values file named values.yaml in an editor and paste in the snippet below. The snippet defines Prometheus’s remote_write configuration; in a later step, you’ll apply it to the Kube-Prometheus release with helm upgrade.

    prometheus:
      prometheusSpec:
        remoteWrite:
          - url: "<Your Cloud Prometheus instance remote_write endpoint>"
            basicAuth:
              username:
                name: kubepromsecret
                key: username
              password:
                name: kubepromsecret
                key: password
        replicaExternalLabelName: "__replica__"
        externalLabels: {cluster: "test"}
    

    The Helm values file lets you set configuration variables that are passed in to Helm’s chart templates. To see the default values file for Kube-Prometheus stack, see values.yaml from the Kube-Prometheus stack’s GitHub repository.

    The snippet sets the remote_write URL and basic_auth username and password using the Secret created in the previous step.

    The snippet also configures two additional parameters: replicaExternalLabelName and externalLabels.

    Replace test with an appropriate name for your Kubernetes cluster. Prometheus adds the cluster: test and __replica__: prometheus-foo-kube-prometheus-stack-prometheus-0 labels to any samples shipped to Grafana Cloud.

    Configuring these parameters enables automatic metric deduplication in Grafana Cloud, so that you can spin up additional Prometheus instances in a high-availability configuration without storing duplicate samples in your Grafana Cloud Prometheus instance. To learn more, see Sending data from multiple high-availability Prometheus instances.

    If you’re shipping data from multiple Kubernetes clusters, setting the cluster external label also identifies the source cluster and takes advantage of multi-cluster support in many of the Kube-Prometheus dashboards, recording rules, and alerting rules.
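
    For example, once samples arrive in Grafana Cloud with these labels attached, you can scope a query to a single cluster by filtering on the cluster label (the metric name below is just an illustration):

    sum(rate(container_cpu_usage_seconds_total{cluster="test"}[5m]))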

  3. When you’re done editing the file, save and close it.

  4. Roll out the changes with helm upgrade:

    helm upgrade -f values.yaml your_release_name prometheus-community/kube-prometheus-stack
    

    Replace your_release_name with the name of the release you used to install Kube-Prometheus. You can get a list of installed releases using helm list.

  5. Once the changes have been rolled out, use port-forward to navigate to the Prometheus UI:

    kubectl port-forward svc/foo-kube-prometheus-stack-prometheus 9090
    
  6. Navigate to http://localhost:9090 in your browser, and then click Status and Configuration. Verify that the remote_write block you appended above has propagated to your running Prometheus instance.

  7. Log in to your managed Grafana instance to begin querying your cluster data. You can use the Billing/Usage dashboard to inspect incoming data rates in the last 5 minutes to confirm the flow of data to Grafana Cloud.

    To learn more about the difference between Active Series and DPM, see Active series and DPM for billing calculations.

Now that you’re shipping metrics to Grafana Cloud and have configured the appropriate external labels, you’re ready to import your Kube-Prometheus dashboards into your hosted Grafana instance.

Import Dashboards

In this section, you’ll import the prebuilt Kube-Prometheus dashboards from your local Grafana instance into your managed Grafana instance.

Note: To learn how to enable multi-cluster support for Kube-Prometheus dashboards, see Enable multi-cluster support.

This guide uses Grafana’s HTTP API to bulk export and import dashboards, which you can also do using Grafana’s Web UI. You’ll use a lightweight bash script to perform the dump and load. Note that the script does not preserve folder hierarchy and naively downloads all dashboards from a source Grafana instance and uploads them to a target Grafana instance.
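
At its core, the script drives three endpoints of the Grafana HTTP API, roughly sketched below with curl. The variable names match the ones you’ll fill in later; error handling and the export loop are omitted (see the linked page for the actual script):

    # List all dashboards in the source instance
    curl -s -H "Authorization: Bearer $SOURCE_GRAFANA_API_KEY" \
      "$SOURCE_GRAFANA_ENDPOINT/api/search?type=dash-db"

    # Fetch a single dashboard by its UID
    curl -s -H "Authorization: Bearer $SOURCE_GRAFANA_API_KEY" \
      "$SOURCE_GRAFANA_ENDPOINT/api/dashboards/uid/<uid>"

    # Upload a dashboard to the destination instance; the JSON payload
    # wraps the dashboard as {"dashboard": {...}, "overwrite": true}
    curl -s -X POST -H "Authorization: Bearer $DEST_GRAFANA_API_KEY" \
      -H "Content-Type: application/json" \
      -d @dashboard.json \
      "$DEST_GRAFANA_ENDPOINT/api/dashboards/db"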

To import dashboards:

  1. Navigate to Exporting and importing dashboards to hosted Grafana using the HTTP API and save the bash script into a file called dash_migrate.sh.

  2. Create a temporary directory called temp_dir:

    mkdir temp_dir
    
  3. Make the script executable:

    chmod +x dash_migrate.sh
    
  4. Forward a local port to the Grafana service running in your cluster.

    kubectl port-forward svc/foo-grafana 8080:80
    

    Replace foo-grafana with the name of the Grafana service. You can find this using kubectl get svc.

  5. With a port forwarded, log in to your Grafana instance by visiting http://localhost:8080 and entering admin as the username and the value of the chart’s adminPassword parameter as the password.

    If you did not modify this value, you can find the default in the values.yaml file.

  6. Create an API key by clicking on the cog in the left-hand navigation menu, and then API keys.

  7. Note down the API key and local Grafana URL, and fill in the variables at the top of the bash script with the appropriate values:

    SOURCE_GRAFANA_ENDPOINT='http://localhost:8080'
    SOURCE_GRAFANA_API_KEY='your_api_key_here'
    . . .
    
  8. Repeat this process for your hosted Grafana instance, which you can access by navigating to the Cloud Portal. Click on Details next to your stack, and then Log In in the Grafana card. Ensure the API key has the Admin role. Once you’ve noted the endpoint URL and API key, modify the remaining values in the bash script:

    . . .
    DEST_GRAFANA_API_KEY='your_hosted_grafana_api_key_here'
    DEST_GRAFANA_ENDPOINT='https://your_stack_name.grafana.net'
    TEMP_DIR=temp_dir
    
  9. Save and close the file.

  10. Run the script:

    ./dash_migrate.sh -ei
    

    The -e flag exports all dashboards from the source Grafana and saves them in temp_dir, and the -i flag imports the dashboards in temp_dir into the destination Grafana instance.

  11. Now that you’ve imported the Kube-Prometheus dashboards, navigate to your managed Grafana instance, click Dashboards in the left-hand navigation menu, and then Manage. From here you can access the default Kube-Prometheus dashboards that you’ve just imported.

There are several open-source tools that can help you manage dashboards with Grafana using the HTTP API. One tool is Grizzly, which also allows you to work directly with the Jsonnet source used to generate the Kube-Prometheus stack configuration, as well as the generated JSON dashboard files. You can also use the Grafana Terraform provider.

Note that this guide uses the Helm version of the Kube-Prometheus stack, which templates manifest files generated from the underlying Kube-Prometheus project.

Disable local components (optional)

Now that you’ve imported the Kube-Prometheus dashboards to Grafana Cloud, you might want to shut down some of the stack’s components locally. In this section, you’ll turn off the following Kube-Prometheus components:

  • Alertmanager, given that Grafana Cloud provisions a hosted Alertmanager instance integrated into the Grafana UI
  • Grafana

To disable Alertmanager and Grafana:

  1. Add the following to your values.yaml Helm configuration file:

    grafana:
      enabled: false
    alertmanager:
      enabled: false
    
  2. Roll out the changes with helm upgrade:

    helm upgrade -f values.yaml your_release_name prometheus-community/kube-prometheus-stack
    

See Disable local Prometheus rules evaluation to learn how to disable recording and alerting rule evaluation.

Summary

At this point, you’ve rolled out the Kube-Prometheus stack in your cluster using Helm, configured Prometheus to remote_write metrics to Grafana Cloud for long-term storage and efficient querying, and have migrated Kube-Prometheus’s core set of dashboards to Grafana Cloud. Your Grafana Cloud dashboards will now query your Grafana Cloud Prometheus data source directly. Note that your cluster-local Prometheus instance continues to evaluate alerting rules and recording rules. You can optionally migrate these by following the steps in Import recording and alerting rules.

By default, Kube-Prometheus scrapes almost every available endpoint in your cluster, shipping tens of thousands (possibly hundreds of thousands) of active series to Grafana Cloud. To configure Prometheus to ship only the metrics referenced in the dashboards you’ve just uploaded, see Reduce your Prometheus active series usage. You will lose long-term retention for the dropped series; however, they will still be available locally for Prometheus’s default configured retention period.
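
As a preview of the mechanism, metric filtering is done with a relabeling allowlist in the remoteWrite block of your values.yaml. The following is a minimal sketch; the two metric names in the regex are placeholders, and the linked guide derives the real allowlist:

    prometheus:
      prometheusSpec:
        remoteWrite:
          - url: "<Your Cloud Prometheus instance remote_write endpoint>"
            writeRelabelConfigs:
              # Keep only samples whose metric name matches the regex;
              # everything else is dropped before it leaves the cluster
              - sourceLabels: [__name__]
                regex: "node_cpu_seconds_total|kube_pod_info"
                action: keep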