Configuring remote_write with a Prometheus ConfigMap

In this guide you’ll learn how to configure Prometheus to ship scraped samples to Grafana Cloud using Prometheus’s remote_write feature.

This guide assumes you have Prometheus installed and running in your cluster, configured using a Kubernetes ConfigMap. To configure a Prometheus Operator, kube-prometheus, or Helm installation of Prometheus, please see the relevant guide from Configuring Prometheus remote_write for Kubernetes deployments.

Prerequisites

Before you begin, you should have the following available to you:

  • A Kubernetes cluster running version 1.16.0 or later.
  • A Grafana Cloud Standard account. To learn how to create an account, please see Grafana Cloud Quickstarts.
  • A Grafana Cloud API key with the MetricsPublisher role. To learn how to create a Grafana Cloud API key, please see Create a Grafana Cloud API key.
  • The kubectl command-line tool installed on your local machine and configured to connect to your cluster. You can read more about installing kubectl in the official documentation.
  • The Prometheus monitoring system installed and running in your cluster as a Deployment, configured using a ConfigMap. Installing Prometheus goes beyond the scope of this guide, but to learn how to install Prometheus Operator in your cluster, please see the Installing Prometheus Operator with Grafana Cloud for Kubernetes quickstart guide. Prometheus Operator abstracts away much of Prometheus’s configuration and management overhead.
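
As a quick sanity check, you can confirm that a Prometheus Deployment and its ConfigMap exist before you continue. The commands below assume the monitoring Namespace and the resource names used in this guide’s examples; adjust them to match your cluster:

kubectl --namespace monitoring get deployments
kubectl --namespace monitoring get configmaps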

Step 1. Modify Prometheus ConfigMap

Begin by locating your Grafana Cloud Metrics username and password.

You can find your username by navigating to your stack in the Cloud Portal and clicking Details next to the Prometheus panel.

Your password corresponds to an API key that you can generate by clicking on Generate now in this same panel. To learn how to create a Grafana Cloud API key, please see Create a Grafana Cloud API key.

Once you’ve noted your username and password, inject them into your Prometheus configuration file by modifying the Kubernetes ConfigMap resource containing Prometheus’s configuration. Unfortunately, Prometheus does not support expanding environment variables in its configuration file, so you can’t readily pull these credentials from a Kubernetes Secret object in this case. A safer, more elegant solution using Secrets or envsubst goes beyond the scope of this guide.
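
If you don’t have the original ConfigMap manifest on hand, you can export the live object from the cluster and edit that copy instead. The command below assumes the ConfigMap is named prometheus and lives in the monitoring Namespace, as in the example that follows:

kubectl --namespace monitoring get configmap prometheus -o yaml > prometheus-configmap.yaml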

Locate the ConfigMap manifest and open it in your favorite editor. It should look something like the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  namespace: monitoring
data:
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    rule_files:
      - /etc/prometheus/prometheus.rules
    alerting:
      alertmanagers:
      - scheme: http
        static_configs:
        - targets:
          - "alertmanager.monitoring.svc:9093"

. . .

This ConfigMap is installed in the monitoring Namespace, where the Prometheus Deployment should also be running.

Modify this ConfigMap by adding the following remote_write configuration block:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-v2
  namespace: monitoring
data:
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s

. . .

    remote_write:
    - url: <Your Metrics instance remote_write endpoint>
      basic_auth:
        username: <your_grafana_cloud_metrics_username>
        password: <your_grafana_cloud_metrics_password>

You can find the /api/prom/push URL, username, and password for your metrics endpoint by clicking on Details in the Prometheus card of the Cloud Portal.
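
For illustration only, a completed block might look like the following. The endpoint URL and numeric username here are placeholders, not values to copy; use the ones shown in your own Cloud Portal. Note that the numeric username is quoted so that YAML treats it as a string:

    remote_write:
    - url: https://prometheus-us-central1.grafana.net/api/prom/push
      basic_auth:
        username: "123456"
        password: <your_grafana_cloud_api_key>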

This block creates a default remote_write configuration that ships samples to the Cloud Metrics Prometheus endpoint. It also sets the authorization header on remote_write requests with your Grafana Cloud credentials. To tune the default remote_write parameters, please see Remote Write Tuning from the Prometheus documentation.
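
If you do want to tune throughput, the knobs live under a queue_config key inside the same remote_write entry. The following is a sketch only; the values shown are illustrative, not recommendations, so check the Prometheus remote_write documentation for the current defaults before changing them:

    remote_write:
    - url: <Your Metrics instance remote_write endpoint>
      basic_auth:
        username: <your_grafana_cloud_metrics_username>
        password: <your_grafana_cloud_metrics_password>
      queue_config:
        capacity: 2500
        max_shards: 200
        max_samples_per_send: 500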

Be sure to give the ConfigMap a new versioned name as well, by appending a suffix like -v2. Kubernetes resource names must be valid DNS subdomain names, so use a hyphen rather than an underscore.

When you’re done, save and close the file.

In the next step you’ll roll out this updated configuration into your cluster.

Step 2. Update running Prometheus Deployment

To roll out your configuration changes, update the Prometheus Deployment with the new versioned ConfigMap.

You can roll out configuration changes in Kubernetes clusters in many different ways. The steps in this guide focus on configuring remote_write and are not meant to cover blue-green or production Prometheus rollout scenarios.

Because you gave the ConfigMap a new name in the previous step, updating the ConfigMap reference in the Deployment changes the Pod template, so Kubernetes performs a rolling update and replaces any Pods that were using the old ConfigMap.

Open the Prometheus Deployment in a text editor. You should see something like the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  labels:
    app: prometheus
spec:
  replicas: 2
  selector:

. . .

  template:
    metadata:
      labels:
        app: prometheus

. . .

    spec:
      containers:
      - name: prometheus
        image: prom/prometheus
        ports:
        - containerPort: 9090
        volumeMounts:
          - name: config-volume
            mountPath: /etc/prometheus/
      volumes:
        - name: config-volume
          configMap:
            name: prometheus

This file may vary depending on how you configured and deployed Prometheus. The above manifest defines a 2-Pod Prometheus Deployment that references a ConfigMap called prometheus. The prometheus.yml key containing Prometheus’s configuration is mounted to /etc/prometheus/prometheus.yml.

To update this Deployment, change the ConfigMap referenced in the volumes section:

. . .
      volumes:
        - name: config-volume
          configMap:
            name: prometheus-v2

This updates the configMap’s name field from prometheus to prometheus-v2 to reference the new ConfigMap defined in the previous step.

When you’re done, save and close the file.
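
Note that the new prometheus-v2 ConfigMap must exist in the cluster before the updated Deployment can mount it. If you haven’t already applied the ConfigMap manifest you edited in Step 1, do so first (the filename below is a placeholder for wherever you saved it):

kubectl apply -f <your_prometheus_configmap_manifest>.yaml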

Roll out the changes using kubectl apply -f:

kubectl apply -f <your_prometheus_deployment_manifest>.yaml
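
You can then watch the rolling update complete. The Deployment name and label below come from the example manifest above; substitute your own if they differ:

kubectl --namespace monitoring rollout status deployment/prometheus-deployment
kubectl --namespace monitoring get pods -l app=prometheus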

Step 3. Check your work

At this point, you’ve configured Prometheus to remote_write scraped metrics to Grafana Cloud. You can verify that the new configuration is active in your running Prometheus instance using kubectl port-forward.

First, get the Prometheus server Service name:

kubectl --namespace monitoring get svc

Next, use port-forward to forward a local port to the Prometheus Service:

kubectl --namespace monitoring port-forward svc/<prometheus-service-name> 9090:80

Replace monitoring with the appropriate namespace, and <prometheus-service-name> with the name of the Prometheus service.
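
If Prometheus isn’t reachable on http://localhost:9090 after this, your Service may expose a port other than 80 (9090 is also common). You can list the ports the Service exposes with a command like the following, again substituting your Service name, and adjust the second number in the port-forward accordingly:

kubectl --namespace monitoring get svc <prometheus-service-name> -o jsonpath='{.spec.ports[*].port}'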

Navigate to http://localhost:9090 in your browser, then open Status and select Configuration. Verify that the remote_write block you added above has propagated to your running Prometheus instance’s configuration.
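
Optionally, you can also check Prometheus’s own remote-write counters over the same port-forward. The exact metric names vary between Prometheus versions, but if shipping is working you should see the sample counters increase on repeated runs of something like:

curl -s http://localhost:9090/metrics | grep prometheus_remote_storage_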

Finally, log in to your Grafana instance to begin querying your cluster data. You can use the Billing/Usage dashboard to inspect incoming data rates over the last 5 minutes and confirm that data is flowing to Grafana Cloud.