Monitor Data Flow with Prometheus and Grafana Cloud
The open source Data Flow project from Spring Cloud provides a Prometheus exporter so that you can scrape metrics from your server at any point in time. For scalable, long-term metrics storage, configure your Prometheus instance to push scraped samples to a compatible remote storage endpoint, such as Grafana Cloud's serverless Mimir database, using remote_write. The example below shows how to configure remote_write in the prometheus.yml config file for Data Flow.
Prometheus Configuration for Data Flow
The prometheus.yml config file for Data Flow can be found at the following location:
Push metrics from Data Flow to Grafana Cloud using remote_write
Below is an example remote_write snippet to add to your prometheus.yml config file in order to forward metrics from your local Prometheus instance to remote storage in Grafana Cloud:
remote_write:
  - url: <Your Metrics instance remote_write endpoint>
    basic_auth:
      username: <Your Metrics instance ID>
      password: <Your Cloud Access Policy token>
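For context, here is a minimal prometheus.yml sketch showing where the remote_write block sits alongside a scrape job for a Data Flow server. The job name, target host, and port are assumptions for illustration; adjust them to match your deployment (Data Flow exposes metrics through Micrometer, typically at the Spring Boot actuator's /actuator/prometheus path):

```yaml
global:
  scrape_interval: 60s  # one sample per minute per series (1 DPM)

scrape_configs:
  # Hypothetical job scraping a local Data Flow server's metrics endpoint
  - job_name: scdf
    metrics_path: /actuator/prometheus
    static_configs:
      - targets: ['localhost:9393']

remote_write:
  - url: <Your Metrics instance remote_write endpoint>
    basic_auth:
      username: <Your Metrics instance ID>
      password: <Your Cloud Access Policy token>
```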
You can find the https://...grafana.net/api/prom/push URL, username, and password for your metrics endpoint in the Cloud Portal: click Details on your stack, then select Details for Prometheus. The connection details display on the right. Need an account? Create a new Grafana Cloud account for free.
Learn more about the available remote_write configuration parameters in the Prometheus.io configuration docs.
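As a sketch of what those parameters look like in practice, the snippet below extends the basic remote_write block with optional tuning from the Prometheus configuration reference. The numeric values and the dropped metric pattern are illustrative assumptions, not recommendations:

```yaml
remote_write:
  - url: <Your Metrics instance remote_write endpoint>
    basic_auth:
      username: <Your Metrics instance ID>
      password: <Your Cloud Access Policy token>
    # Optional queue tuning (values here are illustrative):
    queue_config:
      capacity: 10000           # samples buffered per shard
      max_samples_per_send: 2000
      batch_send_deadline: 5s   # max wait before sending a partial batch
    # Optionally drop series before they leave Prometheus:
    write_relabel_configs:
      - source_labels: [__name__]
        regex: 'jvm_.*'
        action: drop
```

Dropping unneeded series with write_relabel_configs is also one way to stay within an active-series budget.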
Control the number of metric data points pushed to Grafana Cloud:
Every Grafana Cloud account includes free-forever storage for 10,000 active series. Grafana Cloud provides an additional 1 data point per minute (DPM) of resolution for $8 per 1,000 series. You can adjust the total DPM you push to Grafana Cloud by changing the scrape_interval setting in your prometheus.yml config file. Learn more about how to adjust your Prometheus scrape interval.
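The relationship between scrape interval and DPM can be sketched with a little arithmetic, assuming each active series yields one sample per scrape (the function below is purely illustrative, not part of any Grafana or Prometheus API):

```python
def data_points_per_minute(active_series: int, scrape_interval_seconds: int) -> float:
    """Approximate samples pushed per minute for a given scrape interval,
    assuming every active series produces one sample per scrape."""
    samples_per_series_per_minute = 60 / scrape_interval_seconds
    return active_series * samples_per_series_per_minute

# A 60s scrape_interval keeps each series at 1 DPM:
print(data_points_per_minute(10_000, 60))  # 10000.0
# Halving the interval to 30s doubles the total DPM:
print(data_points_per_minute(10_000, 30))  # 20000.0
```

In other words, lengthening scrape_interval from 30s to 60s halves the data points you push without changing the number of active series.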