Monitor Darkroom with Prometheus and Grafana Cloud

The open source project Darkroom from Gojek provides a Prometheus exporter so that you can aggregate, scrape, and push metrics to a Prometheus-compatible database. To store your Darkroom application’s Prometheus metrics in a scalable, long-term remote storage service such as Grafana Cloud’s fully-managed Mimir database, follow one of the methods below.

Send Prometheus metrics from Darkroom to Grafana Cloud

Choose one of the following methods for pushing your metrics to Grafana Cloud.

Need a Grafana Cloud account? Create a new account for free.

Set up an agentless scrape job with the Metrics Endpoint integration

  1. Navigate to your Grafana Cloud instance.
  2. Use the left-side navigation to open the Connections console (Home > Connections > Add new connection).
  3. Search for “Metrics Endpoint” to find the Metrics Endpoint configuration page.
  4. Enter your scrape job URL (the Prometheus endpoint), choose between Bearer and Basic under “Type of Authentication Credentials”, and verify the connection with the Test Connection button.
  5. Click “Save Scrape Job” to submit your new agentless scrape job.

That’s it! Scrapes will be automatically performed at the target scrape job URL every 60 seconds.

Learn more about how to get started with Metrics Endpoint in Grafana Cloud

Deploy a Grafana Agent scraping service

If you are already using Grafana Agent in Flow mode, you can create a new prometheus.scrape component to compose advanced Prometheus scrape configurations and easily reuse them, all managed in one place.

Examples of advanced scrape job arguments include settings for the scrape job name, scrape interval, sample limit, label limit, and proxy URL, as well as configuration blocks for OAuth authorization.

View tutorial on collecting Prometheus metrics with Grafana Agent
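As a sketch, a minimal Flow-mode configuration along these lines scrapes a local Darkroom exporter and forwards samples to Grafana Cloud. The target address, component labels, and placeholder credentials below are illustrative assumptions, not values from this guide:

```river
// Scrape the Darkroom exporter; the address and component label are hypothetical.
prometheus.scrape "darkroom" {
  targets    = [{"__address__" = "localhost:3000"}]
  forward_to = [prometheus.remote_write.grafana_cloud.receiver]

  scrape_interval = "60s" // one scrape per minute
}

// Forward scraped samples to your Grafana Cloud Metrics instance.
prometheus.remote_write "grafana_cloud" {
  endpoint {
    url = "<Your Metrics instance remote_write endpoint>"

    basic_auth {
      username = "<Your Metrics instance ID>"
      password = "<Your Cloud Access Policy token>"
    }
  }
}
```

The prometheus.scrape component accepts the same kinds of options discussed above (scrape interval, sample limit, and so on), so one component definition can be reused across targets.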

Push metrics using Prometheus remote write

Below is an example remote_write snippet. Add this remote_write block to your prometheus.yml config file to forward metrics from your local Prometheus instance to fully managed remote storage with Grafana Cloud:

    remote_write:
      - url: <Your Metrics instance remote_write endpoint>
        basic_auth:
          username: <Your Metrics instance ID>
          password: <Your Cloud Access Policy token>

You can find the URL, username, and password for your metrics endpoint in the Cloud Portal: click ‘Details’ on your stack, then select ‘Details’ on the Prometheus card. The target URL and basic auth username appear toward the middle of the page.

Can’t find your Prometheus config file?

The out-of-the-box prometheus.yml config file for Darkroom can be found at the following location:

Additional remote_write documentation

Visit the Configuration Docs to view additional remote_write configuration settings.
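For instance, remote_write supports optional tuning blocks such as queue_config. The values below are illustrative, not recommendations from this guide:

```yaml
remote_write:
  - url: <Your Metrics instance remote_write endpoint>
    basic_auth:
      username: <Your Metrics instance ID>
      password: <Your Cloud Access Policy token>
    # Optional tuning; values shown are examples only.
    queue_config:
      max_samples_per_send: 500   # samples per remote-write request
      batch_send_deadline: 5s     # max wait before sending a partial batch
```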

Control your Grafana Cloud metrics expense

Every Grafana Cloud account includes free-forever storage for 10,000 active series. To reduce your billable metrics usage, we recommend following the guidelines below.

Reduce the number of metric data points pushed to Grafana Cloud

Grafana Cloud provides 1 data point per minute (DPM) resolution for $8 per 1,000 series. You can control the total DPM you push to Grafana Cloud by changing the scrape_interval setting in your prometheus.yml config file.
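This comes down to a single setting. For example, a global 60-second interval in prometheus.yml yields roughly one data point per minute per series; the value shown is illustrative:

```yaml
global:
  scrape_interval: 60s  # scrape once per minute -> ~1 DPM per active series
```

A shorter interval (for example, 15s) multiplies the data points pushed per series and therefore the billable DPM.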

Learn more about how to adjust your Prometheus scrape interval.

Identify and eliminate unused metric data

We designed Adaptive Metrics so that you can easily aggregate away unused metrics. Applying aggregation rules drops unused time series data at ingestion while keeping the metric name and labels, so you can rediscover them later if needed. Adaptive Metrics is available to all Grafana Cloud users at no additional cost.
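To illustrate the idea, an aggregation rule might look roughly like the sketch below, which keeps a sum while dropping high-cardinality labels. The metric name, labels, and field names here are hypothetical; consult the Adaptive Metrics documentation for the exact rule schema:

```json
[
  {
    "metric": "darkroom_http_requests_total",
    "drop_labels": ["instance", "pod"],
    "aggregations": ["sum"]
  }
]
```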

Learn more about how to use Adaptive Metrics.