Getting Started with Cloud Metrics and Logs

Store, Query, and Alert on Data

Grafana Cloud Metrics and Logs gives you a centralized, high-performance, long-term data store for your metrics and logging data. Endpoints for Prometheus, Graphite, and Loki let you ship data from multiple sources to Grafana Cloud, where you can then build dashboards that aggregate, query, and alert on data across all of these sources. Grafana Cloud Metrics and Logs offers blazing fast query performance tuned and optimized by Cortex and Loki maintainers, and horizontally scalable alerting and rule evaluation with Grafana Cloud Alerting.

Get started with Grafana Cloud Metrics and Logs:

  • If you have existing Prometheus, Graphite, and/or Loki instances:

    • Ship your Prometheus series to Grafana Cloud: Using Prometheus’s remote_write feature, you can ship copies of scraped samples to your Grafana Cloud Prometheus metrics service; a minimal remote_write example appears after this list. To learn how to enable remote_write, please see Metrics — Prometheus from the Grafana Cloud docs. If you’re using Helm to manage Prometheus, configure remote_write using the Helm chart’s values file. Please see Values files from the Helm docs for more information on configuring Helm charts.

    • Ship your Graphite metrics to Grafana Cloud: carbon-relay-ng allows you to aggregate, filter, and route your Graphite metrics to Grafana Cloud. To learn how to configure a carbon-relay-ng instance in your local environment to ship Graphite data to Grafana Cloud, please see How to Stream Graphite Metrics to Grafana Cloud using carbon-relay-ng.

    • Ship your Loki logs to Grafana Cloud: The Loki log aggregation stack uses Promtail as the agent that ships logs to either a Loki instance or Grafana Cloud. To learn how to install and configure Promtail to ship logs to Grafana Cloud, navigate to Send Logs in the Loki section of the Grafana Cloud Portal. There, you’ll find instructions for installing and configuring Promtail in both Kubernetes clusters and on standalone hosts; a sketch of a Promtail configuration also appears after this list.

    • Trace program execution with Tempo: The Tempo tracing service is currently in beta, and the Tempo documentation is available for early adopters who want to try it out. Documentation covering Cloud-specific aspects will follow once Tempo is officially released.

  • If you’re starting from scratch:

    • Install and configure Prometheus: Prometheus scrapes, stores, and alerts on metrics collected from one or more monitoring targets. Using its remote_write feature, you can then ship these collected samples to a remote endpoint like Grafana Cloud for long-term storage and aggregation. To learn how to install Prometheus, please see Installation from the Prometheus documentation. Prometheus relies on exporters to expose Prometheus-style metrics for systems in your environment; for example, Node exporter exposes hardware and OS metrics for *NIX systems. To get started with exporters, please see Exporters and Integrations. A minimal Prometheus scrape configuration appears after this list.

    • Deploy the Grafana Cloud Agent: The Grafana Cloud Agent is a lightweight, push-style, Prometheus-based metrics and log collector. It is a pared-down version of Prometheus without any querying or local storage, and it can reduce the scraper’s memory footprint by up to 40% relative to a full Prometheus instance. With the Grafana Cloud Agent, you can avoid maintaining and scaling a Prometheus instance in your environment and split collection workloads across nodes in your fleet. It currently supports Prometheus metrics, Promtail-style log collection, and everything needed for metrics collection with the currently available Integrations. To learn more about configuring and deploying the Grafana Cloud Agent, please see Running Grafana Cloud Agent from the Grafana Cloud Agent GitHub repository. A sketch of an Agent configuration appears after this list.
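
For reference, enabling remote_write can be as simple as adding a block like the following to an existing prometheus.yml. The push URL, instance ID, and API key shown here are placeholders; use the values listed for your Prometheus endpoint in the Grafana Cloud Portal:

    remote_write:
      # Placeholder endpoint; copy the exact push URL from your Cloud Portal.
      - url: https://prometheus-us-central1.grafana.net/api/prom/push
        basic_auth:
          username: <your-metrics-instance-id>
          password: <your-grafana-cloud-api-key>

If you manage Prometheus with Helm, the same block belongs in the chart’s values file rather than in prometheus.yml; the exact key depends on the chart you use.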
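
Promtail follows the same pattern: the clients section points at your Grafana Cloud Loki push endpoint, and scrape_configs describes which log files to tail. This is a minimal sketch for a standalone host; the URL, user ID, and API key are placeholders, and the Send Logs page in the Cloud Portal generates a configuration with your actual values:

    server:
      http_listen_port: 9080
      grpc_listen_port: 0

    positions:
      filename: /tmp/positions.yaml   # where Promtail records how far it has read each file

    clients:
      # Placeholder endpoint; use the Loki push URL, user, and API key from your Cloud Portal.
      - url: https://<user-id>:<api-key>@logs-prod-us-central1.grafana.net/loki/api/v1/push

    scrape_configs:
      - job_name: system
        static_configs:
          - targets: [localhost]
            labels:
              job: varlogs
              __path__: /var/log/*log   # tail every file matching this glob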
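
If you are starting from scratch, a minimal prometheus.yml that scrapes a local Node exporter might look like the following; the scrape interval and the localhost:9100 target are illustrative defaults:

    global:
      scrape_interval: 15s   # how often Prometheus scrapes its targets

    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ['localhost:9100']   # Node exporter's default port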
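
Finally, here is a rough sketch of a Grafana Cloud Agent configuration that scrapes the same Node exporter and ships the samples to Grafana Cloud. Field names have changed between Agent versions (the prometheus block, for example, was later renamed), so treat this as illustrative and check the Agent repository for the schema that matches your version; the endpoint and credentials are again placeholders:

    server:
      log_level: info

    prometheus:
      wal_directory: /tmp/grafana-agent-wal   # local write-ahead log used before samples are pushed
      global:
        scrape_interval: 15s
      configs:
        - name: default
          scrape_configs:
            - job_name: node
              static_configs:
                - targets: ['localhost:9100']
          remote_write:
            # Placeholder endpoint; copy the push URL and credentials from your Cloud Portal.
            - url: https://prometheus-us-central1.grafana.net/api/prom/push
              basic_auth:
                username: <your-metrics-instance-id>
                password: <your-grafana-cloud-api-key>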