Slide 3 of 12

AWS metrics scrape jobs

How it works

AWS scrape jobs architecture: Grafana Cloud pulls metrics from CloudWatch API

Complexity: Simple | Infrastructure: None | Latency: Minutes

Trade-offs

Pros                           | Cons
Configure in Grafana Cloud UI  | 1-5 minute latency
No AWS infrastructure needed   | CloudWatch API costs
Covers 60+ AWS services        | API rate limits at scale
Automatic service discovery    | Cross-account needs IAM setup
Converts to PromQL             |

When to use

  • Quick start and evaluation
  • Proof-of-concept setups
  • Small environments with moderate metric volume

For production: Consider metric streams for a better cost profile, lower latency, and easier maintenance.

Learning path

Configure this approach step by step.

AWS metrics scrape jobs

Script

CloudWatch scrape jobs are the fastest way to get AWS metrics into Grafana Cloud. Perfect for evaluation or quick proof-of-concept setups.

Here’s how it works. You go into the Grafana Cloud UI, set up an AWS integration, and provide IAM credentials, typically a read-only user or role. You select which regions to monitor and which CloudWatch namespaces to scrape: maybe EC2, RDS, Lambda, or all of them.
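The read-only access described above can be sketched as an IAM policy like the one below. This is an illustrative minimum, not the official policy Grafana Cloud publishes; the exact set of actions the integration requires may differ, so verify against the current Grafana Cloud setup instructions before using it.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GrafanaCloudWatchReadOnly",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricData",
        "tag:GetResources"
      ],
      "Resource": "*"
    }
  ]
}
```

Attaching this to a dedicated role (rather than a user with long-lived keys) is generally the safer pattern for cross-account access.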

Grafana Cloud then queries the CloudWatch API on a schedule and pulls those metrics in. They get converted to Prometheus format automatically, so you can query them with PromQL just like any other metrics in Grafana.
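As an illustration of the PromQL conversion, a scraped EC2 CPU metric might be queried like this. The metric and label names here are assumptions based on the common aws_<namespace>_<metric>_<statistic> naming pattern; check the metrics browser in your own stack for the exact names the integration produces.

```promql
avg by (dimension_InstanceId) (aws_ec2_cpuutilization_average)
```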

The beauty is simplicity: nothing to deploy in your AWS account, no infrastructure to manage.

The trade-offs? Latency. CloudWatch metrics are typically 1-5 minutes old by the time they're available via the API. And at scale, CloudWatch API costs add up, and you might hit rate limits.
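To make the cost point concrete, here is a rough back-of-the-envelope sketch in Python. The $0.01-per-1,000-metrics GetMetricData price and the 5-minute scrape interval are assumptions for illustration; check current CloudWatch pricing for your region, and note that ListMetrics calls and cross-region traffic add to the real bill.

```python
# Rough estimate of monthly CloudWatch API cost for a scrape-based setup.
# The per-1,000-metrics price and scrape interval are assumptions, not
# official figures; verify against current CloudWatch pricing.

def monthly_api_cost(num_metrics: int,
                     scrape_interval_s: int = 300,
                     price_per_1k_metrics: float = 0.01) -> float:
    """Estimated monthly USD cost of requesting `num_metrics` CloudWatch
    metrics once every `scrape_interval_s` seconds."""
    seconds_per_month = 30 * 24 * 3600
    scrapes_per_month = seconds_per_month / scrape_interval_s
    metrics_requested = num_metrics * scrapes_per_month
    return metrics_requested / 1000 * price_per_1k_metrics

# Example: 5,000 metrics scraped every 5 minutes.
print(f"${monthly_api_cost(5000):.2f}/month")  # → $432.00/month
```

Even at modest scale, per-request pricing grows linearly with both metric count and scrape frequency, which is why metric streams (billed per metric update pushed) tend to win for production volumes.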

For production workloads, we recommend metric streams instead. They have a better cost profile, lower latency for alerting, and easier long-term maintenance. But scrape jobs are great for getting started quickly and validating your setup.