Slide 8 of 10

GCP logs - Alloy + Pub/Sub

How it works

Complexity: Moderate | Infrastructure: Pub/Sub + Alloy | Latency: Streaming

GCP logs collection pipeline

Trade-offs

Pros:
- Reliable Pub/Sub delivery
- Filter at sink (reduce volume)
- Workload identity
- Full processing pipeline

Cons:
- Pub/Sub costs
- Alloy infrastructure
- Setup complexity

Documentation

View the full documentation. Learning path coming soon!

GCP logs

Script

For GCP, log collection looks a bit different. There’s no serverless Lambda-equivalent approach here.

The standard architecture uses log sinks, Pub/Sub, and Alloy working together.

Here’s how it flows: you create a log sink in Cloud Logging with filters to select which logs you want. That sink routes logs to a Pub/Sub topic. Pub/Sub is Google’s managed messaging service. Then Alloy subscribes to that Pub/Sub topic, receives the logs, processes them, and forwards to Loki.
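The Alloy side of that flow can be sketched with the loki.source.gcplog component, which pulls from a Pub/Sub subscription and hands logs to a Loki writer. The project ID, subscription name, and endpoint URL below are placeholders, not values from this deck:

```alloy
// Pull logs from the Pub/Sub subscription the log sink publishes to.
// "my-gcp-project" and "grafana-logs-sub" are hypothetical names.
loki.source.gcplog "pubsub" {
  pull {
    project_id   = "my-gcp-project"
    subscription = "grafana-logs-sub"
  }
  forward_to = [loki.write.default.receiver]
}

// Forward processed logs to Loki (Grafana Cloud endpoint is illustrative).
loki.write "default" {
  endpoint {
    url = "https://logs-prod-example.grafana.net/loki/api/v1/push"
  }
}
```

Transformation and enrichment stages (loki.process, for example) would slot between the source and the writer in the same forward_to chain.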

Why this architecture? Pub/Sub provides reliable, scalable message delivery. Log sink filters let you reduce volume at the source, so you only export what you actually need. And Alloy gives you the full processing pipeline for transformation and enrichment.
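Filtering at the sink looks like this with gcloud. This is a sketch: the project, topic, and filter are made-up examples, and the filter uses the Cloud Logging query language:

```shell
# Create the Pub/Sub topic that will receive exported logs
# ("grafana-logs" and "my-gcp-project" are hypothetical names).
gcloud pubsub topics create grafana-logs

# Create a log sink that exports only what you need, e.g. Compute Engine
# logs at WARNING severity and above.
gcloud logging sinks create grafana-sink \
  pubsub.googleapis.com/projects/my-gcp-project/topics/grafana-logs \
  --log-filter='resource.type="gce_instance" AND severity>=WARNING'
```

Tightening that --log-filter is the main lever for controlling Pub/Sub costs, since anything the filter excludes never leaves Cloud Logging.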

Yes, it’s more infrastructure than the serverless options for AWS and Azure. But once you set it up, which you can template with Terraform, it scales beautifully. This is how organizations successfully get GCP logs into Grafana Cloud.
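A Terraform template for the sink-to-topic plumbing might look like the following. All resource names and the filter are illustrative assumptions, not values from this deck:

```hcl
# Hypothetical names throughout; adjust to your project.
resource "google_pubsub_topic" "logs" {
  name = "grafana-logs"
}

resource "google_pubsub_subscription" "alloy" {
  name  = "grafana-logs-sub"
  topic = google_pubsub_topic.logs.id
}

resource "google_logging_project_sink" "grafana" {
  name                   = "grafana-sink"
  destination            = "pubsub.googleapis.com/${google_pubsub_topic.logs.id}"
  filter                 = "severity>=WARNING"
  unique_writer_identity = true
}

# Grant the sink's writer identity permission to publish to the topic.
resource "google_pubsub_topic_iam_member" "sink_writer" {
  topic  = google_pubsub_topic.logs.id
  role   = "roles/pubsub.publisher"
  member = google_logging_project_sink.grafana.writer_identity
}
```

Once this is in a module, adding another team or project is a matter of re-applying it with different variables rather than clicking through the console.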