otelcol.exporter.googlecloudpubsub
Community: This component is developed, maintained, and supported by the Alloy user community. Grafana doesn’t offer commercial support for this component. To enable and use community components, you must set the --feature.community-components.enabled flag to true.
otelcol.exporter.googlecloudpubsub accepts metrics, traces, and logs from other otelcol components and sends them to a Google Cloud Pub/Sub topic.
Note
otelcol.exporter.googlecloudpubsub is a wrapper over the upstream OpenTelemetry Collector googlecloudpubsub exporter. Bug reports or feature requests will be redirected to the upstream repository, if necessary.
You can specify multiple otelcol.exporter.googlecloudpubsub components by giving them different labels.
Usage
otelcol.exporter.googlecloudpubsub "<LABEL>" {
project = "<PROJECT-ID>"
topic = "projects/<PROJECT-ID>/topics/<TOPIC-NAME>"
}
Authenticating
Refer to the Google Cloud Pub/Sub Exporter and Google Cloud Exporter documentation for more detailed information about authentication.
Arguments
You can use the following arguments with otelcol.exporter.googlecloudpubsub:
Name | Type | Description | Default | Required |
---|---|---|---|---|
topic | string | The topic name to send OTLP data over. The topic name should be a fully qualified resource name, for example, projects/otel-project/topics/otlp . | "" | yes |
compression | string | The compression used on the data sent to the topic. Only gzip is supported. Default is no compression. | "" | no |
endpoint | string | Override the default Pub/Sub endpoint. This is useful when connecting to the Pub/Sub emulator instance or switching between global and regional service endpoints. | "" | no |
insecure | bool | Allows performing insecure SSL connections and transfers. This is useful when connecting to a local emulator instance. Only has effect if you set endpoint . | false | no |
project | string | Google Cloud Platform project identifier. | Fetch from credentials | no |
timeout | Duration | Timeout for calls to the Pub/Sub API. | "12s" | no |
user_agent | string | Override the user agent string on requests to Cloud Monitoring. This only applies to metrics. Specify {{version}} to include the application version number. | "opentelemetry-collector-contrib {{version}}" | no |
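For example, the endpoint and insecure arguments let you point the exporter at a local Pub/Sub emulator instead of the public API. The following sketch assumes the emulator listens on localhost:8085; the project and topic names are illustrative:

```alloy
otelcol.exporter.googlecloudpubsub "emulator" {
  // Illustrative project and topic names.
  project = "test-project"
  topic   = "projects/test-project/topics/otlp"

  // Assumed local emulator address; adjust to match your setup.
  // insecure only takes effect because endpoint is set.
  endpoint = "localhost:8085"
  insecure = true

  // Optional: compress payloads. Only gzip is supported.
  compression = "gzip"
}
```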
Blocks
You can use the following blocks with otelcol.exporter.googlecloudpubsub:
Block | Description | Required |
---|---|---|
debug_metrics | Configures the metrics that this component generates to monitor its state. | no |
ordering | Configures the Pub/Sub ordering feature. | no |
retry_on_failure | Configures the retry behavior when requests to Pub/Sub fail. | no |
sending_queue | Configures an in-memory buffer of batches before data is sent. | no |
watermark | Configures how the ce-time attribute is set on outgoing messages. | no |
debug_metrics
The debug_metrics block configures the metrics that this component generates to monitor its state.
The following arguments are supported:
Name | Type | Description | Default | Required |
---|---|---|---|---|
disable_high_cardinality_metrics | boolean | Whether to disable certain high cardinality metrics. | true | no |
disable_high_cardinality_metrics is the Alloy equivalent to the telemetry.disableHighCardinalityMetrics feature gate in the OpenTelemetry Collector. It removes attributes that could cause high cardinality metrics. For example, attributes with IP addresses and port numbers in metrics about HTTP and gRPC connections are removed.
Note
If configured, disable_high_cardinality_metrics only applies to otelcol.exporter.* and otelcol.receiver.* components.
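The following sketch shows how to re-enable the high cardinality metrics that are dropped by default; the project and topic names are illustrative:

```alloy
otelcol.exporter.googlecloudpubsub "default" {
  project = "my-gcp-project"
  topic   = "projects/my-gcp-project/topics/otlp"

  debug_metrics {
    // Keep attributes such as IP addresses and port numbers
    // on the component's own telemetry.
    disable_high_cardinality_metrics = false
  }
}
```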
ordering
The ordering block configures the Pub/Sub ordering feature.
The following arguments are supported:
Name | Type | Description | Default | Required |
---|---|---|---|---|
enabled | bool | Enables ordering. | false | no |
from_resource_attribute | string | Resource attribute used as the ordering key. Required when enabled is true . If the resource attribute is missing or has an empty value, messages aren’t ordered for this resource. | "" | no |
remove_resource_attribute | string | Whether the ordering key resource attribute specified by from_resource_attribute should be removed from the resource attributes. | "" | no |
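For example, the following sketch orders messages by a resource attribute. The attribute name tenant.id and the project and topic names are illustrative assumptions:

```alloy
otelcol.exporter.googlecloudpubsub "ordered" {
  project = "my-gcp-project"
  topic   = "projects/my-gcp-project/topics/otlp"

  ordering {
    enabled = true
    // Use the value of this resource attribute as the Pub/Sub ordering key.
    // Resources without this attribute aren't ordered.
    from_resource_attribute = "tenant.id"
  }
}
```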
retry_on_failure
The retry_on_failure block configures how failed requests to Google Cloud Pub/Sub are retried.
The following arguments are supported:
Name | Type | Description | Default | Required |
---|---|---|---|---|
enabled | boolean | Enables retrying failed requests. | true | no |
initial_interval | duration | Initial time to wait before retrying a failed request. | "5s" | no |
max_elapsed_time | duration | Maximum time to wait before discarding a failed batch. | "5m" | no |
max_interval | duration | Maximum time to wait between retries. | "30s" | no |
multiplier | number | Factor to grow wait time before retrying. | 1.5 | no |
randomization_factor | number | Factor to randomize wait time before retrying. | 0.5 | no |
When enabled is true, failed batches are retried after a given interval. The initial_interval argument specifies how long to wait before the first retry attempt. If requests continue to fail, the time to wait before retrying increases by the factor specified by the multiplier argument, which must be greater than 1.0. The max_interval argument specifies the upper bound of how long to wait between retries.

The randomization_factor argument is useful for adding jitter between retrying Alloy instances. If randomization_factor is greater than 0, the wait time before retries is multiplied by a random factor in the range [ I - randomization_factor * I, I + randomization_factor * I], where I is the current interval.

If a batch hasn’t been sent successfully, it’s discarded after the time specified by max_elapsed_time elapses. If max_elapsed_time is set to "0s", failed requests are retried forever until they succeed.
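The following sketch shows how these arguments fit together. The values and the project and topic names are illustrative, not recommendations:

```alloy
otelcol.exporter.googlecloudpubsub "default" {
  project = "my-gcp-project"
  topic   = "projects/my-gcp-project/topics/otlp"

  retry_on_failure {
    initial_interval = "10s" // Wait 10 seconds before the first retry.
    max_interval     = "60s" // Never wait more than a minute between retries.
    max_elapsed_time = "10m" // Discard a batch after 10 minutes of failed retries.
  }
}
```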
sending_queue
The sending_queue block configures an in-memory buffer of batches before data is sent to the Pub/Sub API.
The following arguments are supported:
Name | Type | Description | Default | Required |
---|---|---|---|---|
block_on_overflow | boolean | The behavior when the component’s TotalSize limit is reached. | false | no |
enabled | boolean | Enables a buffer before sending data to the client. | true | no |
num_consumers | number | Number of readers to send batches written to the queue in parallel. | 10 | no |
queue_size | number | Maximum number of unwritten batches allowed in the queue at the same time. | 1000 | no |
sizer | string | How the queue and batching is measured. | "requests" | no |
wait_for_result | boolean | Determines if incoming requests are blocked until the request is processed or not. | false | no |
storage | capsule(otelcol.Handler) | Handler from an otelcol.storage component to use to enable a persistent queue mechanism. | | no |
The blocking argument is deprecated in favor of the block_on_overflow argument. When block_on_overflow is true, the component will wait for space. Otherwise, operations will immediately return a retryable error.

When enabled is true, data is first written to an in-memory buffer before sending it to the configured server. Batches sent to the component’s input exported field are added to the buffer as long as the number of unsent batches doesn’t exceed the configured queue_size.

queue_size determines how long an endpoint outage is tolerated. Assuming 100 requests/second, the default queue size 1000 provides about 10 seconds of outage tolerance. To calculate the correct value for queue_size, multiply the average number of outgoing requests per second by the time in seconds that outages are tolerated. A very high value can cause Out Of Memory (OOM) kills.
The sizer argument can be set to:

- requests: number of incoming batches of metrics, logs, traces (the most performant option).
- items: number of the smallest parts of each signal (spans, metric data points, log records).
- bytes: the size of serialized data in bytes (the least performant option).

The num_consumers argument controls how many readers read from the buffer and send data in parallel. Larger values of num_consumers allow data to be sent more quickly at the expense of increased network traffic.

If an otelcol.storage.* component is configured and provided in the queue’s storage argument, the queue uses the provided storage extension to provide a persistent queue and the queue is no longer stored in memory. Any data persisted will be processed on startup if Alloy is killed or restarted.

Refer to the exporterhelper documentation in the OpenTelemetry Collector repository for more details.
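The following sketch combines a larger queue with a persistent, file-backed buffer. It assumes the otelcol.storage.file component, its directory argument, and its handler export are available in your Alloy version; the sizing numbers and names are illustrative:

```alloy
otelcol.storage.file "pubsub" {
  // Assumed argument: directory where queued batches persist across restarts.
  directory = "/var/lib/alloy/otelcol-queue"
}

otelcol.exporter.googlecloudpubsub "default" {
  project = "my-gcp-project"
  topic   = "projects/my-gcp-project/topics/otlp"

  sending_queue {
    // Roughly 50 requests/second * 30 seconds of tolerated outage = 1500 batches.
    queue_size    = 1500
    num_consumers = 10

    // Persist the queue on disk instead of keeping it only in memory.
    storage = otelcol.storage.file.pubsub.handler
  }
}
```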
watermark
The watermark block configures how the ce-time attribute is set on outgoing messages.
The following arguments are supported:
Name | Type | Description | Default | Required |
---|---|---|---|---|
behavior | string | current sets the ce-time attribute to the system clock, earliest sets the attribute to the smallest timestamp of all the messages. | "" | no |
allow_drift | Duration | The maximum difference the ce-time attribute can have from the system clock. If you set allow_drift to 0s and behavior to earliest, the maximum drift from the clock is allowed. | "0s" | no |
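A sketch of setting the watermark behaviour, with illustrative project and topic names:

```alloy
otelcol.exporter.googlecloudpubsub "default" {
  project = "my-gcp-project"
  topic   = "projects/my-gcp-project/topics/otlp"

  watermark {
    // Use the smallest timestamp in the batch as the ce-time attribute,
    // but never more than one hour behind the system clock.
    behavior    = "earliest"
    allow_drift = "1h"
  }
}
```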
Exported fields
The following fields are exported and can be referenced by other components:
Name | Type | Description |
---|---|---|
input | otelcol.Consumer | A value other components can use to send telemetry data to. |
input accepts otelcol.Consumer data for any telemetry signal, including metrics, logs, and traces.
Component health
otelcol.exporter.googlecloudpubsub is only reported as unhealthy if given an invalid configuration.
Debug information
otelcol.exporter.googlecloudpubsub doesn’t expose any component-specific debug information.
Example
This example reads logs from local files, converts them to OpenTelemetry format through a receiver, and sends them to Pub/Sub. This configuration includes the recommended memory_limiter and batch processors, which avoid high reporting latency and ensure the collector stays stable by dropping telemetry when memory limits are reached.
local.file_match "logs" {
path_targets = [{
__address__ = "localhost",
__path__ = "/var/log/{syslog,messages,*.log}",
instance = constants.hostname,
job = "integrations/node_exporter",
}]
}
loki.source.file "logs" {
targets = local.file_match.logs.targets
forward_to = [otelcol.receiver.loki.gcp.receiver]
}
otelcol.receiver.loki "gcp" {
output {
logs = [otelcol.processor.memory_limiter.gcp.input]
}
}
otelcol.processor.memory_limiter "gcp" {
check_interval = "1s"
limit = "200MiB"
output {
metrics = [otelcol.processor.batch.gcp.input]
logs = [otelcol.processor.batch.gcp.input]
traces = [otelcol.processor.batch.gcp.input]
}
}
otelcol.processor.batch "gcp" {
output {
metrics = [otelcol.exporter.googlecloudpubsub.default.input]
logs = [otelcol.exporter.googlecloudpubsub.default.input]
traces = [otelcol.exporter.googlecloudpubsub.default.input]
}
}
otelcol.exporter.googlecloudpubsub "default" {
project = "my-gcp-project"
topic = "projects/my-gcp-project/topics/my-pubsub-topic"
}
Compatible components
otelcol.exporter.googlecloudpubsub has exports that can be consumed by the following components:
- Components that consume OpenTelemetry otelcol.Consumer
Note
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.