otelcol.processor.metric_start_time
otelcol.processor.metric_start_time accepts metrics from other otelcol components and sets the start time for cumulative metric data points that don't already have one.
This processor is commonly used with otelcol.receiver.prometheus, which produces metric points without a start time.
Note
otelcol.processor.metric_start_time is a wrapper over the upstream OpenTelemetry Collector metricstarttime processor. Bug reports or feature requests will be redirected to the upstream repository, if necessary.
You can specify multiple otelcol.processor.metric_start_time components by giving them different labels.
Usage
otelcol.processor.metric_start_time "<LABEL>" {
output {
metrics = [...]
}
}Arguments
You can use the following arguments with otelcol.processor.metric_start_time:

- strategy: How the processor sets missing start times for cumulative metrics. The default is true_reset_point. Refer to Strategies for the available values.
- gc_interval: How often to check whether a resource has stopped emitting data so its cached state can be removed. Refer to Garbage collection.
- start_time_metric_regex: A regular expression used to find the start time metric when strategy is set to start_time_metric.
Strategies
The strategy argument determines how the processor handles missing start times for cumulative metrics. Valid values are:
true_reset_point (default)
Produces a stream of points that starts with a True Reset Point. The true reset point has its start time set to its end timestamp, indicating the absolute value of the cumulative point when the collector first observed it. Subsequent points reuse the start timestamp of the initial true reset point.
Pros:
- The absolute value of the cumulative metric is preserved.
- It is possible to calculate the correct rate between any two points since the timestamps and values are not modified.
Cons:
- This strategy is stateful, because the initial True Reset Point is needed to correctly calculate rates on subsequent points.
- The True Reset Point doesn't make sense semantically: it has a zero duration but a non-zero value.
- Many backends reject points with equal start and end timestamps.
- If the True Reset Point is rejected, the next point appears to have a very large rate.
Example transformation:
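For instance, suppose the processor receives cumulative points without a start time at timestamps T1, T2, and T3 with values 10, 15, and 25. The values are illustrative, not taken from a real workload. The output would be:

- (start_time = T1, timestamp = T1, value = 10), the True Reset Point
- (start_time = T1, timestamp = T2, value = 15)
- (start_time = T1, timestamp = T3, value = 25)

Timestamps and values are unchanged; only the start times are added.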
subtract_initial_point
Drops the first point in a cumulative series, subtracting that point’s value from subsequent points and using the initial point’s timestamp as the start timestamp for subsequent points.
Pros:
- Cumulative semantics are preserved. This means that for a point with a given [start, end] interval, the cumulative value occurred in that interval.
- Rates over the resulting timeseries are correct, even if points are lost.
- This strategy isn't stateful.
Cons:
- The absolute value of counters is modified. This is generally not an issue, since counters are usually used to compute rates.
- The initial point is dropped, which loses information.
Example transformation:
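Using the same illustrative input, cumulative points at T1, T2, and T3 with values 10, 15, and 25 would become:

- The point at T1 is dropped.
- (start_time = T1, timestamp = T2, value = 5)
- (start_time = T1, timestamp = T3, value = 15)

The initial value of 10 is subtracted from the subsequent points, and T1 becomes their start timestamp.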
start_time_metric
Looks for the process_start_time metric (or a metric matching start_time_metric_regex) and uses its value as the start time for all other cumulative points in the batch of metrics.
If the start time metric is not found, it falls back to the time at which the collector started.
This strategy should only be used in limited circumstances:
- When your application has a metric with the start time in Unix seconds, such as process_start_time_seconds.
- When the processor runs before any batching, so that all metrics in a batch originate from a single application.
- When the collector runs as a sidecar to the application, so that the collector's start time is a good approximation of the application's start time.
Cons:
- If the collector’s start time is used as a fallback and the collector restarts, it can produce rates that are incorrect and higher than expected.
- The process’ start time isn’t the time at which individual instruments or timeseries are initialized. It may result in lower rates if the first observation is significantly later than the process’ start time.
Example transformation:
Given a process_start_time_seconds metric with value T0:
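With illustrative cumulative points at timestamps T1 and T2 with values 10 and 15, the output would be:

- (start_time = T0, timestamp = T1, value = 10)
- (start_time = T0, timestamp = T2, value = 15)

Values are unchanged; every cumulative point in the batch gets T0 as its start time.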
Garbage collection
The gc_interval argument defines how often to check if any resources have not emitted data since the last check.
If a resource hasn’t emitted any data, it’s removed from the cache to free up memory.
Any additional data from resources removed from the cache will be given a new start time.
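For example, the following sketch sets the garbage collection interval to 30 minutes. The interval value and component label are illustrative:

otelcol.processor.metric_start_time "example" {
  gc_interval = "30m"

  output {
    metrics = [...]
  }
}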
Blocks
You can use the following blocks with otelcol.processor.metric_start_time:

- output (required): Configures where to send received telemetry data.
- debug_metrics (optional): Configures the metrics that this component generates to monitor its state.
output
The output block configures a set of components to forward resulting telemetry data to.
The following arguments are supported:

- metrics: A list of otelcol.Consumer values. Metrics are forwarded to the components in this list.
- logs: A list of otelcol.Consumer values. Logs are forwarded to the components in this list.
- traces: A list of otelcol.Consumer values. Traces are forwarded to the components in this list.
You must specify the output block, but all its arguments are optional.
By default, telemetry data is dropped.
Configure the metrics, logs, and traces arguments accordingly to send telemetry data to other components.
debug_metrics
The debug_metrics block configures the metrics that this component generates to monitor its state.
The following arguments are supported:
disable_high_cardinality_metrics is the Alloy equivalent to the telemetry.disableHighCardinalityMetrics feature gate in the OpenTelemetry Collector.
It removes attributes that could cause high cardinality metrics.
For example, attributes with IP addresses and port numbers in metrics about HTTP and gRPC connections are removed.
Note
If configured, disable_high_cardinality_metrics only applies to otelcol.exporter.* and otelcol.receiver.* components.
Exported fields
The following fields are exported and can be referenced by other components:
input accepts otelcol.Consumer data for metrics.
Component health
otelcol.processor.metric_start_time is only reported as unhealthy if given an invalid configuration.
Debug information
otelcol.processor.metric_start_time doesn’t expose any component-specific debug information.
Examples
Basic usage with default strategy
This example uses the default true_reset_point strategy to set start times for Prometheus metrics:
otelcol.receiver.prometheus "default" {
output {
metrics = [otelcol.processor.metric_start_time.default.input]
}
}
otelcol.processor.metric_start_time "default" {
output {
metrics = [otelcol.exporter.otlp.production.input]
}
}
otelcol.exporter.otlp "production" {
client {
endpoint = sys.env("OTLP_SERVER_ENDPOINT")
}
}Using subtract_initial_point strategy
This example uses the subtract_initial_point strategy, which preserves cumulative semantics and produces correct rates:
otelcol.receiver.prometheus "default" {
output {
metrics = [otelcol.processor.metric_start_time.default.input]
}
}
otelcol.processor.metric_start_time "default" {
strategy = "subtract_initial_point"
output {
metrics = [otelcol.exporter.otlp.production.input]
}
}
otelcol.exporter.otlp "production" {
client {
endpoint = sys.env("OTLP_SERVER_ENDPOINT")
}
}Using start_time_metric strategy with custom regex
This example uses the start_time_metric strategy with a custom regex to find the start time metric:
otelcol.receiver.prometheus "default" {
output {
metrics = [otelcol.processor.metric_start_time.default.input]
}
}
otelcol.processor.metric_start_time "default" {
strategy = "start_time_metric"
gc_interval = "1h"
start_time_metric_regex = "^.+_start_time$"
output {
metrics = [otelcol.exporter.otlp.production.input]
}
}
otelcol.exporter.otlp "production" {
client {
endpoint = sys.env("OTLP_SERVER_ENDPOINT")
}
}Compatible components
otelcol.processor.metric_start_time can accept arguments from the following components:
- Components that export OpenTelemetry otelcol.Consumer
otelcol.processor.metric_start_time has exports that can be consumed by the following components:
- Components that consume OpenTelemetry otelcol.Consumer
Note
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.



