otelcol.exporter.loadbalancing

otelcol.exporter.loadbalancing accepts logs and traces from other otelcol components and writes them over the network using the OpenTelemetry Protocol (OTLP).

Note

otelcol.exporter.loadbalancing is a wrapper over the upstream OpenTelemetry Collector loadbalancing exporter. Bug reports or feature requests will be redirected to the upstream repository, if necessary.

Multiple otelcol.exporter.loadbalancing components can be specified by giving them different labels.

The decision about which backend to use depends on the trace ID or the service name. The backend load doesn’t influence the choice. Even though this load-balancer won’t do round-robin balancing of the batches, the load distribution should be very similar among backends, with a standard deviation under 5% at the current configuration.

otelcol.exporter.loadbalancing is especially useful for backends configured with tail-based samplers which choose a backend based on the view of the full trace.

When a list of backends is updated, some of the signals will be rerouted to different backends. Around R/N of the “routes” will be rerouted differently, where:

  • A “route” is either a trace ID or a service name mapped to a certain backend.
  • “R” is the total number of routes.
  • “N” is the total number of backends.

This should be stable enough for most cases, and the larger the number of backends, the less disruption it should cause.

Usage

alloy
otelcol.exporter.loadbalancing "LABEL" {
  resolver {
    ...
  }
  protocol {
    otlp {
      client {}
    }
  }
}

Arguments

otelcol.exporter.loadbalancing supports the following arguments:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
routing_key | string | Routing strategy for load balancing. | "traceID" | no

The routing_key attribute determines how to route signals across endpoints. Its value could be one of the following:

  • "service": spans, logs, and metrics with the same service.name will be exported to the same backend. This is useful when using processors like the span metrics, so all spans for each service are sent to consistent Alloy instances for metric collection. Otherwise, metrics for the same services would be sent to different instances, making aggregations inaccurate.
  • "traceID": spans and logs belonging to the same traceID will be exported to the same backend.
  • "resource": metrics belonging to the same resource will be exported to the same backend.
  • "metric": metrics with the same name will be exported to the same backend.
  • "streamID": metrics with the same streamID will be exported to the same backend.

The load balancer configures the exporter for the signal types supported by the routing_key.
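
For example, a minimal sketch (with hypothetical backend addresses) that routes telemetry by service name, so all spans for a given service.name reach the same backend:

alloy
otelcol.exporter.loadbalancing "by_service" {
  routing_key = "service"

  resolver {
    static {
      hostnames = ["backend-1:4317", "backend-2:4317"]
    }
  }

  protocol {
    otlp {
      client {}
    }
  }
}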

EXPERIMENTAL: Metrics support in otelcol.exporter.loadbalancing is an experimental feature. Experimental features are subject to frequent breaking changes, and may be removed with no equivalent replacement. The stability.level flag must be set to experimental to use the feature.

Blocks

The following blocks are supported inside the definition of otelcol.exporter.loadbalancing:

Hierarchy | Block | Description | Required
----------|-------|-------------|---------
resolver | resolver | Configures discovering the endpoints to export to. | yes
resolver > static | static | Static list of endpoints to export to. | no
resolver > dns | dns | DNS-sourced list of endpoints to export to. | no
resolver > kubernetes | kubernetes | Kubernetes-sourced list of endpoints to export to. | no
resolver > aws_cloud_map | aws_cloud_map | AWS CloudMap-sourced list of endpoints to export to. | no
protocol | protocol | Protocol settings. Only OTLP is supported at the moment. | no
protocol > otlp | otlp | Configures an OTLP exporter. | no
protocol > otlp > client | client | Configures the exporter gRPC client. | no
protocol > otlp > client > tls | tls | Configures TLS for the gRPC client. | no
protocol > otlp > client > keepalive | keepalive | Configures keepalive settings for the gRPC client. | no
protocol > otlp > queue | queue | Configures batching of data before sending. | no
protocol > otlp > retry | retry | Configures retry mechanism for failed requests. | no
debug_metrics | debug_metrics | Configures the metrics that this component generates to monitor its state. | no

The > symbol indicates deeper levels of nesting. For example, resolver > static refers to a static block defined inside a resolver block.

resolver block

The resolver block configures how to retrieve the endpoint to which this exporter will send data.

Inside the resolver block, specify exactly one of the static, dns, kubernetes, or aws_cloud_map blocks. If both dns and static are specified, dns takes precedence.

static block

The static block configures a list of endpoints which this exporter will send data to.

The following arguments are supported:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
hostnames | list(string) | List of endpoints to export to. | | yes

dns block

The dns block periodically resolves an IP address via the DNS hostname attribute. This IP address and the port specified via the port attribute will then be used by the gRPC exporter as the endpoint to which to export data.

The following arguments are supported:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
hostname | string | DNS hostname to resolve. | | yes
interval | duration | Resolver interval. | "5s" | no
timeout | duration | Resolver timeout. | "1s" | no
port | string | Port to be used with the IP addresses resolved from the DNS hostname. | "4317" | no

kubernetes block

You can use the kubernetes block to load balance across the pods of a Kubernetes service. The Kubernetes API notifies Alloy whenever a new pod is added or removed from the service. The kubernetes resolver has a much faster response time than the dns resolver because it doesn’t require polling.

The following arguments are supported:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
service | string | Kubernetes service to resolve. | | yes
ports | list(number) | Ports to use with the IP addresses resolved from service. | [4317] | no
timeout | duration | Resolver timeout. | "1s" | no

If no namespace is specified inside service, an attempt will be made to infer the namespace for this Alloy. If this fails, the default namespace will be used.

Each of the ports listed in ports will be used with each of the IPs resolved from service.

The “get”, “list”, and “watch” roles must be granted in Kubernetes for the resolver to work.

aws_cloud_map block

The aws_cloud_map block allows you to use otelcol.exporter.loadbalancing when running ECS rather than EKS in an AWS infrastructure.

The following arguments are supported:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
namespace | string | The CloudMap namespace where the service is registered. | | yes
service_name | string | The name of the service which was specified when registering the instance. | | yes
interval | duration | Resolver interval. | "30s" | no
timeout | duration | Resolver timeout. | "5s" | no
health_status | string | Health status of the instances that should be returned by the resolver. | "HEALTHY" | no
port | number | Port to be used for exporting the traces to the addresses resolved from service. | null | no

health_status can be set to one of the following values:

  • HEALTHY: Only return instances that are healthy.
  • UNHEALTHY: Only return instances that are unhealthy.
  • ALL: Return all instances, regardless of their health status.
  • HEALTHY_OR_ELSE_ALL: Return healthy instances, unless none are reporting a healthy state. In that case, return all instances. This is also called failing open.

If port is not set, a default port defined in CloudMap will be used.
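
As an illustration, a hedged resolver sketch (the namespace and service name below are hypothetical) could look like this:

alloy
otelcol.exporter.loadbalancing "cloud_map" {
  resolver {
    aws_cloud_map {
      namespace    = "my-cloudmap-namespace"
      service_name = "alloy-traces-sampling"
      port         = 4317
    }
  }

  protocol {
    otlp {
      client {}
    }
  }
}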

Note

The aws_cloud_map resolver returns a maximum of 100 hosts. A feature request aims to cover pagination for this scenario.

protocol block

The protocol block configures protocol-related settings for exporting. At the moment only the OTLP protocol is supported.

otlp block

The otlp block configures OTLP-related settings for exporting.

client block

The client block configures the gRPC client used by the component. The endpoints used by the client block are the ones from the resolver block.

The following arguments are supported:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
compression | string | Compression mechanism to use for requests. | "gzip" | no
read_buffer_size | string | Size of the read buffer the gRPC client uses for reading server responses. | | no
write_buffer_size | string | Size of the write buffer the gRPC client uses for writing requests. | "512KiB" | no
wait_for_ready | boolean | Waits for gRPC connection to be in the READY state before sending data. | false | no
headers | map(string) | Additional headers to send with the request. | {} | no
balancer_name | string | Which gRPC client-side load balancer to use for requests. | round_robin | no
authority | string | Overrides the default :authority header in gRPC requests from the gRPC client. | | no
auth | capsule(otelcol.Handler) | Handler from an otelcol.auth component to use for authenticating requests. | | no

By default, requests are compressed with Gzip. The compression argument controls which compression mechanism to use. Supported strings are:

  • "gzip"
  • "zlib"
  • "deflate"
  • "snappy"
  • "zstd"

If you set compression to "none" or an empty string "", the requests aren’t compressed.

The supported values for balancer_name are listed in the gRPC documentation on Load balancing:

  • pick_first: Tries to connect to the first address, uses it for all RPCs if it connects, or tries the next address if it fails (and keeps doing that until one connection is successful). Because of this, all the RPCs will be sent to the same backend.
  • round_robin: Connects to all the addresses it sees and sends an RPC to each backend one at a time in order. For example, the first RPC is sent to backend-1, the second RPC is sent to backend-2, and the third RPC is sent to backend-1.

The :authority header in gRPC specifies the host to which the request is being sent. It’s similar to the Host header in HTTP requests. By default, the value for :authority is derived from the endpoint URL used for the gRPC call. Overriding :authority could be useful when routing traffic using a proxy like Envoy, which makes routing decisions based on the value of the :authority header.
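
For illustration, a hedged sketch of a client block (inside protocol > otlp) that switches compression to zstd, uses the pick_first balancer, and overrides :authority with a hypothetical hostname:

alloy
protocol {
  otlp {
    client {
      compression   = "zstd"
      balancer_name = "pick_first"
      authority     = "backend.example.com"
    }
  }
}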

You can configure an HTTP proxy with the following environment variables:

  • HTTPS_PROXY
  • NO_PROXY

The HTTPS_PROXY environment variable specifies a URL to use for proxying requests. Connections to the proxy are established via the HTTP CONNECT method.

The NO_PROXY environment variable is an optional list of comma-separated hostnames for which the HTTPS proxy should not be used. Each hostname can be provided as an IP address (1.2.3.4), an IP address in CIDR notation (1.2.3.4/8), a domain name (example.com), or *. A domain name matches that domain and all subdomains. A domain name with a leading “.” (.example.com) matches subdomains only. NO_PROXY is only read when HTTPS_PROXY is set.

Because otelcol.exporter.loadbalancing uses gRPC, the configured proxy server must be able to handle and proxy HTTP/2 traffic.

tls block

The tls block configures TLS settings used for the connection to the gRPC server.

The following arguments are supported:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
ca_file | string | Path to the CA file. | | no
ca_pem | string | CA PEM-encoded text to validate the server with. | | no
cert_file | string | Path to the TLS certificate. | | no
cert_pem | string | Certificate PEM-encoded text for client authentication. | | no
insecure_skip_verify | boolean | Ignores insecure server TLS certificates. | | no
include_system_ca_certs_pool | boolean | Whether to load the system certificate authorities pool alongside the certificate authority. | false | no
insecure | boolean | Disables TLS when connecting to the configured server. | | no
key_file | string | Path to the TLS certificate key. | | no
key_pem | secret | Key PEM-encoded text for client authentication. | | no
max_version | string | Maximum acceptable TLS version for connections. | "TLS 1.3" | no
min_version | string | Minimum acceptable TLS version for connections. | "TLS 1.2" | no
cipher_suites | list(string) | A list of TLS cipher suites that the TLS transport can use. | [] | no
reload_interval | duration | The duration after which the certificate is reloaded. | "0s" | no
server_name | string | Verifies the hostname of server certificates when set. | | no

If the server doesn’t support TLS, or to disable TLS for connections to the server, set the insecure argument to true.

If reload_interval is set to "0s", the certificate is never reloaded.

The following pairs of arguments are mutually exclusive and can’t both be set simultaneously:

  • ca_pem and ca_file
  • cert_pem and cert_file
  • key_pem and key_file

If cipher_suites is left blank, a safe default list is used. Refer to the Go TLS documentation for a list of supported cipher suites.
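
For example, a hedged sketch (the paths and server name are hypothetical) of TLS settings inside the client block:

alloy
client {
  tls {
    ca_file     = "/etc/alloy/certs/ca.pem"
    min_version = "TLS 1.2"
    server_name = "backend.example.com"
  }
}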

keepalive block

The keepalive block configures keepalive settings for gRPC client connections.

The following arguments are supported:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
ping_wait | duration | How often to ping the server after no activity. | | no
ping_response_timeout | duration | Time to wait before closing inactive connections if the server does not respond to a ping. | | no
ping_without_stream | boolean | Send pings even if there is no active stream request. | | no
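
A hedged sketch (values chosen only for illustration) of keepalive settings inside the client block:

alloy
client {
  keepalive {
    ping_wait             = "30s"
    ping_response_timeout = "10s"
    ping_without_stream   = true
  }
}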

queue block

The queue block configures an in-memory buffer of batches before data is sent to the gRPC server.

The following arguments are supported:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
enabled | boolean | Enables an in-memory buffer before sending data to the client. | true | no
num_consumers | number | Number of readers to send batches written to the queue in parallel. | 10 | no
queue_size | number | Maximum number of unwritten batches allowed in the queue at the same time. | 1000 | no

When enabled is true, data is first written to an in-memory buffer before sending it to the configured server. Batches sent to the component’s input exported field are added to the buffer as long as the number of unsent batches doesn’t exceed the configured queue_size.

queue_size determines how long an endpoint outage is tolerated. Assuming 100 requests/second, the default queue size 1000 provides about 10 seconds of outage tolerance. To calculate the correct value for queue_size, multiply the average number of outgoing requests per second by the time in seconds that outages are tolerated. A very high value can cause Out Of Memory (OOM) kills.

The num_consumers argument controls how many readers read from the buffer and send data in parallel. Larger values of num_consumers allow data to be sent more quickly at the expense of increased network traffic.
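
As a hedged illustration of that sizing rule, tolerating roughly 30 seconds of outage at an assumed 100 requests per second implies a queue_size of about 100 * 30 = 3000, configured inside the otlp block:

alloy
queue {
  enabled       = true
  num_consumers = 10
  queue_size    = 3000
}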

retry block

The retry block configures how failed requests to the gRPC server are retried.

The following arguments are supported:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
enabled | boolean | Enables retrying failed requests. | true | no
initial_interval | duration | Initial time to wait before retrying a failed request. | "5s" | no
max_elapsed_time | duration | Maximum time to wait before discarding a failed batch. | "5m" | no
max_interval | duration | Maximum time to wait between retries. | "30s" | no
multiplier | number | Factor to grow wait time before retrying. | 1.5 | no
randomization_factor | number | Factor to randomize wait time before retrying. | 0.5 | no

When enabled is true, failed batches are retried after a given interval. The initial_interval argument specifies how long to wait before the first retry attempt. If requests continue to fail, the time to wait before retrying increases by the factor specified by the multiplier argument, which must be greater than 1.0. The max_interval argument specifies the upper bound of how long to wait between retries.

The randomization_factor argument is useful for adding jitter between retrying Alloy instances. If randomization_factor is greater than 0, the wait time before retries is multiplied by a random factor in the range [ I - randomization_factor * I, I + randomization_factor * I], where I is the current interval.

If a batch hasn’t been sent successfully, it’s discarded after the time specified by max_elapsed_time elapses. If max_elapsed_time is set to "0s", failed requests are retried forever until they succeed.
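
For example, a hedged retry sketch (values chosen for illustration) inside the otlp block that backs off more aggressively and gives up after 10 minutes:

alloy
retry {
  enabled          = true
  initial_interval = "10s"
  max_interval     = "1m"
  max_elapsed_time = "10m"
  multiplier       = 2.0
}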

debug_metrics block

The debug_metrics block configures the metrics that this component generates to monitor its state.

The following arguments are supported:

Name | Type | Description | Default | Required
-----|------|-------------|---------|---------
disable_high_cardinality_metrics | boolean | Whether to disable certain high cardinality metrics. | true | no
level | string | Controls the level of detail for metrics emitted by the wrapped collector. | "detailed" | no

disable_high_cardinality_metrics is the Grafana Alloy equivalent to the telemetry.disableHighCardinalityMetrics feature gate in the OpenTelemetry Collector. It removes attributes that could cause high cardinality metrics. For example, attributes with IP addresses and port numbers in metrics about HTTP and gRPC connections are removed.

Note

If configured, disable_high_cardinality_metrics only applies to otelcol.exporter.* and otelcol.receiver.* components.

level is the Alloy equivalent to the telemetry.metrics.level feature gate in the OpenTelemetry Collector. Possible values are "none", "basic", "normal" and "detailed".
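
For instance, a sketch that keeps high-cardinality metrics but reduces the level of detail, placed at the top level of the component definition:

alloy
debug_metrics {
  disable_high_cardinality_metrics = false
  level                            = "basic"
}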

Exported fields

The following fields are exported and can be referenced by other components:

Name | Type | Description
-----|------|------------
input | otelcol.Consumer | A value that other components can use to send telemetry data to.

input accepts OTLP-formatted data from other otelcol components for telemetry signals of the following types:

  • logs
  • traces

Choose a load balancing strategy

Different Alloy components require different load-balancing strategies. The use of otelcol.exporter.loadbalancing is only necessary for stateful components.

otelcol.processor.tail_sampling

All spans for a given trace ID must go to the same tail sampling Alloy instance.

  • This can be done by configuring otelcol.exporter.loadbalancing with routing_key = "traceID".
  • If you do not configure routing_key = "traceID", the sampling decision may be incorrect. The tail sampler must have a full view of the trace when making a sampling decision. For example, a rate_limiting tail sampling strategy may incorrectly pass through more spans than expected if the spans for the same trace are spread out to more than one Alloy instance.

otelcol.connector.spanmetrics

All spans for a given service.name must go to the same spanmetrics Alloy instance.

  • This can be done by configuring otelcol.exporter.loadbalancing with routing_key = "service".
  • If you do not configure routing_key = "service", metrics generated from spans might be incorrect. For example, if similar spans for the same service.name end up on different Alloy instances, the two Alloys will have identical metric series for calculating span latency, errors, and number of requests. When both Alloy instances attempt to write the metrics to a database such as Mimir, the series may clash with each other. At best, this will lead to an error in Alloy and a rejected write to the metrics database. At worst, it could lead to inaccurate data due to overlapping samples for the metric series.

However, there are ways to scale otelcol.connector.spanmetrics without the need for a load balancer:

  1. Each Alloy instance could add an attribute such as collector.id in order to make its series unique. Then, for example, you could use a sum by PromQL query to aggregate the metrics from different Alloy instances. The downside of an extra collector.id attribute is that the metrics stored in the database will have higher cardinality. A hedged configuration sketch of this approach follows this list.
  2. Spanmetrics could be generated in the backend database instead of in Alloy. For example, span metrics can be generated in Grafana Cloud by the Tempo traces database.
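
A hedged sketch of the first approach, assuming the otelcol.processor.attributes component and the constants.hostname standard library value are available, and that metrics are forwarded to a hypothetical otelcol.exporter.prometheus.default component:

alloy
otelcol.processor.attributes "collector_id" {
  // Insert a collector.id attribute so series from different Alloy instances stay unique.
  action {
    key    = "collector.id"
    value  = constants.hostname
    action = "insert"
  }

  output {
    metrics = [otelcol.exporter.prometheus.default.input]
  }
}

The metrics output of otelcol.connector.spanmetrics would then point to otelcol.processor.attributes.collector_id.input.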

otelcol.connector.servicegraph

It is challenging to scale otelcol.connector.servicegraph over multiple Alloy instances. For otelcol.connector.servicegraph to work correctly, each “client” span must be paired with a “server” span to calculate metrics such as span duration. If a “client” span goes to one Alloy, but a “server” span goes to another Alloy, then no single Alloy will be able to pair the spans and a metric won’t be generated.

otelcol.exporter.loadbalancing can solve this problem partially if it is configured with routing_key = "traceID". Each Alloy will then be able to calculate a service graph for each “client”/“server” pair in a trace. It is possible to have a span with similar “server”/“client” values in a different trace, processed by another Alloy. If two different Alloy instances process similar “server”/“client” spans, they will generate the same service graph metric series. If the series from two Alloy instances are the same, this will lead to issues when writing them to the backend database. You could differentiate the series by adding an attribute such as "collector.id". The series from different Alloy instances can be aggregated using PromQL queries on the backend metrics database. If the metrics are stored in Grafana Mimir, cardinality issues due to "collector.id" labels can be solved using Adaptive Metrics.

A simpler, more scalable alternative to generating service graph metrics in Alloy is to generate them entirely in the backend database. For example, service graphs can be generated in Grafana Cloud by the Tempo traces database.

Mixing stateful components

Different Alloy components may require a different routing_key for otelcol.exporter.loadbalancing. For example, otelcol.processor.tail_sampling requires routing_key = "traceID" whereas otelcol.connector.spanmetrics requires routing_key = "service". To load balance both types of components, two different sets of load balancers have to be set up:

  • One set of otelcol.exporter.loadbalancing with routing_key = "traceID", sending spans to Alloys doing tail sampling and no span metrics.
  • Another set of otelcol.exporter.loadbalancing with routing_key = "service", sending spans to Alloys doing span metrics and no service graphs.

Unfortunately, this can also lead to side effects. For example, if otelcol.connector.spanmetrics is configured to generate exemplars, the tail sampling Alloys might drop the trace that the exemplar points to. There is no coordination between the tail sampling Alloys and the span metrics Alloys to make sure trace IDs for exemplars are kept.
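
A minimal sketch of this setup, assuming hypothetical backend addresses and a single otelcol.receiver.otlp feeding both load balancers:

alloy
otelcol.receiver.otlp "default" {
  grpc {}

  output {
    traces = [
      otelcol.exporter.loadbalancing.tail_sampling.input,
      otelcol.exporter.loadbalancing.span_metrics.input
    ]
  }
}

otelcol.exporter.loadbalancing "tail_sampling" {
  routing_key = "traceID"

  resolver {
    static {
      hostnames = ["sampling-1:4317", "sampling-2:4317"]
    }
  }

  protocol {
    otlp {
      client {}
    }
  }
}

otelcol.exporter.loadbalancing "span_metrics" {
  routing_key = "service"

  resolver {
    static {
      hostnames = ["spanmetrics-1:4317", "spanmetrics-2:4317"]
    }
  }

  protocol {
    otlp {
      client {}
    }
  }
}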

Component health

otelcol.exporter.loadbalancing is only reported as unhealthy if given an invalid configuration.

Debug information

otelcol.exporter.loadbalancing does not expose any component-specific debug information.

Examples

Static resolver

This example accepts OTLP logs and traces over gRPC. It then sends them in a load-balanced way to “localhost:55690” or “localhost:55700”.

alloy
otelcol.receiver.otlp "default" {
    grpc {}
    output {
        traces  = [otelcol.exporter.loadbalancing.default.input]
        logs    = [otelcol.exporter.loadbalancing.default.input]
    }
}

otelcol.exporter.loadbalancing "default" {
    resolver {
        static {
            hostnames = ["localhost:55690", "localhost:55700"]
        }
    }
    protocol {
        otlp {
            client {}
        }
    }
}

DNS resolver

When configured with a dns resolver, otelcol.exporter.loadbalancing performs a DNS lookup at regular intervals. Spans are exported to the addresses returned by the DNS lookup.

alloy
otelcol.exporter.loadbalancing "default" {
    resolver {
        dns {
            hostname = "alloy-traces-sampling.grafana-cloud-monitoring.svc.cluster.local"
            port     = "34621"
            interval = "5s"
            timeout  = "1s"
        }
    }
    protocol {
        otlp {
            client {}
        }
    }
}

The following example shows a Kubernetes configuration that configures two sets of Alloys:

  • A pool of load-balancer Alloys:
    • Spans are received from instrumented applications via otelcol.receiver.otlp.
    • Spans are exported via otelcol.exporter.loadbalancing.
  • A pool of sampling Alloys:
    • The sampling Alloys run behind a headless service to enable the load-balancer Alloys to discover them.
    • Spans are received from the load-balancer Alloys via otelcol.receiver.otlp.
    • Traces are sampled via otelcol.processor.tail_sampling.
    • The traces are exported via otelcol.exporter.otlp to an OTLP-compatible database such as Tempo.

You must fill in the correct OTLP credentials prior to running the example. You can use k3d to start the example:

bash
k3d cluster create alloy-lb-test
kubectl apply -f kubernetes_config.yaml

To delete the cluster, run:

bash
k3d cluster delete alloy-lb-test

Kubernetes resolver

When you configure otelcol.exporter.loadbalancing with a kubernetes resolver, the Kubernetes API notifies Alloy whenever a new pod is added or removed from the service. Spans are exported to the addresses from the Kubernetes API, combined with all the possible ports.

alloy
otelcol.exporter.loadbalancing "default" {
    resolver {
        kubernetes {
            service = "alloy-traces-headless"
            ports   = [ 34621 ]
        }
    }
    protocol {
        otlp {
            client {}
        }
    }
}

The following example shows a Kubernetes configuration that sets up two sets of Alloys:

  • A pool of load-balancer Alloys:
    • Spans are received from instrumented applications via otelcol.receiver.otlp.
    • Spans are exported via otelcol.exporter.loadbalancing.
    • The load-balancer Alloys are notified by the Kubernetes API whenever a pod is added to or removed from the pool of sampling Alloys.
  • A pool of sampling Alloys:
    • The sampling Alloys do not need to run behind a headless service.
    • Spans are received from the load-balancer Alloys via otelcol.receiver.otlp.
    • Traces are sampled via otelcol.processor.tail_sampling.
    • The traces are exported via otelcol.exporter.otlp to an OTLP-compatible database such as Tempo.

You must fill in the correct OTLP credentials prior to running the example. You can use k3d to start the example:

bash
k3d cluster create alloy-lb-test
kubectl apply -f kubernetes_config.yaml

To delete the cluster, run:

bash
k3d cluster delete alloy-lb-test

Compatible components

otelcol.exporter.loadbalancing has exports that can be consumed by the following components:

Note

Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.