---
title: "otelcol.exporter.datadog | Grafana Alloy documentation"
description: "Learn about otelcol.exporter.datadog"
---

# `otelcol.exporter.datadog`

> **Community**: This component is developed, maintained, and supported by the Alloy user community. Grafana doesn’t offer commercial support for this component. To enable and use community components, you must set the `--feature.community-components.enabled` [flag](/docs/alloy/latest/reference/cli/run/) to `true`.
> 
> Refer to [Community components](../../../../get-started/components/community-components/) for more information.

`otelcol.exporter.datadog` accepts metrics and traces telemetry data from other `otelcol` components and sends it to Datadog.

> Note
> 
> `otelcol.exporter.datadog` is a wrapper over the upstream OpenTelemetry Collector [`datadog`](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.147.0/exporter/datadogexporter) exporter. Bug reports or feature requests will be redirected to the upstream repository, if necessary.

You can specify multiple `otelcol.exporter.datadog` components by giving them different labels.

## Usage


```alloy
otelcol.exporter.datadog "<LABEL>" {
    api {
        api_key = "<YOUR_API_KEY_HERE>"
    }
}
```

## Arguments

You can use the following arguments with `otelcol.exporter.datadog`:


| Name                         | Type       | Description                                                                      | Default | Required |
|------------------------------|------------|----------------------------------------------------------------------------------|---------|----------|
| `hostname`                   | `string`   | The fallback hostname used for payloads without hostname-identifying attributes. |         | no       |
| `hostname_detection_timeout` | `duration` | The timeout for hostname detection.                                              | `25s`   | no       |
| `only_metadata`              | `bool`     | Whether to send only metadata.                                                   | `false` | no       |

If `hostname` is unset, the hostname is determined automatically. For more information, refer to the Datadog [Fallback hostname logic](https://docs.datadoghq.com/opentelemetry/schema_semantics/hostname/?tab=datadogexporter#fallback-hostname-logic) documentation. This option won’t change the hostname applied to metrics or traces if they already have hostname-identifying attributes.
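For example, a minimal sketch that pins a fallback hostname and shortens the detection timeout; the hostname and API key shown are placeholders:

```alloy
otelcol.exporter.datadog "default" {
    hostname                   = "my-fallback-host"
    hostname_detection_timeout = "10s"

    api {
        api_key = "<YOUR_API_KEY_HERE>"
    }
}
```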

## Blocks

You can use the following blocks with `otelcol.exporter.datadog`:


### `api`

The `api` block configures authentication with the Datadog API. This block is required: without it, the exporter can't send telemetry to Datadog.

The following arguments are supported:


| Name                  | Type     | Description                                           | Default           | Required |
|-----------------------|----------|-------------------------------------------------------|-------------------|----------|
| `api_key`             | `secret` | API key for Datadog.                                  |                   | yes      |
| `fail_on_invalid_key` | `bool`   | Whether to exit at startup on an invalid API key.     | `false`           | no       |
| `site`                | `string` | The site of the Datadog intake to send Agent data to. | `"datadoghq.com"` | no       |
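For example, a sketch that sends data to the EU Datadog site and fails fast on a bad key; the API key is a placeholder:

```alloy
otelcol.exporter.datadog "eu" {
    api {
        api_key             = "<YOUR_API_KEY_HERE>"
        site                = "datadoghq.eu"
        fail_on_invalid_key = true
    }
}
```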

### `client`

The `client` block configures the HTTP client used by the component. Not all fields are supported by the Datadog Exporter.

The following arguments are supported:


| Name                      | Type       | Description                                                                 | Default | Required |
|---------------------------|------------|-----------------------------------------------------------------------------|---------|----------|
| `disable_keep_alives`     | `bool`     | Disable HTTP keep-alive.                                                    |         | no       |
| `idle_conn_timeout`       | `duration` | Time to wait before an idle connection closes itself.                       | `"45s"` | no       |
| `insecure_skip_verify`    | `bool`     | Ignores insecure server TLS certificates.                                   |         | no       |
| `max_conns_per_host`      | `int`      | Limits the total (dialing, active, and idle) number of connections per host. |         | no       |
| `max_idle_conns_per_host` | `int`      | Limits the number of idle HTTP connections the host can keep open.          | `5`     | no       |
| `max_idle_conns`          | `int`      | Limits the number of idle HTTP connections the client can keep open.        | `100`   | no       |
| `read_buffer_size`        | `string`   | Size of the read buffer the HTTP client uses for reading server responses.  |         | no       |
| `timeout`                 | `duration` | Time to wait before marking a request as failed.                            | `"15s"` | no       |
| `write_buffer_size`       | `string`   | Size of the write buffer the HTTP client uses for writing requests.         |         | no       |

### `debug_metrics`

The `debug_metrics` block configures the metrics that this component generates to monitor its state.

The following arguments are supported:


| Name                               | Type      | Description                                          | Default | Required |
|------------------------------------|-----------|------------------------------------------------------|---------|----------|
| `disable_high_cardinality_metrics` | `boolean` | Whether to disable certain high cardinality metrics. | `true`  | no       |

`disable_high_cardinality_metrics` is the Alloy equivalent to the `telemetry.disableHighCardinalityMetrics` feature gate in the OpenTelemetry Collector. It removes attributes that could cause high cardinality metrics. For example, attributes with IP addresses and port numbers in metrics about HTTP and gRPC connections are removed.

> Note
> 
> If configured, `disable_high_cardinality_metrics` only applies to `otelcol.exporter.*` and `otelcol.receiver.*` components.

### `host_metadata`

The `host_metadata` block configures host metadata settings. Host metadata is the information used to populate the infrastructure list and the host map, and to provide host tags functionality within the Datadog app.

The following arguments are supported:


| Name              | Type           | Description                                                | Default              | Required |
|-------------------|----------------|------------------------------------------------------------|----------------------|----------|
| `enabled`         | `bool`         | Enable the host metadata functionality.                    | `true`               | no       |
| `hostname_source` | `string`       | Source for the hostname of host metadata.                  | `"config_or_system"` | no       |
| `tags`            | `list(string)` | List of host tags to be sent as part of the host metadata. |                      | no       |

By default, the exporter only sends host metadata for a single host, whose name is chosen according to the `hostname_source` argument.

Valid values for `hostname_source` are:

- `"first_resource"` picks the host metadata hostname from the resource attributes on the first OTLP payload that gets to the exporter. If the first payload lacks hostname-like attributes, it falls back to the `config_or_system` behavior. **Don't use this hostname source if receiving data from multiple hosts**.
- `"config_or_system"` picks the host metadata hostname from the `hostname` setting, falling back to system and cloud provider APIs.
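For example, a sketch that pins the host metadata hostname source and adds custom host tags; the tag values and API key are illustrative placeholders:

```alloy
otelcol.exporter.datadog "default" {
    api {
        api_key = "<YOUR_API_KEY_HERE>"
    }

    host_metadata {
        hostname_source = "config_or_system"
        tags            = ["env:prod", "team:platform"]
    }
}
```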

### `logs`

The `logs` block configures the logs exporter settings.

The following arguments are supported:


| Name                | Type     | Description                                                                                                                                   | Default                                    | Required |
|---------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|----------|
| `batch_wait`        | `int`    | The maximum time in seconds the logs agent waits to fill each batch of logs before sending.                                                   | `5`                                        | no       |
| `compression_level` | `int`    | Accepts values from 0 (no compression) to 9 (maximum compression but higher resource usage). Only used if `use_compression` is set to `true`. | `6`                                        | no       |
| `endpoint`          | `string` | The host of the Datadog intake server to send logs to.                                                                                        | `"https://http-intake.logs.datadoghq.com"` | no       |
| `use_compression`   | `bool`   | Available when sending logs via HTTPS. Compresses logs if enabled.                                                                            | `true`                                     | no       |

If `use_compression` is disabled, `compression_level` has no effect.

If `endpoint` is unset, the value is obtained through the `site` parameter in the [`api`](#api) block.
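For example, a sketch that waits longer before flushing log batches and increases the compression level; the values and API key are illustrative:

```alloy
otelcol.exporter.datadog "default" {
    api {
        api_key = "<YOUR_API_KEY_HERE>"
    }

    logs {
        batch_wait        = 10
        use_compression   = true
        compression_level = 9
    }
}
```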

### `metrics`

The `metrics` block configures metric-specific exporter settings.

The following arguments are supported:


| Name        | Type     | Description                                                             | Default                       | Required |
|-------------|----------|-------------------------------------------------------------------------|-------------------------------|----------|
| `delta_ttl` | `number` | The number of seconds values are kept in memory for calculating deltas. | `3600`                        | no       |
| `endpoint`  | `string` | The host of the Datadog intake server to send metrics to.               | `"https://api.datadoghq.com"` | no       |

Resource attributes in the [semantic mapping list](https://docs.datadoghq.com/opentelemetry/guide/semantic_mapping/) are converted to Datadog conventions and set as metric tags, whether or not `resource_attributes_as_tags` is enabled.

If `endpoint` is unset, the value is obtained through the `site` parameter in the [`api`](#api) block.

### `exporter`

The `exporter` block configures the metric exporter settings.

The following arguments are supported:


| Name                                     | Type   | Description                                                                               | Default | Required |
|------------------------------------------|--------|-------------------------------------------------------------------------------------------|---------|----------|
| `instrumentation_scope_metadata_as_tags` | `bool` | Set to `false` to not add metadata about the instrumentation scope that created a metric. | `true`  | no       |
| `resource_attributes_as_tags`            | `bool` | Set to `true` to add resource attributes of a metric to its metric tags.                  | `false` | no       |

### `histograms`

The `histograms` block configures the histogram settings.

The following arguments are supported:


| Name                       | Type     | Description                                                               | Default           | Required |
|----------------------------|----------|---------------------------------------------------------------------------|-------------------|----------|
| `mode`                     | `string` | How to report histograms.                                                 | `"distributions"` | no       |
| `send_aggregation_metrics` | `bool`   | Whether to report sum, count, min, and max as separate histogram metrics. | `false`           | no       |

Valid values for `mode` are:

- `"distributions"` to report metrics as Datadog distributions (recommended).
- `"nobuckets"` to not report bucket metrics.
- `"counters"` to report one metric per histogram bucket.
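For example, a sketch that keeps the recommended distributions mode but also reports the aggregation metrics; the `histograms` block is nested inside `metrics`, and the API key is a placeholder:

```alloy
otelcol.exporter.datadog "default" {
    api {
        api_key = "<YOUR_API_KEY_HERE>"
    }

    metrics {
        histograms {
            mode                     = "distributions"
            send_aggregation_metrics = true
        }
    }
}
```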

### `summaries`

The `summaries` block configures the summary settings.

The following arguments are supported:


| Name   | Type     | Description              | Default    | Required |
|--------|----------|--------------------------|------------|----------|
| `mode` | `string` | How to report summaries. | `"gauges"` | no       |

Valid values for `mode` are:

- `"noquantiles"` to not report quantile metrics.
- `"gauges"` to report one gauge metric per quantile.

### `sums`

The `sums` block configures the sums settings.

The following arguments are supported:


| Name                                 | Type     | Description                                                    | Default      | Required |
|--------------------------------------|----------|----------------------------------------------------------------|--------------|----------|
| `cumulative_monotonic_mode`          | `string` | How to report cumulative monotonic sums.                       | `"to_delta"` | no       |
| `initial_cumulative_monotonic_value` | `string` | How to report the initial value for cumulative monotonic sums. | `"auto"`     | no       |

Valid values for `cumulative_monotonic_mode` are:

- `"to_delta"` to calculate the delta for the sum client-side and report it as Datadog counts.
- `"raw_value"` to report the raw value as a Datadog gauge.

Valid values for `initial_cumulative_monotonic_value` are:

- `"auto"` reports the initial value if its start timestamp is set and occurs after the process started.
- `"drop"` always drops the initial value.
- `"keep"` always reports the initial value.
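For example, a sketch that keeps the default delta conversion but always reports the initial value; the `sums` block is nested inside `metrics`, and the API key is a placeholder:

```alloy
otelcol.exporter.datadog "default" {
    api {
        api_key = "<YOUR_API_KEY_HERE>"
    }

    metrics {
        sums {
            cumulative_monotonic_mode          = "to_delta"
            initial_cumulative_monotonic_value = "keep"
        }
    }
}
```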

### `retry_on_failure`

The `retry_on_failure` block configures how failed requests to Datadog are retried.

The following arguments are supported:


| Name                   | Type       | Description                                            | Default | Required |
|------------------------|------------|--------------------------------------------------------|---------|----------|
| `enabled`              | `boolean`  | Enables retrying failed requests.                      | `true`  | no       |
| `initial_interval`     | `duration` | Initial time to wait before retrying a failed request. | `"5s"`  | no       |
| `max_elapsed_time`     | `duration` | Maximum time to wait before discarding a failed batch. | `"5m"`  | no       |
| `max_interval`         | `duration` | Maximum time to wait between retries.                  | `"30s"` | no       |
| `multiplier`           | `number`   | Factor to grow wait time before retrying.              | `1.5`   | no       |
| `randomization_factor` | `number`   | Factor to randomize wait time before retrying.         | `0.5`   | no       |

When `enabled` is `true`, failed batches are retried after a given interval. The `initial_interval` argument specifies how long to wait before the first retry attempt. If requests continue to fail, the time to wait before retrying increases by the factor specified by the `multiplier` argument, which must be greater than `1.0`. The `max_interval` argument specifies the upper bound of how long to wait between retries.

The `randomization_factor` argument is useful for adding jitter between retrying Alloy instances. If `randomization_factor` is greater than `0`, the wait time before retries is multiplied by a random factor in the range `[ I - randomization_factor * I, I + randomization_factor * I]`, where `I` is the current interval.

If a batch hasn’t been sent successfully, it’s discarded after the time specified by `max_elapsed_time` elapses. If `max_elapsed_time` is set to `"0s"`, failed requests are retried forever until they succeed.
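For example, a sketch that retries more aggressively while capping total retry time; the values and API key are illustrative:

```alloy
otelcol.exporter.datadog "default" {
    api {
        api_key = "<YOUR_API_KEY_HERE>"
    }

    retry_on_failure {
        initial_interval = "2s"
        max_interval     = "10s"
        max_elapsed_time = "2m"
    }
}
```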

### `sending_queue`

The `sending_queue` block configures queueing and batching for the exporter.

The following arguments are supported:


| Name                | Type                       | Description                                                                                | Default      | Required |
|---------------------|----------------------------|--------------------------------------------------------------------------------------------|--------------|----------|
| `block_on_overflow` | `boolean`                  | Whether to block when the queue's size limit is reached.                           | `false`      | no       |
| `enabled`           | `boolean`                  | Enables a buffer before sending data to the client.                                        | `true`       | no       |
| `num_consumers`     | `number`                   | Number of readers to send batches written to the queue in parallel.                        | `10`         | no       |
| `queue_size`        | `number`                   | Maximum number of unwritten batches allowed in the queue at the same time.                 | `1000`       | no       |
| `sizer`             | `string`                   | How the queue and batching is measured.                                                    | `"requests"` | no       |
| `wait_for_result`   | `boolean`                  | Determines if incoming requests are blocked until the request is processed or not.         | `false`      | no       |
| `storage`           | `capsule(otelcol.Handler)` | Handler from an `otelcol.storage` component to use to enable a persistent queue mechanism. |              | no       |

The `blocking` argument is deprecated in favor of the `block_on_overflow` argument.

When `block_on_overflow` is `true`, the component will wait for space. Otherwise, operations will immediately return a retryable error.

When `enabled` is `true`, data is first written to an in-memory buffer before sending it to the configured server. Batches sent to the component’s `input` exported field are added to the buffer as long as the number of unsent batches doesn’t exceed the configured `queue_size`.

`queue_size` determines how long an endpoint outage is tolerated. Assuming 100 requests/second, the default queue size `1000` provides about 10 seconds of outage tolerance. To calculate the correct value for `queue_size`, multiply the average number of outgoing requests per second by the time in seconds that outages are tolerated. A very high value can cause Out Of Memory (OOM) kills.

The `sizer` argument can be set to:

- `requests`: number of incoming batches of metrics, logs, traces (the most performant option).
- `items`: number of the smallest parts of each signal (spans, metric data points, log records).
- `bytes`: the size of serialized data in bytes (the least performant option).

The `num_consumers` argument controls how many readers read from the buffer and send data in parallel. Larger values of `num_consumers` allow data to be sent more quickly at the expense of increased network traffic.

If an `otelcol.storage.*` component is configured and provided in the queue’s `storage` argument, the queue uses the provided storage extension to provide a persistent queue and the queue is no longer stored in memory. Any data persisted will be processed on startup if Alloy is killed or restarted. Refer to the [exporterhelper documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/v0.147.0/exporter/exporterhelper/README.md#persistent-queue) in the OpenTelemetry Collector repository for more details.
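Following the sizing rule above, an instance sending roughly 500 requests per second that should tolerate about 10 seconds of outage would need a `queue_size` of 5000. A sketch with these illustrative values and a placeholder key:

```alloy
otelcol.exporter.datadog "default" {
    api {
        api_key = "<YOUR_API_KEY_HERE>"
    }

    sending_queue {
        queue_size        = 5000
        num_consumers     = 20
        block_on_overflow = true
    }
}
```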

### `batch`

The `batch` block configures batching requests based on a timeout and a minimum number of items.

Batching is disabled by default. To enable it, explicitly include `batch {}` in your Alloy configuration. You don't need a `batch {}` block in your `otelcol.exporter` if you already use an `otelcol.processor.batch` component, although batching in the exporter is the preferred method because it's more flexible.

The following arguments are supported:


| Name            | Type       | Description                                                                                                | Default   | Required |
|-----------------|------------|------------------------------------------------------------------------------------------------------------|-----------|----------|
| `flush_timeout` | `duration` | Time after which a batch will be sent regardless of its size. Must be a non-zero value.                    | `"200ms"` | no       |
| `min_size`      | `number`   | The minimum size of a batch.                                                                               | `2000`    | no       |
| `max_size`      | `number`   | The maximum size of a batch, enables batch splitting.                                                      | `3000`    | no       |
| `sizer`         | `string`   | How the queue and batching is measured. Overrides the sizer set at the `sending_queue` level for batching. | `"items"` | no       |

If configured, `max_size` must be greater than or equal to `min_size`.

The `sizer` argument can be set to:

- `items`: The number of the smallest parts of each signal (spans, metric data points, log records).
- `bytes`: The size of serialized data in bytes (the least performant option).
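For example, a sketch that enables batching with a shorter flush timeout and smaller batch bounds; the values and API key are illustrative:

```alloy
otelcol.exporter.datadog "default" {
    api {
        api_key = "<YOUR_API_KEY_HERE>"
    }

    batch {
        flush_timeout = "500ms"
        min_size      = 1000
        max_size      = 2000
    }
}
```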

### `traces`

The `traces` block configures the trace exporter settings.

The following arguments are supported:


| Name                             | Type           | Description                                                                                        | Default                               | Required |
|----------------------------------|----------------|----------------------------------------------------------------------------------------------------|---------------------------------------|----------|
| `compute_stats_by_span_kind`     | `bool`         | Enables APM stats computation based on `span.kind`.                                           | `true`                                | no       |
| `compute_top_level_by_span_kind` | `bool`         | Enables top-level span identification based on `span.kind`.                                   | `false`                               | no       |
| `endpoint`                       | `string`       | The host of the Datadog intake server to send traces to.                                      | `"https://trace.agent.datadoghq.com"` | no       |
| `ignore_resources`               | `list(string)` | A blocklist of regular expressions used to drop traces based on their resource name.          |                                       | no       |
| `peer_tags_aggregation`          | `bool`         | Enables aggregation of peer-related tags in the Datadog exporter.                             | `false`                               | no       |
| `peer_tags`                      | `list(string)` | List of supplementary peer tags that go beyond the defaults.                                  |                                       | no       |
| `span_name_as_resource_name`     | `bool`         | Use the OpenTelemetry semantic convention for span naming.                                    | `true`                                | no       |
| `span_name_remappings`           | `map(string)`  | A map of Datadog span operation name keys and preferred name values to update those names to. |                                       | no       |
| `trace_buffer`                   | `number`       | The number of outgoing trace payloads to buffer before dropping.                              | `10`                                  | no       |

If `compute_stats_by_span_kind` is disabled, only top-level and measured spans will have stats computed. If you are sending OTel traces and want stats on non-top-level spans, this flag must be set to `true`. If you are sending OTel traces and don’t want stats computed by span kind, you must disable this flag and disable `compute_top_level_by_span_kind`.

If `endpoint` is unset, the value is obtained through the `site` parameter in the [`api`](#api) block.

## Exported fields

The following fields are exported and can be referenced by other components:


| Name    | Type               | Description                                                 |
|---------|--------------------|-------------------------------------------------------------|
| `input` | `otelcol.Consumer` | A value other components can use to send telemetry data to. |

`input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces).

## Component health

`otelcol.exporter.datadog` is only reported as unhealthy if given an invalid configuration.

## Debug information

`otelcol.exporter.datadog` doesn’t expose any component-specific debug information.

## Example

### Forward Prometheus metrics

This example forwards Prometheus metrics from Alloy through a receiver for conversion to OpenTelemetry format before sending them to Datadog. If you are using the US Datadog APIs, the `api` block is required for the exporter to function.


```alloy
prometheus.exporter.self "default" {
}

prometheus.scrape "metamonitoring" {
  targets    = prometheus.exporter.self.default.targets
  forward_to = [otelcol.receiver.prometheus.default.receiver]
}

otelcol.receiver.prometheus "default" {
  output {
    metrics = [otelcol.exporter.datadog.default.input]
  }
}


otelcol.exporter.datadog "default" {
    api {
        api_key = "API_KEY"
    }

    metrics {
        endpoint = "https://api.ap1.datadoghq.com"

        exporter {
            resource_attributes_as_tags = true
        }
    }
}
```

### Full OTel pipeline

This example forwards metrics and traces received in Datadog format to Alloy, converts them to OTel format, and exports them to Datadog.


```alloy
otelcol.receiver.datadog "default" {
    output {
        metrics = [otelcol.exporter.otlphttp.default.input, otelcol.exporter.datadog.default.input]
        traces  = [otelcol.exporter.otlphttp.default.input, otelcol.exporter.datadog.default.input]
    }
}

otelcol.exporter.otlphttp "default" {
    client {
        endpoint = "http://database:4318"
    }
}

otelcol.exporter.datadog "default" {
    client {
        timeout = "10s"
    }

    api {
        api_key             = "abc"
        fail_on_invalid_key = true
    }

    traces {
        endpoint             = "https://trace.agent.datadoghq.com"
        ignore_resources     = ["(GET|POST) /healthcheck"]
        span_name_remappings = {
            "instrumentation:express.server" = "express",
        }
    }

    metrics {
        delta_ttl = 1200
        endpoint  = "https://api.datadoghq.com"

        exporter {
            resource_attributes_as_tags = true
        }

        histograms {
            mode = "counters"
        }

        sums {
            initial_cumulative_monotonic_value = "keep"
        }

        summaries {
            mode = "noquantiles"
        }
    }
}
```

## Compatible components

`otelcol.exporter.datadog` has exports that can be consumed by the following components:

- Components that consume [OpenTelemetry `otelcol.Consumer`](../../../compatibility/#opentelemetry-otelcolconsumer-consumers)

> Note
> 
> Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.
