
Push metrics from Influx Telegraf to Prometheus

Grafana Cloud now supports ingesting Influx Line protocol. You can point Telegraf, or any other service that pushes Line protocol metrics, at Grafana Cloud over HTTP.

For example, if you are using Telegraf to push metrics, configure the InfluxDB output plugin as follows:

 [[outputs.influxdb]]
   urls = ["<Modified metrics instance remote_write endpoint>/api/v1/push/influx"]
   ## HTTP Basic Auth
   username = "Your Metrics instance ID"
   password = "Your API Key"

You can find the URL, username, and password for your metrics endpoint by clicking Details in the Prometheus card of the Cloud Portal. The URL is derived from the Remote Write Endpoint URL with two changes: replace prometheus with influx in the hostname, and change the path from /api/prom/push to /api/v1/push/influx. For example, if your remote_write endpoint is https://prometheus-<cluster>.grafana.net/api/prom/push, your Influx endpoint will be https://influx-<cluster>.grafana.net/api/v1/push/influx.
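The URL rewrite described above can be sketched in Python; the hostname in the example is a placeholder, not a real Grafana Cloud endpoint:

```python
# Derive the Influx push URL from a Prometheus remote_write URL by swapping
# the "prometheus" host prefix for "influx" and replacing the path.
def influx_endpoint(remote_write_url: str) -> str:
    url = remote_write_url.replace("prometheus", "influx", 1)
    return url.replace("/api/prom/push", "/api/v1/push/influx")

print(influx_endpoint("https://prometheus-example.grafana.net/api/prom/push"))
# https://influx-example.grafana.net/api/v1/push/influx
```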

Influx line protocol has the following structure: a metric name and a comma-separated set of k=v tags, then a comma-separated set of k=v fields, then a timestamp:

 metric_name,tag1=v1,tag2=v2 field1=v1,field2=v2 timestamp

We convert the above into the following series in Prometheus:

for each (field, value) in fields:
    metric_name_field{tags...} value @timestamp

For example:

 diskio,host=work,name=dm-1 write_bytes=651264i,read_time=29i 1612356006000

will be converted to:

diskio_write_bytes{host="work", name="dm-1"} 651264 1612356006000
diskio_read_time{host="work", name="dm-1"} 29 1612356006000
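The field-expansion rule above can be sketched in Python. This is a minimal illustration, not the actual ingestion code: it ignores line protocol escaping and string fields, and assumes one line per call:

```python
# Expand one line of Influx line protocol into one Prometheus series per field.
def to_prom_series(line: str) -> list[str]:
    head, field_set, ts = line.rsplit(" ", 2)
    name, _, tag_set = head.partition(",")
    tags = dict(t.split("=", 1) for t in tag_set.split(",") if t)
    labels = ", ".join(f'{k}="{v}"' for k, v in sorted(tags.items()))
    series = []
    for field in field_set.split(","):
        key, val = field.split("=", 1)
        val = val.rstrip("i")  # integer fields carry an "i" suffix
        series.append(f"{name}_{key}{{{labels}}} {val} {ts}")
    return series

for s in to_prom_series("diskio,host=work,name=dm-1 write_bytes=651264i,read_time=29i 1612356006000"):
    print(s)
```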

Pushing from applications directly

Note: if you are pushing metrics directly from an application, use the same endpoint as above but with the /write suffix: <Modified metrics instance remote_write endpoint>/api/v1/push/influx/write.
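A direct push amounts to an HTTP POST of line protocol with basic auth. The sketch below uses Python's standard library; the URL, instance ID, and API key are placeholders you would replace with your own Cloud Portal details:

```python
import base64
import urllib.request

# All values below are placeholders: substitute your own
# /api/v1/push/influx/write endpoint, metrics instance ID, and API key.
URL = "https://influx-example.grafana.net/api/v1/push/influx/write"
INSTANCE_ID = "123456"
API_KEY = "your-api-key"

# One line of Influx line protocol as the request body.
payload = b"diskio,host=work,name=dm-1 write_bytes=651264i,read_time=29i 1612356006000"

token = base64.b64encode(f"{INSTANCE_ID}:{API_KEY}".encode()).decode()
req = urllib.request.Request(
    URL,
    data=payload,
    headers={"Authorization": f"Basic {token}", "Content-Type": "text/plain"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the metrics
```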

Pushing using the remote-write output

Telegraf also supports using a native Prometheus remote-write endpoint.

You can use the same Prometheus remote_write URL given in your Cloud Portal for this. There is no need to change the URL in any way.

Current limitations:

  1. No matter what precision you send the data in, we store it with millisecond precision. Our platform does not currently support precision higher than milliseconds.
  2. We support only float64 on our platform. All integer and boolean values are cast to floating point before storage: true becomes 1 and false becomes 0. We do not currently ingest string values.
  3. We do not support queries via Flux; you will need to use PromQL. Flux support is on our roadmap.
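Limitation 2 can be illustrated with a small sketch of the casting rule; the function below is purely illustrative, not part of Grafana's actual ingestion code:

```python
# Sketch of how field values map onto float64 storage (limitation 2).
def store_value(v):
    if isinstance(v, str):
        return None              # string values are not ingested
    if isinstance(v, bool):      # check bool before numbers: bool subclasses int
        return 1.0 if v else 0.0
    return float(v)

print(store_value(651264))  # 651264.0
print(store_value(True))    # 1.0
print(store_value("busy"))  # None
```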