InfluxDB metrics

Experimental: Push metrics from Influx Telegraf to Prometheus

Note: This is an experimental feature. We’re hard at work improving it and would love to hear your feedback; however, we cannot provide an SLA for this endpoint at this time.

We now have beta support for ingesting Influx Line protocol into Grafana Cloud. You can now point Telegraf and other services that push Line protocol metrics at Grafana Cloud via HTTP.

For example, if you are using Telegraf to push metrics, the following configuration is required:

 urls = ["<Your Metrics instance remote_write endpoint>/api/v1/push/influx"]
 ## HTTP Basic Auth
 username = "Your Metrics instance ID"
 password = "Your API Key"
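Putting this together, a minimal Telegraf output section might look like the sketch below. This assumes the `[[outputs.influxdb]]` plugin; the endpoint, instance ID, and API key are placeholders you must replace with your own values:

```toml
[[outputs.influxdb]]
  ## Your Metrics instance remote_write endpoint, with /api/prom/push
  ## replaced by /api/v1/push/influx (placeholder URL shown)
  urls = ["https://<your-endpoint>/api/v1/push/influx"]

  ## HTTP Basic Auth
  username = "<Your Metrics instance ID>"
  password = "<Your API Key>"
```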

You can find the URL, username, and password for your metrics endpoint by clicking Details on the Prometheus card in the Cloud Portal. Make sure the URL is correct: remove the /api/prom/push suffix from the remote_write endpoint and replace it with /api/v1/push/influx.
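As a quick sanity check, the URL rewrite described above can be sketched in Python (the hostname below is a placeholder for illustration, not a real instance):

```python
def influx_push_url(remote_write_url: str) -> str:
    """Derive the Influx push URL from a Prometheus remote_write URL
    by swapping the /api/prom/push suffix for /api/v1/push/influx."""
    suffix = "/api/prom/push"
    if not remote_write_url.endswith(suffix):
        raise ValueError("unexpected remote_write URL: " + remote_write_url)
    return remote_write_url[: -len(suffix)] + "/api/v1/push/influx"

# Placeholder host for illustration only.
print(influx_push_url("https://prometheus-example.grafana.net/api/prom/push"))
```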

Influx line protocol has the following structure: metric_name (set of k=v tags) (set of k=v fields) timestamp

We convert the above into the following series in Prometheus:

for each (field, value) in fields:
    metric_name_field{tags...} value @timestamp

For example: diskio,host=work,name=dm-1 write_bytes=651264i,read_time=29i 1612356006000

Will be converted to:

diskio_write_bytes{host="work", name="dm-1"} 651264 1612356006000
diskio_read_time{host="work", name="dm-1"} 29 1612356006000
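The conversion above can be sketched in Python. This is a deliberately simplified parser for illustration only; it ignores escaping, quoting, and precision handling:

```python
def influx_to_prom(line: str) -> list:
    """Convert one Influx Line protocol line into Prometheus-style series,
    one series per field, named metric_field{tags} value timestamp."""
    head, fields_part, ts = line.rsplit(" ", 2)
    name, *tag_pairs = head.split(",")
    tags = ", ".join('{}="{}"'.format(*t.split("=", 1)) for t in tag_pairs)
    series = []
    for field in fields_part.split(","):
        key, value = field.split("=", 1)
        value = value.rstrip("i")  # integer fields carry an 'i' suffix
        series.append(f"{name}_{key}{{{tags}}} {value} {ts}")
    return series

for s in influx_to_prom(
    "diskio,host=work,name=dm-1 write_bytes=651264i,read_time=29i 1612356006000"
):
    print(s)
```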

Pushing from applications directly

Note: if you’re pushing metrics directly, use the same endpoint as above but with: <Your Metrics instance remote_write endpoint>/api/v1/push/influx/write
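For illustration, one way to build such a request with HTTP Basic Auth using only the Python standard library (the endpoint and credentials below are placeholders; the request is constructed but not sent):

```python
import base64
import urllib.request

def build_influx_push_request(base_url, instance_id, api_key, lines):
    """Build an HTTP POST carrying Line protocol data to the
    /api/v1/push/influx/write endpoint, with HTTP Basic Auth."""
    token = base64.b64encode(f"{instance_id}:{api_key}".encode()).decode()
    return urllib.request.Request(
        url=base_url + "/api/v1/push/influx/write",
        data="\n".join(lines).encode(),
        headers={"Authorization": "Basic " + token},
        method="POST",
    )

# Placeholder values for illustration only.
req = build_influx_push_request(
    "https://prometheus-example.grafana.net",
    "123456",
    "my-api-key",
    ["diskio,host=work,name=dm-1 write_bytes=651264i 1612356006000"],
)
print(req.full_url)
```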

Pushing using the remote-write output

Telegraf also supports using a native Prometheus remote-write endpoint.

You can use the same URL given in your Cloud Portal for this; there is no need to change the URL in any way.
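A sketch of such a Telegraf configuration, assuming the `[[outputs.http]]` plugin with the prometheusremotewrite serializer (available in recent Telegraf versions; all values shown are placeholders):

```toml
[[outputs.http]]
  ## The unmodified remote_write URL from the Cloud Portal
  url = "<Your Metrics instance remote_write endpoint>"
  data_format = "prometheusremotewrite"

  ## HTTP Basic Auth
  username = "<Your Metrics instance ID>"
  password = "<Your API Key>"

  [outputs.http.headers]
    Content-Type = "application/x-protobuf"
    Content-Encoding = "snappy"
    X-Prometheus-Remote-Write-Version = "0.1.0"
```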

Current limitations:

  1. We don’t ingest out-of-order data, but supporting it is on our roadmap.
  2. Regardless of the precision you send data in, we store it with millisecond precision. Our platform does not currently support precision finer than a millisecond.
  3. We support only float64 values on our platform. All integer and boolean values are cast to floating point before storage: true becomes 1 and false becomes 0. We don’t currently ingest string values.
  4. We don’t support queries via Flux; you will need to use PromQL. Flux support is on our roadmap.
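The value handling in limitation 3 behaves like Python’s float coercion. This is an illustrative analogy of the described behavior, not the ingester’s actual code:

```python
def coerce(value):
    """Mimic how non-float values are stored: integers and booleans
    become float64; strings are rejected (not ingested)."""
    if isinstance(value, str):
        raise TypeError("string values are not ingested")
    return float(value)

print(coerce(True))    # booleans become 1.0 / 0.0
print(coerce(651264))  # integers become floats
```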