[PromCon EU Recap] 'Fixing' Remote Write

Published: 7 Jan 2020

During a lightning talk at PromCon EU last November, Grafana Labs developer Callum Styan, who contributes to Cortex and Prometheus, talked about improvements that have been made to Prometheus remote write – the result of about six months of work.

At the start, Styan provided a brief summary of remote write basics.

Remote Write Basics

He reminded the audience that remote write will, if you configure it to, send all of your metrics somewhere other than Prometheus: to long-term storage, for example, or to a system with a different query language.

In his case, he wanted it for Cortex, which also provides query parallelization.
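As background, remote write is turned on with a remote_write block in prometheus.yml. A minimal sketch, with a placeholder Cortex endpoint:

```yaml
# prometheus.yml -- minimal remote write setup; the URL below is a placeholder
remote_write:
  - url: "http://cortex.example.com/api/prom/push"
```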

The Issues

[Figure: Prometheus architecture]

Previously, the retrieval portion of Prometheus would scrape targets (see Retrieval in the architecture diagram above), and remote write would copy every sample and buffer unsent samples in memory until it could successfully send them on to the remote storage system.

That wasn’t great, Styan said: “What happens if the remote write endpoint goes down, and you continually buffer data?”

As he explained, that buffer was a fixed size, so if filling the buffer didn’t OOMkill your Prometheus, remote write would eventually just start dropping samples when the buffer filled up, which is something you don’t want either. “If this remote storage system is supposed to be for your long-term storage but you don’t end up with all your data there,” he said, “that’s not great either.”
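To make the failure mode concrete, here is a simplified Go sketch, not the actual Prometheus code: a fixed-size in-memory buffer that silently drops samples once the remote endpoint stops draining it.

```go
// Simplified sketch of the old behavior (not the actual Prometheus code):
// a fixed-size in-memory buffer that drops samples once it fills up.
package main

import "fmt"

type sample struct {
	ts    int64
	value float64
}

type queue struct {
	buf     chan sample
	dropped int
}

// enqueue never blocks the scrape path; when the remote endpoint is down
// and the buffer is full, the sample is silently dropped.
func (q *queue) enqueue(s sample) {
	select {
	case q.buf <- s:
	default:
		q.dropped++ // data loss: exactly the problem Styan described
	}
}

func main() {
	q := &queue{buf: make(chan sample, 2)} // tiny buffer for demonstration
	for i := 0; i < 5; i++ {
		q.enqueue(sample{ts: int64(i), value: float64(i)})
	}
	fmt.Println("dropped samples:", q.dropped) // prints: dropped samples: 3
}
```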

The Fix

Here is the way remote write works now:

It reads the same write-ahead log (WAL) that Prometheus is already generating.

“All of your data scraped, the write-ahead log is written, all that data is there until eventually something’s written to long-term storage on disk,” he said. “So we just tail the same write-ahead log that is already being written.”

If the buffer is full, it doesn’t read in more data.

“There’s still an internal fixed-size buffer,” he said. “The catch is, we don’t continue to read the write-ahead log if the buffer is already full. So, in theory, remote write will no longer OOM your Prometheus.”
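A rough sketch of that tail-with-backpressure loop, using illustrative names rather than Prometheus’s actual types (the real implementation lives in the WAL watcher and is considerably more involved):

```go
// Hypothetical sketch of tailing the WAL with backpressure; the names are
// illustrative, not Prometheus's actual types.
package main

import "fmt"

type sample struct {
	ts    int64
	value float64
}

// walReader stands in for whatever tails the on-disk write-ahead log.
type walReader interface {
	Next() (sample, bool) // next record, false when caught up
}

type sliceReader struct {
	data []sample
	pos  int
}

func (r *sliceReader) Next() (sample, bool) {
	if r.pos >= len(r.data) {
		return sample{}, false
	}
	s := r.data[r.pos]
	r.pos++
	return s, true
}

// tail only pulls another WAL record when the send buffer has room, so a
// slow or down remote endpoint can never grow memory without bound.
func tail(r walReader, buf chan sample) {
	for {
		s, ok := r.Next()
		if !ok {
			close(buf) // caught up; a real watcher would wait for new segments
			return
		}
		buf <- s // blocks when the buffer is full: backpressure instead of OOM
	}
}

func main() {
	r := &sliceReader{data: []sample{{1, 0.5}, {2, 0.7}, {3, 0.9}}}
	buf := make(chan sample, 2) // fixed-size internal buffer
	go tail(r, buf)
	for s := range buf { // the sender drains the buffer toward remote storage
		fmt.Println("send", s.ts, s.value)
	}
}
```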

You have to cache labels.

There is one side-effect from the changes: “When you scrape your endpoints, you get the metric name, all the possible labels, and then the value,” he said. “But in the write-ahead log format you get the labels and metric name once, and then from there on out, you just have a reference to an ID, and you have to go look at those labels.”

“Newer versions of remote write in the happy path use more memory,” he added, “but in the worst-case scenario, it won’t crash.”

In other words … Prometheus’ write-ahead log contains all the information remote write needs: both scraped samples (timestamp and value) and records containing label data. By reusing the WAL, remote write effectively has an on-disk buffer of about two hours of data. This means that as long as remote write doesn’t fall too far behind, it never loses data and never buffers too much in memory, avoiding OOMkill scenarios.
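The label cache he describes can be pictured as a map from series reference to label set, populated from the WAL’s series records and consulted for every sample record. A hypothetical sketch, with the record shapes simplified rather than matching Prometheus’s actual WAL encoding:

```go
// Hypothetical sketch of the series-ref label cache; the record shapes are
// simplified, not Prometheus's actual WAL encoding.
package main

import "fmt"

// A series record appears once in the WAL and maps a numeric series
// reference to its metric name and label set.
type seriesRecord struct {
	ref    uint64
	labels map[string]string
}

// A sample record carries only the series reference plus timestamp and value.
type sampleRecord struct {
	ref   uint64
	ts    int64
	value float64
}

func main() {
	// Cache built while tailing the WAL: this is the extra happy-path
	// memory that the newer remote write pays for.
	cache := map[uint64]map[string]string{}

	series := seriesRecord{ref: 1, labels: map[string]string{
		"__name__": "http_requests_total", "job": "api",
	}}
	cache[series.ref] = series.labels

	s := sampleRecord{ref: 1, ts: 1578000000000, value: 42}
	labels, ok := cache[s.ref]
	if !ok {
		panic("sample before its series record") // shouldn't happen in a valid WAL
	}
	fmt.Println(labels["__name__"], s.ts, s.value)
}
```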

Additional Work

There has been more going on since these changes were made. “We’re still working on the sharding bits,” he said. “It mostly works now, other than when you restart Prometheus, it doesn’t necessarily catch up very well.”

Since the refactor discussed in this talk, additional minor improvements have been made to remote write.

For more on Styan’s remote write work, check out his blog post, “What’s New in Prometheus 2.8: WAL-Based Remote Write.”
