
Real-time monitoring of Formula 1 telemetry data on Kubernetes with Grafana, Apache Kafka, and Strimzi


February 2, 2021 · 7 min

Paolo is a Principal Software Engineer working for Red Hat on the messaging and IoT team. He is a maintainer of Strimzi, a CNCF sandbox project for running Apache Kafka on Kubernetes using operators. He is also a Microsoft MVP and an Eclipse committer, maintaining Vert.x-based Kafka and MQTT components. He has spoken at numerous national and international conferences about Kafka, Strimzi, and IoT.

Data streaming is important for getting insights in real time and reacting to events as fast as possible. Its applications are wide-ranging, from banking transactions and website click analytics to IoT devices and motorsports.

That last example is a particularly interesting use case. Think about Formula 1 circuits, where each car sends a huge amount of telemetry data during practice and qualifying sessions; engineers analyze it to improve car performance and give the drivers the insights they need to win races.

This article describes how to set up a technology stack to monitor Formula 1 telemetry data, from ingestion all the way to the engineers' dashboards, with everything running in a cloud native environment.

Getting the telemetry data

Of course, real Formula 1 data is not easily accessible, so the best way is to simulate it.

In this case, the F1 2020 game by Codemasters can publish all the available telemetry data during a race via UDP (from a Microsoft Xbox, for example). The specification of all possible packets is freely available online, so it's possible to decode them, as I did when developing the formula1-telemetry library in Java.

The library contains the logic for decoding the raw bytes of each UDP packet and provides a higher-level model with all the information about drivers, car status, lap times, and so on. All this data can be easily consumed by the rest of the stack.
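To give an idea of what the decoding involves, here is a minimal sketch of reading the common header that prefixes every F1 2020 UDP packet. It is only illustrative and not the actual code of the formula1-telemetry library; the field layout follows the publicly documented packet specification, which uses little-endian encoding.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PacketHeaderDecoder {

    // Decodes the common header that prefixes every F1 2020 telemetry packet.
    public static void decodeHeader(byte[] datagram) {
        ByteBuffer buffer = ByteBuffer.wrap(datagram).order(ByteOrder.LITTLE_ENDIAN);

        int packetFormat = Short.toUnsignedInt(buffer.getShort());   // e.g. 2020
        int gameMajorVersion = Byte.toUnsignedInt(buffer.get());
        int gameMinorVersion = Byte.toUnsignedInt(buffer.get());
        int packetVersion = Byte.toUnsignedInt(buffer.get());
        int packetId = Byte.toUnsignedInt(buffer.get());             // 0 = motion, 6 = car telemetry, ...
        long sessionUid = buffer.getLong();
        float sessionTime = buffer.getFloat();
        long frameIdentifier = Integer.toUnsignedLong(buffer.getInt());
        int playerCarIndex = Byte.toUnsignedInt(buffer.get());

        System.out.printf("format=%d packetId=%d sessionTime=%.2f frame=%d player=%d%n",
                packetFormat, packetId, sessionTime, frameIdentifier, playerCarIndex);
    }
}
```

Once the packet type is known from the header, the rest of the datagram can be mapped to the corresponding model class (lap data, car telemetry, car status, and so on).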

The Kubernetes-based ingesting platform

The first step is to ingest the telemetry data into a system that is able to provide low latency, high throughput, and durable storage. Apache Kafka is the ideal answer, but it is typically not easy to run on bare metal and even harder on Kubernetes, which is where we want it in order to enable a hybrid, cloud native solution. Luckily, the Strimzi project makes it really easy!

Strimzi is an open source CNCF sandbox project that enables a Kubernetes-native experience when deploying and managing an Apache Kafka cluster and its related ecosystem. Thanks to the Custom Resource Definition mechanism, provided by Kubernetes itself for extensibility, Strimzi offers some new custom resources for describing an Apache Kafka cluster, creating topics and users, and much more. It implements the operator pattern for doing so, taking care of the cluster for you from the installation to the upgrades.

You can easily define the number of brokers, the kind of storage, the configuration of brokers and listeners, metrics exposure (using Prometheus and Grafana), and much more, all in a declarative way through a Kafka resource. The Strimzi operator watches the custom resources and reflects any changes in the underlying stateful sets, services, persistent volume claims, and all the other Kubernetes-native resources needed to run your Apache Kafka cluster.
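As a rough idea of what that looks like, here is a minimal sketch of a Kafka custom resource. The cluster name, sizing, and storage values are illustrative choices rather than the exact configuration used for this project, and the API version depends on the Strimzi release in use.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3                     # number of Kafka brokers
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim        # durable storage backed by PVCs
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
  entityOperator:
    topicOperator: {}               # manage topics via KafkaTopic resources
    userOperator: {}                # manage users via KafkaUser resources
```

Applying a resource like this with kubectl is enough for the operator to create and keep reconciling the whole cluster.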

More information on how Strimzi works is available on the official website and in the documentation; you can also find interesting articles and use cases on the blog.

Ingesting the telemetry data

In order to go from UDP-exposed data to Apache Kafka running on Kubernetes, an Apache Camel-based application is used. Apache Camel is an open source integration framework that makes it quick and easy to integrate various systems that consume or produce data.

In this specific use case, the application implements multiple Camel routes (sketched after this list) to receive the data over UDP and dispatch it to different Apache Kafka topics containing:

  • the decoded raw packets with no additional processing
  • aggregated driver data with all the corresponding information about car status, telemetry, motion, and lap data
  • race events like speed traps and best lap
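A simplified sketch of such a route could look like the following. The endpoint URIs, topic name, and decoding step are assumptions for illustration (port 20777 is the game's default telemetry port); the real application is more elaborate.

```java
import org.apache.camel.builder.RouteBuilder;

public class TelemetryIngestRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Listen for the telemetry datagrams sent by the game over UDP.
        from("netty:udp://0.0.0.0:20777?sync=false")
            // Decode the raw bytes into the higher-level packet model
            // (placeholder for the formula1-telemetry decoding logic).
            .process(exchange -> {
                byte[] datagram = exchange.getIn().getBody(byte[].class);
                // exchange.getIn().setBody(decode(datagram));
            })
            // Publish the result to a Kafka topic; topic and bootstrap address are illustrative.
            .to("kafka:f1-telemetry-packets?brokers=my-cluster-kafka-bootstrap:9092");
    }
}
```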

Of course, this application runs locally where the simulated data is available through the F1 2020 game.

After the telemetry data is ingested into Apache Kafka running inside a Kubernetes cluster, it is made available for monitoring as a time series, using InfluxDB as a data source for Grafana.

In order to do so, another Apache Camel-based application, this time running on Kubernetes alongside the Apache Kafka cluster, is used to get the data from the topics and write it to an InfluxDB database.
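As a minimal sketch, such a route might look like this, assuming an InfluxDB connection bean registered as influxDbBean and illustrative topic, measurement, and database names (the actual mapping from the decoded driver model to InfluxDB points is richer than shown here):

```java
import java.util.concurrent.TimeUnit;

import org.apache.camel.builder.RouteBuilder;
import org.influxdb.dto.Point;

public class TelemetryToInfluxRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Consume the aggregated driver data from Kafka...
        from("kafka:f1-telemetry-drivers?brokers=my-cluster-kafka-bootstrap:9092&groupId=influxdb-writer")
            // ...turn each record into an InfluxDB point (measurement, tag, and field names are assumptions)...
            .process(exchange -> {
                Point point = Point.measurement("telemetry")
                        .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
                        .tag("driver", "HAM")
                        .addField("speed", 287.0)
                        .build();
                exchange.getIn().setBody(point);
            })
            // ...and write it to the database through the camel-influxdb component.
            .to("influxdb://influxDbBean?databaseName=formula1&retentionPolicy=autogen");
    }
}
```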

The telemetry monitoring dashboards

At this point, showing the ingested telemetry data is really simple using Grafana. InfluxDB fits well as the time series database here because it's one of the many data sources that can be used with Grafana.

One dashboard has multiple panels showing the data plotted for all the drivers, in order to allow comparison in real time. The telemetry section shows the speed and the engine RPM, as well as their correlation with throttle and brake.

The motion panel provides information about G force, both longitudinal and lateral. The longitudinal one can be correlated with throttle and brake graphs to show what kind of force a driver is subjected to on acceleration and deceleration. The lateral force is more interesting when the car is facing a turn, or if it’s hit by another car due to an incident during the race.

Speaking of incidents, the car status panels show the damage on the car wings (both front and rear); the fuel in the tank is shown in this section as well.

Another dashboard shows the main events happening during the race; for example, the maximum speed reached by the drivers on the track (the speed trap) as well as the fastest lap.

The dashboards above are really useful for showing and comparing data across different drivers. A specific dashboard can also be created to show the main telemetry for a single driver, selected via a dashboard variable.

The top part provides some stat panels with lap times, distance, and the driver's position during the race.

The bottom part has several graphs, each showing more than one type of telemetry data. For example, it's interesting to correlate the throttle and the brake, as well as the speed and the engine revolutions per minute. Other useful information displayed here includes the brake temperatures, the steering position, and the gear the driver is using.

The last section shows information about tires in terms of their compound, age, wear (both using gauges and graphs), surface temperature, and damage.

With a cursor highlighting and moving across all three panels, it's really easy to see how the car is behaving from all the above perspectives. The dashboard gives the telemetry engineer important insights into where the car can be improved and at which point of the circuit the driver is losing time.

Conclusion

Using the simulated Formula 1 telemetry data as an example, it’s possible to show how different technologies can be tied together to build a complex real-time analytics pipeline from data capture to interpretation. You can watch everything running in this short video, and if you want to know more about this solution, join me and my colleague Tom Cooper at DevConf.CZ on February 19 for the session “Formula 1 telemetry processing using Kafka Streams.”

The cool part is that almost every part of this solution is open source. Obviously you will need a copy of the game and a console to play it on. But apart from that, Grafana, InfluxDB, and Strimzi are all available as open source, and you can find all the components I developed in a GitHub repository, along with all the instructions, configuration, and dashboards necessary to see it in action yourself!

Finally, I'd like to thank my 8-year-old son for supplying the data used for this post! I didn't have to force him too much to play for me. :-)