
Distributed performance testing for Kubernetes environments: Grafana k6 Operator 1.0 is here

2025-09-16 7 min

Performance testing is critical to building reliable applications, but testing at scale, especially inside modern Kubernetes environments, can be a challenge. For example, how do you coordinate tests across multiple nodes, test private services without compromising security, or even do both at once? And most importantly, how do you do all this without adding too much operational complexity to your stack?

This is where Grafana k6 Operator comes in. What started as an experiment by k6 developer advocate Simon Aronsson back in 2020 has grown into a powerful open source project that makes it easier to run, manage, and scale distributed performance tests directly on your Kubernetes clusters. And today, we’re proud to share that k6 Operator has officially reached its 1.0 release. 🎉

In this post, we’ll walk you through how k6 Operator works, how it’s evolved over the years, and what’s new in the 1.0 release.  

What is k6 Operator and how does it work?

Grafana k6 is an open source load and performance testing platform. k6 Operator, specifically, is a Kubernetes operator that you can use to run distributed k6 tests in your Kubernetes cluster.

When your applications are deployed in Kubernetes, k6 Operator makes it easy to deploy and manage k6 tests in that same environment. It works by using a Custom Resource Definition (CRD) to describe a test run, then bootstrapping the necessary pods to execute it. 

The k6 Operator defines two main CRDs:

TestRun: Declaratively runs a k6 test in Kubernetes. As of now, the TestRun CRD (depicted in the diagram below) is the simplest way to run a distributed k6 test with OSS tooling, assuming you have a Kubernetes cluster.
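As a rough sketch, a minimal TestRun manifest might look like the following. The resource and ConfigMap names are placeholders, and the test script is assumed to already exist in a ConfigMap in the cluster; see the k6 Operator documentation for the full schema.

```yaml
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-sample        # placeholder name for this test run
spec:
  parallelism: 4         # split the test across 4 runner pods
  script:
    configMap:
      name: k6-test      # ConfigMap assumed to hold the script
      file: test.js      # entry point inside that ConfigMap
```

Applying this resource with kubectl prompts the operator to bootstrap the initializer, starter, and runner pods described above.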

[Diagram: the workflow of a TestRun CRD, showing a Kubernetes cluster with k6 Operator, an initializer pod, a starter pod, runner pods, and a private service with HTTP and log connections.]

PrivateLoadZone: Registers a Private Load Zone (PLZ) so that Grafana Cloud k6 — the fully managed performance testing platform powered by k6 OSS — can run tests inside your Kubernetes cluster. This allows you to perform tests with a simple k6 cloud run command rather than create a TestRun Kubernetes resource manually. 
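A hedged sketch of a PrivateLoadZone manifest is shown below. The field names follow the k6 Operator documentation as we understand it, and the Secret name is a placeholder: the Grafana Cloud k6 token is assumed to be stored in a Kubernetes Secret in the same namespace.

```yaml
apiVersion: k6.io/v1alpha1
kind: PrivateLoadZone
metadata:
  name: my-plz                 # placeholder name for the load zone
spec:
  token: grafana-k6-token      # name of the Secret holding the Grafana Cloud k6 token
  resources:                   # resources reserved per runner pod
    limits:
      cpu: 200m
      memory: 1Gi
```

Once the load zone is registered, tests can be launched from the CLI (for example, with a command along the lines of `k6 cloud run script.js`) instead of creating TestRun resources by hand.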

[Image: a Grafana Cloud k6 dashboard displaying web performance metrics (LCP, FID, CLS, FCP, INP, TTFB) with graphs and scores.]

Overall, k6 Operator’s biggest advantage is that it simplifies running distributed k6 tests across multiple machines. These tests stay fully synchronized, ensuring accurate and reliable results at scale. It also simplifies the testing of applications within a private network, without breaking security or privacy policies.

Finally, as mentioned above, k6 Operator integrates with Grafana Cloud k6, allowing you to combine all these benefits with the additional feature set of Grafana Cloud k6. 

The evolution of k6 Operator: a community-driven project

Kubernetes operators are a relatively new technology, and there are no formal guidelines on how they should be implemented. It’s safe to say we’ve been growing our knowledge of them alongside our expanding community of users. As more people used k6 Operator over the years, and kindly provided their feedback, it became more and more clear what was missing or could be improved.

There were several major changes in the course of developing k6 Operator. One of the biggest hurdles came in 2023 with what we internally called the “idempotency epic.” Kubernetes operators are expected to be idempotent, but the API that k6 Operator interacts with isn’t. The full fix, released in v0.0.9 with help from a community contributor, remains the most complex challenge the project has overcome to date.

Other major changes focused on usability, adding support for modern installation methods: a simple bundle file, applied with a single CLI command, for quick prototyping or complex setups, along with a Helm chart for managing the operator like any other Kubernetes application.
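For reference, both installation paths look roughly like this. The commands below follow the project's public README; the exact chart name and bundle URL should be double-checked against the current docs before use.

```shell
# Quick start: apply the all-in-one bundle with a single command
kubectl apply -f https://github.com/grafana/k6-operator/releases/latest/download/bundle.yaml

# Or install via the Helm chart from Grafana's chart repository
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install k6-operator grafana/k6-operator
```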

The OSS community has played a critical role in the evolution of k6 Operator. At the time of writing, there are 63 external contributors to the k6 Operator code base, and 99 out of 328 merged PRs are from those community members. They’ve extended CRDs with new features, fixed bugs, polished the Helm chart, and often implemented the very features they requested — from init containers to enhanced affinity rules.

Special thanks to Hans Knecht, who laid the groundwork for k6 Operator in 2021 with initial Istio support and CI pipelines, and to Rogerio Kino, who authored the largest PR from an external contributor with the initial implementation of the Helm chart.  

And of course, countless others have shaped k6 Operator along the way, so we extend a heartfelt thank you to everyone who has contributed. 💛💙

What’s new with k6 Operator 1.0?

Preparing for k6 Operator 1.0 involved some breaking changes, but they all shipped in previous versions of the operator.

The 1.0 release itself contains bug fixes and other improvements, like new configuration options in the Helm chart and the ability to pass organization-specific metrics aggregation variables in PrivateLoadZone tests. But the main change is in our commitment to improve versioning, release schedules, and maintenance updates. 

Regular maintenance updates

k6 Operator does not depend on the latest version of Kubernetes to run. This is because our goal with the operator is to simplify your k6 testing journey, not force you into Kubernetes updates.

However, we recognize that the Kubernetes ecosystem is constantly evolving, so there will be regular maintenance updates to ensure k6 Operator includes recent bug fixes and major improvements from the Kubernetes community.

If there is a new feature released by Kubernetes that makes sense to reuse in the k6 Operator setting, it’ll be considered during a maintenance update or upon a user feature request. To learn more about our approach to maintenance, check out this doc.

Support for Semantic Versioning

k6 Operator now follows Semantic Versioning 2.0. Overall, this means there's greater stability in how we approach the future development of k6 Operator. Additionally, semantic versioning will now convey the impact of a release by incrementing the corresponding version number. Let's briefly summarize what that means for k6 Operator, specifically.

The increase of a major version can happen in the following instances:

  1. Version upgrade of the existing CRD type.
  2. Major backwards incompatible change of the existing CRD type, no matter its version.
  3. Major change in functionality of the application.
  4. Other major changes as determined by internal Grafana Labs priorities.

New features or improvements will be released with an increase of a minor version. If a major breaking change is in preparation, a deprecation warning will be included in one of the preceding minor versions. Bug fixes will be released as patch versions.

It should be noted that there can be different types of breaking changes and the meaning of a “breaking change” may depend on one’s definition. In the k6 Operator project, specifically, there are several types of APIs. We’re reserving the right to release a minor version with a breaking change when that breaking change is small and is limited to a CRD type with v1alpha1 version. However, any major breaking change will warrant a major release version.

To read more about our approach to versioning and stability guarantees, please visit this doc.

A more predictable release schedule

In the past, k6 Operator releases were usually driven by the amount of work merged into the main branch. If there was an insufficient number of merges, there was no release. Now, we are committing to a more predictable schedule, releasing a new minor version every 8 weeks. 

That said, we don’t promise that each minor release will contain new features: it might be as small as a simple maintenance update, but it will still ship as a minor release, on schedule. The cadence of new features depends on our users’ requests and internal prioritization, and is subject to change.

Patch releases, especially in the case of bugs, can be released more frequently. If there’s a critical bug, we’ll aim to have a patch release as soon as possible.

Major releases do not have a regular schedule at this time. Once there are plans about the next major release, we’ll share them with the community.

Even as we evolve our release process, we remain committed to our user-first approach and direct communication via issues, PRs, and release notes. We’ll still pay close attention and carefully evaluate each use case before adding new features, especially in case of breaking changes to a CRD.  We’ll continue to announce any deprecations or major changes via GitHub — and don’t worry, the release notes will still be hand-written and contain emojis!

Next steps 

We’ve published a guide on how to update k6 Operator — something that’s been long-requested by our users. You can also refer to the 1.0 release notes for more details on the latest release.

Lastly, thank you again to all our community members for helping shape the k6 Operator over the years. We’re excited to share this milestone with you, and look forward to our continued collaboration. 

Happy testing!