Scenarios configure VU and iteration schedules in granular detail. With scenarios, you can model diverse workloads and traffic patterns in load tests.

Benefits of using scenarios include:

  • Easier, more flexible test organization. You can declare multiple scenarios in the same script, and each one can independently execute a different JavaScript function.
  • Simulate more realistic traffic. Every scenario can use a distinct VU and iteration scheduling pattern, powered by a purpose-built executor.
  • Parallel or sequential workloads. Scenarios are independent from each other and run in parallel, though they can be made to appear sequential by setting the startTime property of each carefully.
  • Granular results analysis. Different environment variables and metric tags can be set per scenario.
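As a sketch of the sequential pattern described above, two scenarios can be made to run back to back by giving the second a startTime equal to (or beyond) the first one's duration. The scenario names, VU counts, and durations here are illustrative:

```javascript
export const options = {
  scenarios: {
    // Runs first, for 30 seconds.
    first_workload: {
      executor: 'constant-vus',
      vus: 5,
      duration: '30s',
    },
    // Starts only after the first scenario's 30s have elapsed,
    // so the two scenarios appear to run sequentially.
    second_workload: {
      executor: 'constant-vus',
      vus: 5,
      duration: '30s',
      startTime: '30s',
    },
  },
};
```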

Configure scenarios

To configure scenarios, use the scenarios key in the options object. You can give the scenario any name, as long as each scenario name in the script is unique.

The scenario name appears in the result summary, tags, and so on.

export const options = {
  scenarios: {
    example_scenario: {
      // name of the executor to use
      executor: 'shared-iterations',

      // common scenario configuration
      startTime: '10s',
      gracefulStop: '5s',
      env: { EXAMPLEVAR: 'testing' },
      tags: { example_tag: 'testing' },

      // executor-specific configuration
      vus: 10,
      iterations: 200,
      maxDuration: '10s',
    },
    another_scenario: {
      /* ... */
    },
  },
};

Scenario executors

For each k6 scenario, the VU workload is scheduled by an executor. Executors configure how long the test runs, whether traffic stays constant or changes, and whether the workload is modeled by VUs or by arrival rate (that is, open or closed models).

Your scenario object must define the executor property with one of the predefined executor names. Your choice of executor determines how k6 models load. Choices include:

  • shared-iterations
  • per-vu-iterations
  • constant-vus
  • ramping-vus
  • constant-arrival-rate
  • ramping-arrival-rate
  • externally-controlled

Along with the generic scenario options, each executor object has additional options specific to its workload. For the full list, refer to Executors.
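To illustrate how executor-specific options differ between the two load models, the following sketch contrasts a closed-model executor (constant-vus) with an open-model one (constant-arrival-rate); the rates, durations, and VU counts are illustrative:

```javascript
export const options = {
  scenarios: {
    // Closed model: load is defined by a fixed number of VUs,
    // each starting a new iteration as soon as its previous one ends.
    closed_model: {
      executor: 'constant-vus',
      vus: 10,
      duration: '1m',
    },
    // Open model: load is defined by the arrival rate of new iterations.
    // k6 starts 30 iterations per minute, regardless of how long each takes.
    open_model: {
      executor: 'constant-arrival-rate',
      rate: 30,
      timeUnit: '1m',
      duration: '1m',
      preAllocatedVUs: 10,
      maxVUs: 20,
    },
  },
};
```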

Scenario options

  • executor (string, required): Unique executor name. See the list of possible values in the executors section. No default.
  • startTime (string): Time offset since the start of the test, at which point this scenario should begin execution. Default: "0s".
  • gracefulStop (string): Time to wait for iterations to finish executing before stopping them forcefully. To learn more, read Graceful stop. Default: "30s".
  • exec (string): Name of exported JS function to execute. Default: "default".
  • env (object): Environment variables specific to this scenario. Default: {}.
  • tags (object): Tags specific to this scenario. Default: {}.
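As a sketch of how the exec, env, and tags options fit together, the scenario below runs an exported function other than default and reads its scenario-specific environment variable through __ENV. The scenario name, function name, tag, and URL are illustrative:

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    checkout_flow: {
      executor: 'per-vu-iterations',
      vus: 5,
      iterations: 10,
      exec: 'checkout', // run the exported `checkout` function instead of `default`
      env: { BASE_URL: 'https://test.k6.io' },
      tags: { flow: 'checkout' }, // attached to all metrics from this scenario
    },
  },
};

// Executed by the `checkout_flow` scenario.
export function checkout() {
  http.get(`${__ENV.BASE_URL}/`);
}
```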

Scenario example

This script combines two scenarios, with sequencing:

  • The shared_iter_scenario starts immediately. Ten VUs try to use 100 iterations as quickly as possible (some VUs may use more iterations than others).
  • The per_vu_scenario starts after 10s. In this case, ten VUs each run ten iterations.

Which scenario takes longer? Run the script to find out. You can also add a maxDuration property to one or both scenarios to cap how long they may run.

import http from 'k6/http';

export const options = {
  scenarios: {
    shared_iter_scenario: {
      executor: 'shared-iterations',
      vus: 10,
      iterations: 100,
      startTime: '0s',
    },
    per_vu_scenario: {
      executor: 'per-vu-iterations',
      vus: 10,
      iterations: 10,
      startTime: '10s',
    },
  },
};

export default function () {
  http.get('https://test.k6.io/');
}

If you run a script with scenarios, k6 output includes high-level information about each one. For example, if you run the preceding script with k6 run scenario-example.js, then k6 reports the scenarios as follows:

  execution: local
     script: scenario-example.js
     output: -

  scenarios: (100.00%) 2 scenarios, 20 max VUs, 10m40s max duration (incl. graceful stop):
           * shared_iter_scenario: 100 iterations shared among 10 VUs (maxDuration: 10m0s, gracefulStop: 30s)
           * per_vu_scenario: 10 iterations for each of 10 VUs (maxDuration: 10m0s, startTime: 10s, gracefulStop: 30s)

The full output includes the summary metrics, like any default end-of-test summary:
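Because k6 tags each metric with the name of the scenario that produced it, results can also be analyzed per scenario. For instance, thresholds can be scoped with the scenario tag; the threshold values below are illustrative:

```javascript
export const options = {
  scenarios: {
    shared_iter_scenario: {
      executor: 'shared-iterations',
      vus: 10,
      iterations: 100,
    },
    per_vu_scenario: {
      executor: 'per-vu-iterations',
      vus: 10,
      iterations: 10,
      startTime: '10s',
    },
  },
  thresholds: {
    // Apply a latency threshold only to requests made by a single scenario,
    // using the `scenario` tag that k6 sets automatically.
    'http_req_duration{scenario:shared_iter_scenario}': ['p(95)<500'],
    'http_req_duration{scenario:per_vu_scenario}': ['p(95)<800'],
  },
};
```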