Scenarios configure how VUs and iterations are scheduled in granular detail.
With scenarios, you can model diverse workloads or traffic patterns in load tests.
Benefits of using scenarios include:

- **Easier, more flexible test organization.** You can declare multiple scenarios in the same script, and each one can independently execute a different JavaScript function.
- **More realistic traffic simulation.** Every scenario can use a distinct VU and iteration scheduling pattern, powered by a purpose-built executor.
- **Parallel or sequential workloads.** Scenarios are independent from each other and run in parallel, though they can be made to run sequentially by carefully setting the `startTime` property of each.
- **Granular results analysis.** Different environment variables and metric tags can be set per scenario.
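As a sketch of the sequential pattern described above, two scenarios can be chained by offsetting `startTime` so the second begins when the first ends (the scenario names, functions, and durations here are illustrative, not from the original example):

```javascript
// Hypothetical two-phase test: "warm_up" runs for 30s, then "main_load" starts.
export const options = {
  scenarios: {
    warm_up: {
      executor: 'constant-vus',
      vus: 5,
      duration: '30s',
      exec: 'warmup', // runs the exported warmup() function
    },
    main_load: {
      executor: 'constant-vus',
      vus: 50,
      duration: '2m',
      startTime: '30s', // begins once warm_up's duration has elapsed
      exec: 'main',     // runs the exported main() function
    },
  },
};

export function warmup() { /* ... */ }
export function main() { /* ... */ }
```

Note that `startTime` only delays the start; if the first scenario runs longer than expected (for example, due to `gracefulStop`), the two can still overlap.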
## Configure scenarios
To configure scenarios, use the scenarios key in the options object.
You can give the scenario any name, as long as each scenario name in the script is unique.
The scenario name appears in the result summary, tags, and so on.
```javascript
export const options = {
  scenarios: {
    example_scenario: {
      // name of the executor to use
      executor: 'shared-iterations',

      // common scenario configuration
      startTime: '10s',
      gracefulStop: '5s',
      env: { EXAMPLEVAR: 'testing' },
      tags: { example_tag: 'testing' },

      // executor-specific configuration
      vus: 10,
      iterations: 200,
      maxDuration: '10s',
    },
    another_scenario: {
      /*...*/
    },
  },
};
```
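Because each scenario's name is attached to its metric samples as a `scenario` tag, you can also define thresholds scoped to a single scenario. A minimal sketch (the threshold value is illustrative):

```javascript
export const options = {
  scenarios: {
    example_scenario: {
      executor: 'shared-iterations',
      vus: 10,
      iterations: 200,
    },
  },
  thresholds: {
    // Applies only to samples tagged with scenario:example_scenario
    'http_req_duration{scenario:example_scenario}': ['p(95)<250'],
  },
};
```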
## Scenario executors
For each k6 scenario, the VU workload is scheduled by an executor.
Executors configure how long the test runs, whether traffic stays constant or changes, and whether the workload is modeled by VUs or by arrival rate (that is, open or closed models).
Your scenario object must define the executor property with one of the predefined executor names.
Your choice of executor determines how k6 models load.
Choices include:

- `shared-iterations`
- `per-vu-iterations`
- `constant-vus`
- `ramping-vus`
- `constant-arrival-rate`
- `ramping-arrival-rate`
- `externally-controlled`
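The executor choice also determines whether the workload follows a closed model (driven by a fixed set of looping VUs) or an open model (driven by an arrival rate, independent of how fast iterations finish). A minimal sketch contrasting the two (rates, durations, and VU counts are illustrative):

```javascript
export const options = {
  scenarios: {
    // Closed model: 10 VUs loop continuously; throughput depends on response times.
    closed_model: {
      executor: 'constant-vus',
      vus: 10,
      duration: '1m',
    },
    // Open model: k6 starts 30 new iterations per second regardless of
    // how long each one takes, drawing from a pre-allocated VU pool.
    open_model: {
      executor: 'constant-arrival-rate',
      rate: 30,
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 50,
    },
  },
};
```

An open-model executor more closely mimics traffic from independent real-world users, since slow responses don't throttle the arrival of new requests.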
If you run a script with scenarios, k6 output includes high-level information about each one.
For example, if you run the preceding script with `k6 run scenario-example.js`,
then k6 reports the scenarios as follows:
```bash
  execution: local
     script: scenario-example.js
     output: -

  scenarios: (100.00%) 2 scenarios, 20 max VUs, 10m40s max duration (incl. graceful stop):
           * shared_iter_scenario: 100 iterations shared among 10 VUs (maxDuration: 10m0s, gracefulStop: 30s)
           * per_vu_scenario: 10 iterations for each of 10 VUs (maxDuration: 10m0s, startTime: 10s, gracefulStop: 30s)
```
The full output includes the summary metrics, like any default end-of-test summary: