
Custom summary

With handleSummary(), you can completely customize your end-of-test summary. In this document, read about:

  • How handleSummary() works
  • How to customize the content and output location of your summary
  • The data structure of the summary object

Note

Currently, handleSummary() is available only for local tests. However, we plan to support the feature for k6 Cloud tests, too. Track progress in this issue.

About handleSummary()

After your test runs, k6 aggregates your metrics into a JavaScript object. The handleSummary() function takes this object as an argument (called data in all examples here).

You can use handleSummary() to create a custom summary or return the default summary object. To get an idea of what the data looks like, run this script and open the output file, summary.json.

JavaScript
import http from 'k6/http';

export default function () {
  http.get('https://test.k6.io');
}

export function handleSummary(data) {
  return {
    'summary.json': JSON.stringify(data), // the default data object
  };
}

Fundamentally, handleSummary() is just a function that can access a data object. As such, you can transform the summary data into any text format: JSON, HTML, console, XML, and so on. You can pipe your custom summary to standard output or standard error, write it to a file, or send it to a remote server.

k6 calls handleSummary() at the end of the test lifecycle.

Use handleSummary()

The following sections go over the handleSummary() syntax and provide some examples.

To look up the structure of the summary object, refer to the reference section.

Syntax

k6 expects handleSummary() to return a {key1: value1, key2: value2, ...} map that represents the summary metrics.

The keys must be strings. They determine where k6 displays or saves the content:

  • stdout for standard output
  • stderr for standard error
  • any relative or absolute path to a file on the system (this operation overwrites existing files)

The value of a key can have a type of either string or ArrayBuffer.

You can return multiple summary outputs in a script. As an example, this return statement sends a report to standard output and writes the data object to a JSON file.
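
For instance, a minimal sketch of such a return statement might look like this (the file name summary.json and the stdout message are just illustrations):

```javascript
export function handleSummary(data) {
  return {
    // a short report, sent to standard output
    stdout: `Test finished in ${data.state.testRunDurationMs} ms\n`,
    // the full data object, written to a JSON file next to the script
    'summary.json': JSON.stringify(data),
  };
}
```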

Example: Extract data properties

This minimal handleSummary() extracts the median value for the iteration_duration metric and prints it to standard output:

JavaScript
import http from 'k6/http';

export default function () {
  http.get('https://test.k6.io');
}

export function handleSummary(data) {
  const med_latency = data.metrics.iteration_duration.values.med;
  const latency_message = `The median latency was ${med_latency}\n`;

  return {
    stdout: latency_message,
  };
}

Example: Modify default output

If handleSummary() is exported, k6 does not print the default summary. However, if you want to keep the default output, you can import textSummary from the k6 JS utilities library (jslib). For example, you could write a custom HTML report to a file and use the textSummary() function to print the default report to the console.

You can also use textSummary() to make minor modifications to the default end-of-test summary. To do so:

  1. Modify the data object however you want.
  2. In your return statement, pass the modified object as an argument to the textSummary() function.

The textSummary() function comes with a few options:

Option        Description
indent        How to start the summary indentation
enableColors  Whether to print the summary in color

For example, this handleSummary() modifies the default summary in the following ways:

  • It deletes the http_req_duration{expected_response:true} sub-metric.
  • It deletes all metrics whose key starts with iteration.
  • It begins each line with the → character.
JavaScript
import http from 'k6/http';
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.2/index.js';

export default function () {
  http.get('https://test.k6.io');
}

export function handleSummary(data) {
  delete data.metrics['http_req_duration{expected_response:true}'];

  for (const key in data.metrics) {
    if (key.startsWith('iteration')) delete data.metrics[key];
  }

  return {
    stdout: textSummary(data, { indent: '→', enableColors: true }),
  };
}


Example: Make custom file format

This script imports a helper function to turn the summary into a JUnit XML. The output is a short XML file that reports whether the test thresholds failed.

JavaScript
import http from 'k6/http';

// Use example functions to generate data
import { jUnit } from 'https://jslib.k6.io/k6-summary/0.0.2/index.js';
import k6example from 'https://raw.githubusercontent.com/grafana/k6/master/examples/thresholds_readme_example.js';

export default k6example;
export const options = {
  vus: 5,
  iterations: 10,
  thresholds: {
    http_req_duration: ['p(95)<200'], // 95% of requests should be below 200ms
  },
};

export function handleSummary(data) {
  console.log('Preparing the end-of-test summary...');

  return {
    'junit.xml': jUnit(data), // Transform summary and save it as a JUnit XML...
  };
}

Output for a test that crosses a threshold looks something like this:

xml
<?xml version="1.0"?>
<testsuites tests="1" failures="1">
  <testsuite name="k6 thresholds" tests="1" failures="1">
    <testcase name="http_req_duration - p(95)&lt;200">
      <failure message="failed" />
    </testcase>
  </testsuite>
</testsuites>

Example: Send data to remote server

You can also send the generated reports to a remote server (over any protocol that k6 supports).

JavaScript
import http from 'k6/http';

// use example function to generate data
import k6example from 'https://raw.githubusercontent.com/grafana/k6/master/examples/thresholds_readme_example.js';
export const options = { vus: 5, iterations: 10 };

export function handleSummary(data) {
  console.log('Preparing the end-of-test summary...');

  // Send the results to some remote server or trigger a hook
  const resp = http.post('https://httpbin.test.k6.io/anything', JSON.stringify(data));
  if (resp.status !== 200) {
    console.error('Could not send summary, got status ' + resp.status);
  }
}

Note

The last examples use imported helper functions. These functions might change, so keep an eye on jslib.k6.io for the latest.

Of course, we always welcome PRs to the jslib, too!

Summary data reference

Summary data includes information about your test run time and all built-in and custom metrics (including checks).

All metrics are in a top-level metrics object. In this object, each metric has an object whose key is the name of the metric. For example, if your handleSummary() argument is called data, the function can access the object about the http_req_duration metric at data.metrics.http_req_duration.

Metric schema

The following table describes the schema for the metrics object. The specific values depend on the metric type:

Property              Description
type                  String that gives the metric type
contains              String that describes the data
values                Object with the summary metric values (properties differ for each metric type)
thresholds            Object with info about the thresholds for the metric (if applicable)
thresholds.{name}     One object per threshold, keyed by the threshold expression
thresholds.{name}.ok  Boolean; true if the threshold held, false if it was crossed
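
As a sketch, a handleSummary() could walk this structure to report every crossed threshold. The code below assumes only the schema described above; the report wording is illustrative:

```javascript
export function handleSummary(data) {
  const failed = [];
  // Scan every metric and sub-metric for thresholds that did not hold
  for (const [name, metric] of Object.entries(data.metrics)) {
    for (const [expr, threshold] of Object.entries(metric.thresholds || {})) {
      if (!threshold.ok) failed.push(`${name}: ${expr}`);
    }
  }
  const message = failed.length
    ? `Failed thresholds:\n${failed.join('\n')}\n`
    : 'All thresholds passed\n';
  return { stdout: message };
}
```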

Note

If you change the default trend metrics with the summaryTrendStats option, the keys for the values of the trend will change accordingly.
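
For instance, a configuration like the following (the statistics are chosen here purely for illustration) would make each trend metric's values object contain only these keys:

```javascript
export const options = {
  // Trend metrics in the summary then expose only avg, p(95), and p(99.9)
  summaryTrendStats: ['avg', 'p(95)', 'p(99.9)'],
};
```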

Example summary JSON

To see what the summary data looks like in your specific test run:

  1. Add this to your handleSummary() function:

    return { 'raw-data.json': JSON.stringify(data) };
  2. Inspect the resulting raw-data.json file.

    The following is an abridged example of how it might look:

json
{
  "root_group": {
    "path": "",
    "groups": [
      // Sub-groups of the root group...
    ],
    "checks": [
      {
        "passes": 10,
        "fails": 0,
        "name": "check name",
        "path": "::check name"
      }
      // More checks...
    ],
    "name": ""
  },
  "options": {
    // Some of the global options of the k6 test run,
    // Currently only summaryTimeUnit and summaryTrendStats
  },

  "state": {
    "testRunDurationMs": 30898.965069
    // And information about TTY checkers
  },

  "metrics": {
    // A map with metric and sub-metric names as the keys and objects with
    // details for the metric. These objects contain the following keys:
    //  - type: describes the metric type, e.g. counter, rate, gauge, trend
    //  - contains: what is the type of data, e.g. time, default, data
    //  - values: the specific metric values, depends on the metric type
    //  - thresholds: any thresholds defined for the metric or sub-metric
    //
    "http_reqs": {
      "type": "counter",
      "contains": "default",
      "values": {
        "count": 40,
        "rate": 19.768856959496336
      }
    },
    "vus": {
      "type": "gauge",
      "contains": "default",
      "values": {
        "value": 1,
        "min": 1,
        "max": 5
      }
    },
    "http_req_duration": {
      "type": "trend",
      "contains": "time",
      "values": {
        // actual keys depend on summaryTrendStats

        "avg": 268.31137452500013,
        "max": 846.198634,
        "p(99.99)": 846.1969478817999
        // ...
      },
      "thresholds": {
        "p(95)<500": {
          "ok": false
        }
      }
    },
    "http_req_duration{staticAsset:yes}": {
      // sub-metric from threshold
      "contains": "time",
      "values": {
        // actual keys depend on summaryTrendStats
        "min": 135.092841,
        "avg": 283.67766343333335,
        "max": 846.198634,
        "p(99.99)": 846.1973802197999
        // ...
      },
      "thresholds": {
        "p(99)<250": {
          "ok": false
        }
      },
      "type": "trend"
    }
    // ...
  }
}

Custom output examples

These examples are community contributions. We thank everyone who has shared!