---
title: "k6 browser check | Grafana Cloud documentation"
description: "k6 browser checks execute k6 tests in Grafana Synthetic Monitoring using a headless browser to monitor website performance and user flows."
---

# k6 browser check

k6 browser checks run a [k6 script](/docs/k6/latest/) using the [browser module](/docs/k6/latest/using-k6-browser/) to control a headless browser. Browser checks monitor web performance and can be used to continuously monitor critical user journeys and workflows.

With k6 browser checks, you can:

- Perform interactions on a web page like navigation, tap/click, filling forms, and scrolling.
- Leverage a Playwright-inspired API for authoring tests.
- Make requests at the protocol level and in the browser at the same time, to simulate user behavior and rich interactive scenarios.
- Set the window size to test mobile, tablet, and desktop views.
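
For example, a minimal browser check exercising some of these capabilities might look like the following sketch. The target URL and the viewport dimensions are placeholders, not part of any real application:

```javascript
import { browser } from 'k6/browser';
import { check } from 'k6';

export const options = {
  scenarios: {
    ui: {
      options: {
        browser: {
          type: 'chromium',
        },
      },
    },
  },
};

export default async function () {
  const page = await browser.newPage();
  try {
    // Emulate a mobile viewport (hypothetical dimensions).
    await page.setViewportSize({ width: 390, height: 844 });
    // Navigate to the page under test (placeholder URL).
    await page.goto('https://example.com/');
    // Assert on the rendered page title.
    check(await page.title(), {
      'page has a title': (title) => title.length > 0,
    });
  } finally {
    await page.close();
  }
}
```

This script requires the k6 runtime with the browser module and doesn't run under a plain JavaScript engine.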

## How it works

A k6 check runs one iteration of a k6 test at short, frequent intervals to proactively monitor applications and services.

Synthetic Monitoring results are stored as Prometheus metrics and Loki logs, which allow you to [set up Grafana alerts](/docs/grafana-cloud/testing/synthetic-monitoring/configure-alerts/configure-default-alerts/) for custom notifications and incident management.

The k6 scripts in Synthetic Monitoring are [broadly compatible](#compatibility) with other k6 products, like the k6 CLI and Grafana Cloud Performance Testing. This allows you to reuse the same k6 script for [various use cases](/docs/k6/latest/#use-cases), enhancing testing productivity and coverage.

## Create a k6 browser check

You can write and execute a k6 script from the code editor in the Synthetics UI.

1. On the main menu, select **Testing & synthetics** and then **Synthetics**.
2. Click **Create new check** or **Add new check**.
3. Choose **k6 browser** as your check type.
4. Set the value for the required [options](#options).
5. Edit or copy your k6 script under **Script**.
6. Schedule or run the check.
   
   1. Click **Save** to schedule the check.
   2. Click **Test** to run the k6 script.

For development and debugging, you can write and execute the k6 script from your local machine using the [k6 CLI](/docs/k6/latest/get-started/running-k6/) and your IDE. After completing the k6 script, create a new browser check or update the script of an existing one.

The Synthetics script editor includes a few scripts with basic examples. To get started implementing your first k6 browser tests, refer to the [Create a k6 browser check](/docs/grafana-cloud/testing/synthetic-monitoring/get-started/create-a-k6-browser-check/) tutorial.

## Options

The following options are common to all check types:

| Option          | Description                                                                                                                                                                             |
|-----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Enabled         | Whether the check is enabled or not.                                                                                                                                                    |
| Job name        | The name of the check. Check metrics include a `job` label with the value of this option.                                                                                               |
| Target          | Target of the check request. Check metrics include an `instance` label with the value of this option.                                                                                   |
| Probe locations | The locations where the check should run from. Check metrics include a `probe` label with the value of the probe location running the check.                                            |
| Frequency       | How often the check runs, in seconds. The value can range from 60 to 3600 seconds. Only the `sm_check_info` metric includes the `frequency` label.                                      |
| Timeout         | Maximum execution time for the check. The value can range from 1 to 180 seconds.                                                                                                        |
| Custom labels   | (Optional) Custom labels applied to check metrics. Refer to [Custom labels](/docs/grafana-cloud/testing/synthetic-monitoring/analyze-results/custom-labels/) for querying instructions. |

Additionally, k6 scripted checks have the following options:

| Option | Description                                                                                              |
|--------|----------------------------------------------------------------------------------------------------------|
| Script | The k6 script to execute periodically. For further details, refer to [k6 compatibility](#compatibility). |

### Required k6 imports and options

To run a browser check, the script must:

- Import the browser module.
- Set the browser type to `chromium` in the k6 `options` object.

```javascript
import { browser } from 'k6/browser';

export const options = {
  scenarios: {
    ui: {
      options: {
        browser: {
          type: 'chromium',
        },
      },
    },
  },
};
```

### Supported k6 options

[k6 options](/docs/k6/latest/using-k6/k6-options/) can be used to configure a wide range of features when running the k6 script. In Synthetic Monitoring, k6 options can only be set in the script `options` object:

```javascript
export const options = {
  tags: { foo: 'bar' },
  userAgent: 'MyK6UserAgentString/1.0',
};
```

Several k6 options don’t apply in the context of Synthetic Monitoring. The following k6 options are supported:

| k6 Option                                                                                        | Description                                                                                           |
|--------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|
| [`batch`](/docs/k6/next/using-k6/k6-options/reference/#batch)                                    | Max number of simultaneous connections of an `http.batch()` call                                      |
| [`batch-per-host`](/docs/k6/next/using-k6/k6-options/reference/#batch-per-host)                  | Max number of simultaneous connections of an `http.batch()` call for a host                           |
| [`discardResponseBodies`](/docs/k6/next/using-k6/k6-options/reference/#discard-response-bodies)  | Specify whether response bodies should be discarded                                                   |
| [`httpDebug`](/docs/k6/next/using-k6/k6-options/reference/#http-debug)                           | Log all HTTP requests and responses                                                                   |
| [`insecureSkipTLSVerify`](/docs/k6/next/using-k6/k6-options/reference/#insecure-skip-tls-verify) | A boolean specifying whether k6 should ignore TLS verifications for connections established from code |
| [`maxRedirects`](/docs/k6/next/using-k6/k6-options/reference/#max-redirects)                     | The maximum number of HTTP redirects that k6 will follow                                              |
| [`noConnectionReuse`](/docs/k6/next/using-k6/k6-options/reference/#no-connection-reuse)          | A boolean specifying whether k6 should disable keep-alive connections                                 |
| [`setupTimeout`](/docs/k6/next/using-k6/k6-options/reference/#setup-timeout)                     | Specify how long the `setup()` function is allowed to run before it’s terminated                      |
| [`systemTags`](/docs/k6/latest/using-k6/k6-options/reference/#system-tags)                       | Specify which system tags will be in the collected metrics                                            |
| [`tags`](/docs/k6/latest/using-k6/k6-options/reference/#tags)                                    | Specify tags that should be set test-wide across all metrics                                          |
| [`teardownTimeout`](/docs/k6/latest/using-k6/k6-options/reference/#teardown-timeout)             | Specify how long the `teardown()` function is allowed to run before it’s terminated                   |
| [`throw`](/docs/k6/latest/using-k6/k6-options/reference/#throw)                                  | A boolean specifying whether to throw errors on failed HTTP requests                                  |
| [`tlsAuth`](/docs/k6/latest/using-k6/k6-options/reference/#tls-auth)                             | A list of TLS client certificate configuration objects                                                |
| [`tlsCipherSuites`](/docs/k6/latest/using-k6/k6-options/reference/#tls-cipher-suites)            | A list of cipher suites allowed to be used in SSL/TLS interactions with a server                      |
| [`tlsVersion`](/docs/k6/latest/using-k6/k6-options/reference/#tls-version)                       | String or object representing the only SSL/TLS version allowed                                        |
| [`userAgent`](/docs/k6/latest/using-k6/k6-options/reference/#user-agent)                         | A string specifying the `User-Agent` header when sending HTTP requests                                |

## Metrics

Synthetic checks store their results as Prometheus metrics. The following metrics are common to all check types:

| Metric                       | Description                                                         |
|------------------------------|---------------------------------------------------------------------|
| `probe_all_duration_seconds` | Returns how long the probe took to complete in seconds (histogram). |
| `probe_duration_seconds`     | Returns how long the probe took to complete in seconds.             |
| `probe_all_success`          | Displays whether or not the probe was a success (summary).          |
| `probe_success`              | Displays whether or not the probe was a success.                    |
| `sm_check_info`              | Provides information about a single check configuration.            |

### k6 metrics

Browser checks also collect the metrics produced by k6 and store them as Prometheus metrics.

k6 has two types of metrics:

- [Built-in metrics](/docs/k6/latest/using-k6/metrics/reference/): These are metrics collected by every k6 test, such as data received and total number of requests. In addition, browser checks collect [Web Vitals metrics](/docs/k6/next/using-k6-browser/metrics/).
- [Custom metrics](/docs/k6/latest/using-k6/metrics/create-custom-metrics/): These are metrics that you can create in your test script to measure anything specific to your system or business logic. In Prometheus, they’re renamed to `probe_K6_METRIC_NAME` and mapped to [Prometheus gauges](https://prometheus.io/docs/concepts/metric_types/#gauge).
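
As a sketch of how a custom metric surfaces in Synthetic Monitoring, the following k6 script defines a hypothetical `Trend` metric. Per the mapping above, it would be stored as the Prometheus gauge `probe_order_total_time` (the metric name and recorded value are illustrative only):

```javascript
import { Trend } from 'k6/metrics';

// Hypothetical custom metric; the name and values are illustrative.
const orderTotalTime = new Trend('order_total_time');

export default function () {
  // Record one measurement for this iteration (milliseconds, for example).
  orderTotalTime.add(125);
}
```

This script requires the k6 runtime and doesn't run under a plain JavaScript engine.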

k6 built-in metrics for browser checks are transformed in Synthetic Monitoring as follows:

| Metric                             | Description                                                                                                                                                                                                                                                                                   |
|------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `probe_browser_data_received`      | The total data received for all requests made in executing the script. It corresponds to the [`data_received` metric](/docs/k6/latest/using-k6/metrics/reference/).                                                                                                                           |
| `probe_browser_data_sent`          | The total data sent for all requests made in executing the script. It corresponds to the [`data_sent` metric](/docs/k6/latest/using-k6/metrics/reference/).                                                                                                                                   |
| `probe_browser_http_req_duration`  | Duration of HTTP request by phase, in seconds. This is the combined total of the `http_req_blocked`, `http_req_connecting`, `http_req_receiving`, `http_req_sending`, `http_req_tls_handshaking`, and `http_req_waiting` [k6 HTTP metrics](/docs/k6/latest/using-k6/metrics/reference/#http). |
| `probe_browser_http_req_failed`    | The number of failed HTTP requests while executing the script.                                                                                                                                                                                                                                |
| `probe_browser_web_vital_cls`      | Measures the visual stability on a webpage by quantifying the amount of unexpected layout shift of visible page content. Refer to [Cumulative Layout Shift](https://web.dev/cls/) for more information.                                                                                       |
| `probe_browser_web_vital_fcp`      | Measures the time it takes for the browser to render the first DOM element on the page, whether that’s a text, image or header. Refer to [First Contentful Paint](https://web.dev/fcp/) for more information.                                                                                 |
| `probe_browser_web_vital_fid`      | Measures the responsiveness of a web page by quantifying the delay between a user’s first interaction, such as clicking a button, and the browser’s response. Refer to [First Input Delay](https://web.dev/fid/) for more information.                                                        |
| `probe_browser_web_vital_inp`      | An experimental metric that measures a page’s responsiveness. Refer to [Interaction to Next Paint](https://web.dev/inp/) for more information.                                                                                                                                                |
| `probe_browser_web_vital_lcp`      | Measures the time it takes for the largest content element on a page to become visible. Refer to [Largest Contentful Paint](https://web.dev/lcp/) for more information.                                                                                                                       |
| `probe_browser_web_vital_ttfb`     | Measures the time it takes between the browser request and the start of the response from a server. Refer to [Time to First Byte](https://web.dev/ttfb/) for more information.                                                                                                                |
| `probe_check_success_rate`         | The pass/fail rate of assertions (calls to [`check()`](/docs/k6/next/javascript-api/k6/check/)). The `check` label identifies the assertion by name. The value can range between 0 (all failed) and 1 (all passed).                                                                           |
| `probe_checks_total`               | The number of passing/failing assertions (calls to [`check()`](/docs/k6/next/javascript-api/k6/check/)). The `check` label identifies the assertion by name. The `result` label can have a value of `pass` or `fail`.                                                                         |
| `probe_iteration_duration_seconds` | Returns the amount of time it took the script to execute, in seconds. It corresponds to the [`iteration_duration` metric](/docs/k6/latest/using-k6/metrics/reference/).                                                                                                                       |
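
The two check metrics above are driven by named `check()` calls in the script. A minimal sketch, using a placeholder URL, where the check name `status is 200` would become the value of the `check` label:

```javascript
import http from 'k6/http';
import { check } from 'k6';

export default function () {
  // Placeholder URL; replace with the page or endpoint under test.
  const res = http.get('https://example.com/');
  // The name of each assertion becomes the `check` label value.
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
}
```

This script requires the k6 runtime and doesn't run under a plain JavaScript engine.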

### Visualization

You can query all the produced check metrics with [Grafana Explore](/docs/grafana/latest/explore/get-started-with-explore/), where you can create custom panels and add them to your dashboards.

Additionally, each check includes a dashboard displaying the results of the most relevant metrics. To learn more about the various visualization options, refer to [Analyze results](/docs/grafana-cloud/testing/synthetic-monitoring/analyze-results/).

## Compatibility

Grafana Cloud Synthetic Monitoring has the following limitations for k6 scripts, compared to running them locally or using Grafana Cloud k6:

### Workload options

Each k6 check runs only one iteration of the test. The system ignores options such as `vus`, `duration`, `stages`, and `iterations`.
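
For example, in a script containing an options block like the following configuration fragment, Synthetic Monitoring would run a single iteration and ignore the workload settings shown (the values are illustrative):

```javascript
export const options = {
  // Ignored by Synthetic Monitoring; each check run is one iteration.
  vus: 10,
  duration: '5m',
};
```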

### Timeout

k6 checks have a maximum execution time. Configure this limit with the [`Timeout` option](#options) in the UI.

### k6 CLI

You can’t use the k6 CLI to run or upload k6 scripts to Grafana Cloud Synthetic Monitoring.

### Other k6 APIs

[Thresholds](/docs/k6/latest/using-k6/thresholds/) aren’t supported.

### Labels

[Custom labels](/docs/grafana-cloud/testing/synthetic-monitoring/analyze-results/custom-labels/) that you define in the UI aren’t included in k6 metrics. Non-k6 metrics don’t include labels that you define in the [k6 `tags` option](/docs/k6/latest/using-k6/k6-options/reference/#tags).

### Load local files

For security reasons, probes can’t load local files. k6 APIs such as [`open`](/docs/k6/latest/javascript-api/init-context/open/), [`fs`](/docs/k6/latest/javascript-api/k6-experimental/fs/), and [`grpc.load`](/docs/k6/latest/javascript-api/k6-net-grpc/client/client-load/) (non-reflection) aren’t supported.

To import local modules or libraries, use a bundler to build your k6 script with its dependencies locally. However, note that bundled scripts have [additional limitations](#bundled-or-minified-scripts) for browser checks.

## Limitations

### Bundled or minified scripts

The Synthetic Monitoring UI validates scripts by checking for required imports and options. This validation fails with webpack-compiled, bundled, or minified scripts because the code structure is obfuscated.

To create browser checks with bundled scripts, use [Terraform](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/synthetic_monitoring_check) or the [Synthetic Monitoring API](/docs/grafana-cloud/testing/synthetic-monitoring/api-reference/).

### Public probe memory

Browsers on public probes are allocated 1 GB of RAM when running tests, which may be insufficient for large web pages. If a page is too large to load, the test execution fails and logs the message: `Unexpected DevTools server error: Target has crashed`.
