Prerequisites

First, go to the test suites page to create a test suite that the test will belong to.

Creating a Test

To get started with writing a test on Docket, you should:
  1. Navigate to the tests tab on the left
  2. Click “Create New Test”
  3. Enter a title, select a test suite, and provide an entrypoint URL (the starting point for your test)
  4. Click “Launch Browser”
The URL you provide is where the Docket Browser agent will automatically navigate to begin executing your test.
Learn more: For detailed information about the persistent browser instance and its controls, see the Remote Browser documentation.

Building Your Test

To create tests on Docket, simply interact with the remote browser and Docket will automatically record your actions alongside English annotations. There are four types of steps you can use in a Docket test:
  1. Cached Steps - Precise coordinate-based actions that replay exactly as recorded and self-heal when the UI changes
  2. AI Steps - Natural language instructions that adapt to your UI dynamically
  3. API Steps - Network requests to endpoints for seeding, setup, and teardown of test resources
  4. Modules - Reusable collections of steps that can be shared across multiple tests
Learn more about each step type to understand when to use them.

API Steps

API steps allow you to make network requests to any endpoint during your test execution. They’re incredibly powerful for creating dynamic, data-driven tests that can fully manage test resources throughout their lifecycle. Common use cases:
  • Test Setup: Creating test data, user accounts, or resources before the main test flow
  • Test Seeding: Populating databases or systems with necessary data
  • Dynamic CRUD Operations: Create resources, store their IDs, then update or delete them later in the test
  • Test Teardown: Cleaning up resources after test completion
API steps support all HTTP methods (GET, POST, PUT, DELETE, PATCH, etc.) and pass when they receive a 2xx response status code.
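An API step's pass/fail outcome is just a status-code check. As a rough illustration in plain Python with the requests library (not Docket's implementation; the endpoint is the hypothetical one used later in this guide):

import requests

# A step like this would pass only if the endpoint returns a 2xx status code.
response = requests.post(
    "https://api.example.com/products",
    json={"name": "Test Product"},
)
passed = 200 <= response.status_code < 300
print("step passed" if passed else f"step failed with {response.status_code}")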
Extracting Response Data
API steps can extract data from responses and store it in variables using JSON Path expressions. This enables powerful workflows where you create resources and use their IDs in subsequent steps.
In the example configuration, we extract a resource_id from a nested API response. Notice that the request also uses @bearer_token in the Authorization header; this token was likely extracted from a previous authentication API call, demonstrating how you can chain multiple API steps together.
Response structure:
{
  "response": {
    "data": [
      {"resource_id": 123, "other_data": "..."},
      {"resource_id": 456, "other_data": "..."}
    ]
  }
}
JSON Path: response.data[0].resource_id
Result: Extracts 123 and stores it in the @resource_id variable
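If you want to check what a JSON Path resolves to before putting it in a step, the open-source jsonpath-ng library evaluates the same kind of expression. Docket performs this evaluation for you; the snippet below is only a sketch:

from jsonpath_ng import parse

response_body = {
    "response": {
        "data": [
            {"resource_id": 123, "other_data": "..."},
            {"resource_id": 456, "other_data": "..."},
        ]
    }
}

# The expression from the example above: response.data[0].resource_id
matches = parse("response.data[0].resource_id").find(response_body)
resource_id = matches[0].value  # 123, the value stored in @resource_id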
Dynamic CRUD Example
API steps become extremely powerful when you combine variable extraction with multiple requests:
  1. Create a product via POST and extract its ID to @product_id
  2. Update the product via PUT to https://api.example.com/products/@product_id
  3. Use the product in your UI test flow
  4. Delete the product via DELETE to https://api.example.com/products/@product_id for cleanup
This pattern allows you to create fully isolated, repeatable tests that manage their own test data dynamically without polluting your database or depending on pre-existing resources.
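Sketched outside Docket, the same four-step flow looks roughly like this in Python with the requests library (the https://api.example.com endpoints and the response field names are assumptions for illustration):

import requests

BASE = "https://api.example.com/products"

# 1. Create a product and capture its ID (the equivalent of extracting @product_id)
created = requests.post(BASE, json={"name": "Test Product"}).json()
product_id = created["id"]  # assumed response shape

# 2. Update the product using the extracted ID
requests.put(f"{BASE}/{product_id}", json={"name": "Renamed Product"})

# 3. ...the UI portion of the test would use the product here...

# 4. Delete the product for cleanup
requests.delete(f"{BASE}/{product_id}")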

Modules

Modules are reusable collections of steps that can be created once and inserted into multiple tests. They’re perfect for common workflows like logging in, navigating to a specific page, or performing setup tasks that are repeated across many tests. Key features:
  • Create and manage modules in the “Modules” section of Docket
  • Compose with any step type: Modules can contain AI steps, cached steps, and API steps
  • Cannot nest: Modules cannot reference other modules
  • CRUD interface: The module creation interface is similar to test creation, with a remote browser for recording interactions
Variables in Modules
Modules have specific rules around variable usage:
  • Can reference variables: Modules can use variables (like @user_email or @auth_token) that are defined in the tests where the module is used
  • Can extract from API calls: Modules can create new variables by extracting values from API response bodies within the module
  • Cannot define variables: Modules cannot create their own text, email, or random alphanumeric variables
  • No file uploads: Modules do not support file management
This design keeps modules focused and portable—they rely on the parent test for variable definitions and file resources while still being able to extract and pass data through API calls. Common use cases:
  • Standard navigation sequences
  • Common setup or teardown operations
  • Shared API authentication workflows
  • Repeated multi-step interactions

Lifecycle Hooks

Lifecycle hooks allow you to attach optional modules that run automatically at specific points in your test execution. Configure these in the “Lifecycle Hooks” tab of your test.
Setup Hook
Runs after any account login (if configured) and before the first test instruction. Perfect for creating test resources via API steps and storing their IDs in variables.
Teardown Hook
Runs after the test completes, regardless of whether it passed or failed. Ideal for cleaning up resources created during the test.
The most powerful use of lifecycle hooks is pairing setup and teardown modules with API steps:
  1. Setup module: Create resources via API POST requests, extract their IDs to variables (e.g., @test_user_id, @order_id)
  2. Test: Use these resources in your UI test flow by referencing the variables
  3. Teardown module: Delete the resources via API DELETE requests using the stored IDs
This pattern ensures each test run starts with a clean environment and doesn’t leave behind test data, making your tests fully isolated and repeatable.
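Outside Docket, the pairing behaves like a setup/teardown guard around the test body. A rough Python analogue, with a hypothetical /users endpoint and a stub standing in for the recorded UI steps:

import requests

BASE = "https://api.example.com"

def run_ui_test(user_id):
    """Placeholder for the recorded UI steps that reference the seeded user."""
    ...

# Setup hook: create the resource and remember its ID (akin to @test_user_id)
resp = requests.post(f"{BASE}/users", json={"email": "qa@example.com"})
test_user_id = resp.json()["id"]  # assumed response shape

try:
    run_ui_test(test_user_id)
finally:
    # Teardown hook: runs whether the test passed or failed
    requests.delete(f"{BASE}/users/{test_user_id}")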

Test Variables

Variables can be created in the Variables tab and referenced in test instructions using @variable_name syntax. Test-level variables take precedence over suite-level variables.
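The precedence rule can be pictured as a dictionary merge in which test-level values win; the variable names below are made up for illustration:

suite_vars = {"base_url": "https://staging.example.com", "user_email": "qa@example.com"}
test_vars = {"user_email": "qa+checkout@example.com"}

# Test-level variables override suite-level variables with the same name
resolved = {**suite_vars, **test_vars}
# resolved["user_email"] == "qa+checkout@example.com"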

Text Variables

Static values that remain constant throughout test execution. Ideal for credentials, product names, or configuration values.

Dynamic Variables

Generate pseudorandom strings with configurable length and character sets. Perfect for unique usernames, product IDs, or any data requiring uniqueness.
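Conceptually this is a random string drawn from a chosen alphabet at a chosen length; a minimal Python equivalent (not Docket's generator):

import secrets
import string

def random_value(length=12, alphabet=string.ascii_lowercase + string.digits):
    """Generate a pseudorandom string, e.g. for a unique username."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

username = f"user_{random_value(8)}"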

Email Variables

Choose a plus address configured in your Email Settings, or generate a unique random address per test run. The agent can monitor these addresses for incoming emails—perfect for verification codes, magic links, and OTPs.
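Plus addressing appends a tag after a + in the local part of the address, so every run can use a unique inbox that still delivers to the same mailbox. A sketch of the idea (the base address and domain are placeholders):

import secrets

base_local, domain = "qa", "example.com"

# A unique plus address per run, e.g. qa+3f9c2a1b@example.com
run_tag = secrets.token_hex(4)
email_variable = f"{base_local}+{run_tag}@{domain}"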

Extracted Variables

Created dynamically by capturing data from API responses using JSON Path (from response body) or directly from response headers. Use these to store auth tokens, resource IDs, or session identifiers. See API Steps for examples.

Advanced Test Configuration

Configure additional settings to fine-tune your test execution behavior.

Browser Zoom Level

Control the browser’s zoom level (equivalent to Ctrl +/- in Chromium). Useful for testing responsive design and accessibility. Options: 50%, 75%, 100% (default), 125%, 150%, 200%

Test Retries

Configure automatic retries when a test fails. Docket will re-run the entire test from the beginning. Options: No retries (default), 1-3 retries
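The behavior amounts to re-running the whole test from the start until it passes or the configured attempts are used up, roughly:

def run_with_retries(run_test, retries=0):
    """Re-run the entire test from the beginning, up to `retries` extra attempts."""
    for _ in range(retries + 1):
        if run_test():  # run_test() returns True on a passing run
            return True
    return False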

Test Status

  • Active (default): Test runs in all scenarios
  • Paused: Test is excluded from scheduled runs, CI/CD, and group executions

Two-Factor Authentication (2FA)

Store TOTP secrets for automated 2FA during testing. Enter the secret key from your authenticator app setup, and Docket will automatically generate codes when prompted.
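TOTP codes are derived from the stored secret and the current time. The widely used pyotp library shows the mechanism (Docket generates the codes for you once the secret is saved; the secret below is a placeholder):

import pyotp

# Base32 secret shown during authenticator setup (placeholder value)
secret = "JBSWY3DPEHPK3PXP"

totp = pyotp.TOTP(secret)
code = totp.now()  # six-digit code that rotates every 30 seconds by default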