
Playground

Interactive test editor with AI assistance

The Playground is an interactive development environment for writing, running, and debugging tests. It combines a professional code editor with AI-powered assistance to help you create reliable tests faster.

Playground Editor

Editor Features

The Playground uses Monaco Editor (the same editor that powers VS Code) with features tailored for test development:

  • Syntax Highlighting — Full TypeScript/JavaScript support with Playwright API recognition
  • Auto-completion — IntelliSense for Playwright methods, selectors, and assertions
  • Error Detection — Real-time syntax validation before you run
  • Code Formatting — Automatic formatting for clean, readable code
  • Variable Access — Use getVariable() and getSecret() for project configuration

Running Tests

Click Run to execute your test in an isolated container environment:

  1. Your code runs in a secure Docker container
  2. Results stream back in real-time
  3. Screenshots, traces, and videos are captured automatically
  4. Reports are stored for 24 hours

Execution options:

  • Browser — Choose Chromium, Firefox, or WebKit
  • Timeout — Set maximum execution time (default: 2 minutes)
  • Region — Select execution location for latency testing

You can cancel a running test at any time by clicking the cancel button. The container is immediately stopped and resources are released.
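
For example, here is a minimal browser test you might paste into the editor and run (the URL is a placeholder; swap in your own application's address):

import { test, expect } from '@playwright/test';

test('homepage loads and shows the main heading', async ({ page }) => {
  // Placeholder URL: replace with your application's address
  await page.goto('https://example.com');

  // Assert on the page title and a visible top-level heading
  await expect(page).toHaveTitle(/Example/);
  await expect(page.getByRole('heading', { level: 1 })).toBeVisible();
});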

AI Create

Generate complete, working test scripts from plain English descriptions. AI Create uses GPT-4o-mini to understand your requirements and produce well-structured Playwright code.

Beta Feature: AI-generated browser tests may require refinement. For best results, record your test first using Playwright's codegen, then use AI Create to enhance or modify it. Review all generated code before running.

AI Create Prompt

How to Use AI Create

  1. Click the AI Create button (purple gradient)
  2. Describe what you want to test in natural language
  3. Click Generate to start streaming the code
  4. Review the generated script in the diff viewer
  5. Click Accept to use the code, or Reject to discard

AI Create Result

Writing Effective Prompts

Good prompts are specific and include:

  • The user action you want to test
  • Expected outcomes or assertions
  • Any specific elements or data to interact with

Example prompts:

| Prompt | What AI Generates |
| --- | --- |
| "Test login with email user@example.com and password test123, verify dashboard loads" | Complete login flow with form filling, submission, and URL assertion |
| "Check that the /api/health endpoint returns 200 and includes status: ok" | API test with fetch, status check, and JSON body validation |
| "Verify user can add a product to cart, view cart, and see correct total" | Multi-step e-commerce flow with navigation and assertions |
| "Test that the contact form validates required fields and shows error messages" | Form validation test with empty submission and error checking |

What AI Create Produces

Generated tests include (see the example after this list):

  • Proper imports (@playwright/test)
  • Descriptive test names and comments
  • Robust selectors (prefers data-testid, aria-labels)
  • Appropriate waits and assertions
  • Error handling where needed
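
As an illustration, the first prompt in the table above might yield something along these lines (the URL and test-id selectors are placeholders, not guaranteed output):

import { test, expect } from '@playwright/test';

test('user can log in and reach the dashboard', async ({ page }) => {
  // Navigate to the login page (placeholder URL)
  await page.goto('https://app.example.com/login');

  // Fill the credentials from the prompt and submit
  await page.getByTestId('email').fill('user@example.com');
  await page.getByTestId('password').fill('test123');
  await page.getByTestId('login-submit').click();

  // Verify the dashboard loads
  await expect(page).toHaveURL(/\/dashboard/);
});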

AI Fix

When a test fails, AI Fix analyzes the error and suggests corrections. It examines the failure report, identifies the root cause, and generates a fixed version of your code.

AI Fix Button

How AI Fix Works

  1. Run a test that fails
  2. Click the AI Fix button (appears after failure)
  3. AI analyzes the error report and your code
  4. A diff viewer shows original vs. suggested fix
  5. Review changes, edit if needed, then Accept or Reject

AI Fix Result

What AI Fix Can Repair

AI Fix is effective for code-level issues that can be resolved by modifying your test:

| Issue Type | Example | AI Fix Action |
| --- | --- | --- |
| Selector Issues | Element not found, wrong selector | Updates selector to match current DOM |
| Timing Problems | Element not visible, timeout | Adds appropriate waits or increases timeout |
| Assertion Failures | Expected value mismatch | Corrects assertion or expected value |
| Navigation Errors | Page not loading, wrong URL | Fixes URL or adds navigation waits |
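
As a rough illustration, a combined selector and timing fix might look like this (the URL and selectors are hypothetical, not actual AI Fix output):

import { test, expect } from '@playwright/test';

test('order submission navigates to the orders page', async ({ page }) => {
  await page.goto('https://app.example.com/checkout');

  // Original (failing): brittle class selector, no wait before the assertion
  // await page.click('.btn-submit-old');

  // Suggested fix: role-based selector plus an explicit navigation wait
  await page.getByRole('button', { name: 'Submit order' }).click();
  await page.waitForURL('**/orders');
  await expect(page).toHaveURL(/\/orders/);
});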

When AI Fix Shows Guidance Instead

Some failures require manual investigation rather than code changes. In these cases, AI Fix displays a guidance modal with troubleshooting steps:

  • Network Issues — Server unreachable, DNS failures, HTTP 5xx errors
  • Authentication Failures — Invalid credentials, expired tokens, 401/403 errors
  • Infrastructure Problems — Database down, service unavailable
  • Permission Errors — Access denied, insufficient privileges

AI Fix preserves all your original comments and code structure. It only modifies the specific lines needed to fix the issue.

Templates

Start with pre-built templates for common testing scenarios. Templates provide working code that you can customize for your application.

Code Templates

Available Templates

Browser Test Templates:

| Category | Templates |
| --- | --- |
| Browser Fundamentals | UI Smoke (navigation), Browser selection with tags, Comprehensive Browser Test |
| Auth Flows | Auth flow (login + logout) |
| Responsive & Devices | Mobile/responsive layout, Device emulation (geo, locale, timezone) |

API Test Templates:

| Category | Templates |
| --- | --- |
| API Health | Health/JSON contract |
| API CRUD | Create + read + cleanup |
| Authentication | Authenticated request |
| API Coverage | Comprehensive API Test |
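
For a rough idea of the shape of an API health check, here is a comparable sketch using Playwright's request fixture (the endpoint and expected fields are assumptions; the actual template may differ):

import { test, expect } from '@playwright/test';

test('health endpoint returns 200 and a valid JSON contract', async ({ request }) => {
  // Placeholder endpoint: swap in your own base URL or a BASE_URL variable
  const response = await request.get('https://api.example.com/api/health');

  expect(response.status()).toBe(200);

  // Validate the shape of the JSON body
  const body = await response.json();
  expect(body).toMatchObject({ status: 'ok' });
});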

Database Test Templates:

| Category | Templates |
| --- | --- |
| Database Checks | SELECT health check, Safe UPDATE with RETURNING, Database Discovery & Query |
| Transactions | Insert with rollback |

Custom Test Templates:

| Category | Templates |
| --- | --- |
| Cross-layer | Custom test fixtures, Form with API stubbing, Device emulation showcase, API + UI end-to-end, GitHub API + Browser Integration |

Performance Test Templates (k6):

| Category | Templates |
| --- | --- |
| Smoke & Health | Smoke Check (API), Basic Performance Test |
| Load Profiles | Ramping Load, Advanced Thresholds |
| Resilience | Spike + Recovery, Stress Test, Breakpoint Test |
| Reliability | Soak/Endurance |
| API Coverage | API Checklist (GET + POST + auth), Checks & Assertions |

Using Templates

  1. Click Templates in the editor toolbar
  2. Browse categories or search for specific scenarios
  3. Click a template to preview the code
  4. Click Use Template to insert into editor
  5. Customize URLs, selectors, and test data for your app

Test Reports

Every test run generates a detailed report with artifacts for debugging:

Playground Report

Report Contents

| Artifact | Description | Use For |
| --- | --- | --- |
| Screenshots | Captured at each step and on failure | Visual verification, debugging UI issues |
| Trace | Interactive step-by-step replay | Understanding test flow, timing issues |
| Video | Full browser recording | Seeing exactly what happened |
| Console Logs | Browser console output | JavaScript errors, debug messages |
| Network | All HTTP requests and responses | API debugging, performance analysis |

Viewing Traces

The Playwright Trace Viewer lets you:

  • Step through each action in your test
  • See the page state before and after each step
  • Inspect DOM elements and their properties
  • View network requests with timing
  • Debug timing and selector issues
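
Grouping related actions with test.step can make the trace easier to scan, since each step appears as a named node in the viewer. A brief sketch (URL and selectors are placeholders):

import { test, expect } from '@playwright/test';

test('checkout flow', async ({ page }) => {
  await test.step('open product page', async () => {
    await page.goto('https://shop.example.com/products/1');
  });

  await test.step('add product to cart', async () => {
    await page.getByRole('button', { name: 'Add to cart' }).click();
    await expect(page.getByTestId('cart-count')).toHaveText('1');
  });
});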

Using Variables

Access project-level configuration and secrets in your tests:

/**
 * Variables and secrets for test configuration.
 * Variables are plain text, secrets are encrypted.
 * @see /docs/automate/variables for setup instructions
 */

// Regular variables (logged normally)
const baseUrl = getVariable('BASE_URL');
const timeout = getVariable('TIMEOUT');

// Secrets require .toString() to access the value
const apiKey = getSecret('API_KEY').toString();
const password = getSecret('DB_PASSWORD').toString();

// Using in Playwright
await page.goto(baseUrl);
await page.fill('#password', password);

Variables are resolved server-side before execution. Secrets are encrypted and automatically redacted from logs and screenshots.

See Variables for setup instructions.

Best Practices

Writing Reliable Tests

  • Use stable selectors — Prefer data-testid, aria-label, or semantic selectors over CSS classes
  • Add explicit waits — Use waitForSelector or waitForURL instead of fixed delays (see the sketch after this list)
  • Keep tests focused — Test one user flow per test for easier debugging
  • Use descriptive names — Name tests clearly so failures are easy to understand
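
A small example of the first two points, using a stable test id and an explicit wait in place of a generated class name and a fixed delay (the URL and selectors are placeholders):

import { test, expect } from '@playwright/test';

test('checkout button opens the checkout page', async ({ page }) => {
  await page.goto('https://shop.example.com/cart');

  // Brittle: generated class name plus a fixed delay
  // await page.click('.css-1x2y3z');
  // await page.waitForTimeout(5000);

  // More reliable: stable test id plus an explicit wait for the navigation
  await page.getByTestId('checkout-button').click();
  await page.waitForURL('**/checkout');
  await expect(page.getByRole('heading', { name: 'Checkout' })).toBeVisible();
});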

Using AI Effectively

  • Start with AI Create for new tests, then refine manually
  • Use AI Fix for quick selector and timing fixes
  • Review AI suggestions before accepting—AI is helpful but not perfect
  • Provide context in prompts for better AI-generated code

Debugging Failures

  • Check screenshots first to see the visual state
  • Use trace viewer to step through the test
  • Review network tab for API issues
  • Check console logs for JavaScript errors

Learn More

For deeper understanding of the testing frameworks used in Supercheck: