
Multi-Location


Deploy workers across multiple regions for geographic test execution and monitoring coverage.

Prerequisite: Complete the Single-Location setup first. Multi-location builds on top of an existing deployment by adding remote workers.

Architecture


Each server runs its own local K3s cluster with gVisor. Workers consume jobs from Redis via BullMQ and execute each test as an ephemeral Kubernetes Job in a sandboxed supercheck-execution namespace. Remote workers connect to the primary server's Redis, PostgreSQL, and MinIO over the network.
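To observe the sandbox on a worker host, you can point kubectl at the kubeconfig created by the K3s setup script. This is a sketch assuming the kubeconfig path used in the setup steps; adjust if yours differs:

```shell
# Inspect ephemeral test Jobs in the sandboxed namespace on a worker host
export KUBECONFIG=/etc/rancher/k3s/supercheck-worker.kubeconfig
kubectl get jobs -n supercheck-execution   # one Job per in-flight test run
kubectl get pods -n supercheck-execution   # the sandboxed pods backing them
```

Completed Jobs are ephemeral, so an empty list simply means no tests are running at that moment.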


Queue Routing

Queue names are dynamically created from enabled locations in Super Admin.

| WORKER_LOCATION | Queues processed | Use case |
| --- | --- | --- |
| local | All queues (regional + global) | Single-server / development |
| us-east | playwright-global, k6-us-east, k6-global, monitor-us-east | US East worker |
| eu-central | playwright-global, k6-eu-central, k6-global, monitor-eu-central | EU Central worker |

Routing rules: Playwright jobs go to playwright-global (any worker picks them up). K6 jobs target a single resolved location queue plus k6-global. Monitors run only on their configured location queues.
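The routing table above can be sketched as a small shell function. This is illustrative only; the actual queue names are derived server-side from the enabled location codes:

```shell
# Illustrative sketch of the routing rules: queue names are built from the
# location code, and "local" acts as a catch-all for every queue.
queues_for_location() {
  loc="$1"
  if [ "$loc" = "local" ]; then
    echo "all regional and global queues"
  else
    echo "playwright-global k6-$loc k6-global monitor-$loc"
  fi
}

queues_for_location us-east
# -> playwright-global k6-us-east k6-global monitor-us-east
```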


Managing Locations

Locations are managed in Super Admin → Locations.

| Field | Required | Description |
| --- | --- | --- |
| Code | Yes | Unique identifier used in queue names (e.g. us-east). Lowercase, 2–50 chars, letters/digits/hyphens. Reserved codes are blocked. |
| Name | Yes | Display name (e.g. "US East") |
| Region | No | Geographic description (e.g. "Ashburn, Virginia") |
| Flag | No | Emoji flag for the UI (e.g. 🇺🇸) |
| Coordinates | No | Lat/lng for map visualization |
| Default | No | Default location for K6 jobs. Only one at a time. |
| Enabled | No | Active status. Disabling removes the location's queues. |
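The Code constraints can be checked locally with a quick pattern match. This is a sketch of the documented rule only; the server-side validation and the reserved-code list may be stricter:

```shell
# Sketch of the documented Code rule: lowercase letters, digits, and
# hyphens, 2-50 characters. Reserved codes are enforced server-side.
is_valid_location_code() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9-]{2,50}$'
}

is_valid_location_code us-east   && echo "us-east: valid"
is_valid_location_code 'US East' || echo "US East: invalid"
```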

Status Indicators

| Status | Meaning |
| --- | --- |
| Active | Enabled, workers connected (heartbeat detected) |
| Offline | Enabled, no workers currently connected |
| Disabled | Toggled off; no queues created |

Offline locations stay visible in the UI to prevent transient outages from silently removing regions from monitor configs. Monitors degrade gracefully to the online subset. K6 jobs remain queued until a matching worker comes online.
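If you need to confirm that K6 jobs are actually waiting for an offline location, you can inspect the queue depth in Redis on the main server. This assumes BullMQ's default "bull" key prefix; verify the exact key names in your Redis keyspace (e.g. with SCAN) before relying on them:

```shell
# Hypothetical check: number of pending K6 jobs for the us-east location.
# Key format assumes BullMQ's default prefix ("bull:<queue>:wait").
redis-cli -a YOUR_REDIS_PASSWORD llen bull:k6-us-east:wait
```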

Project Location Restrictions

Org admins can restrict which locations a project may use via Settings → Admin → Project Locations.


Setup

Configure Main Server

Update your main server's .env to set a specific region instead of local:

WORKER_LOCATION=eu-central

Restart using the same Compose file your main server already uses:

# Quick Start (HTTP)
docker compose down && \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose up -d

# Production (HTTPS)
docker compose -f docker-compose-secure.yml down && \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose -f docker-compose-secure.yml up -d

Expose Services

Expose the PostgreSQL, Redis, and MinIO ports for remote worker access in your Docker Compose file:

services:
  postgres:
    ports:
      - "5432:5432"
  redis:
    ports:
      - "6379:6379"
  minio:
    ports:
      - "9000:9000"

Security: Exposing database ports requires proper network security. Use VPN (WireGuard/Tailscale), firewall rules, or encrypted tunnels. See Security below.

Deploy Remote Workers

On each remote VPS:

1. Install Docker and the execution sandbox:

Each remote worker requires a Linux server (Ubuntu 22.04+, Debian 12+) with Docker and the execution sandbox installed. macOS, Windows, and WSL2 are not supported.

curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER && newgrp docker

# Execution sandbox (K3s + gVisor)
curl -fsSL -o setup-k3s.sh https://raw.githubusercontent.com/supercheck-io/supercheck/main/deploy/docker/setup-k3s.sh
sudo bash setup-k3s.sh

2. Download worker compose:

mkdir -p ~/supercheck-worker && cd ~/supercheck-worker
curl -o docker-compose-worker.yml https://raw.githubusercontent.com/supercheck-io/supercheck/main/deploy/docker/docker-compose-worker.yml

3. Create .env:

# Worker location (must match an enabled location code in Super Admin)
WORKER_LOCATION=us-east

# Database
DATABASE_URL=postgresql://postgres:YOUR_DB_PASSWORD@MAIN_SERVER_IP:5432/supercheck

# Redis
REDIS_HOST=MAIN_SERVER_IP
REDIS_PORT=6379
REDIS_PASSWORD=YOUR_REDIS_PASSWORD

# S3/MinIO
S3_ENDPOINT=http://MAIN_SERVER_IP:9000
AWS_ACCESS_KEY_ID=YOUR_MINIO_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=YOUR_MINIO_SECRET_KEY

Find credentials in your main server's .env at supercheck/deploy/docker/.env.
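Before starting the worker, it can help to confirm that the three exposed ports are reachable from the VPS. A minimal sketch (the IP below is a placeholder; it uses bash's /dev/tcp, so run it under bash):

```shell
# Pre-flight connectivity check from the worker VPS.
MAIN_SERVER_IP=203.0.113.1   # hypothetical; replace with your main server IP

check_port() {  # check_port HOST PORT -> succeeds if a TCP connect works
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

for port in 5432 6379 9000; do
  if check_port "$MAIN_SERVER_IP" "$port"; then
    echo "port $port reachable"
  else
    echo "port $port NOT reachable - check firewall and compose port mappings"
  fi
done
```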

4. Start:

KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig docker compose -f docker-compose-worker.yml up -d
docker compose -f docker-compose-worker.yml logs -f  # Verify connection

Complete 3-Region Example

| Server | Region | WORKER_LOCATION | Queues processed |
| --- | --- | --- | --- |
| Main server | Europe | eu-central | playwright-global, k6-eu-central, k6-global, monitor-eu-central |
| Remote VPS 1 | US | us-east | playwright-global, k6-us-east, k6-global, monitor-us-east |
| Remote VPS 2 | Asia | asia-pacific | playwright-global, k6-asia-pacific, k6-global, monitor-asia-pacific |

Main Server .env:

WORKER_LOCATION=eu-central

US VPS .env:

# Worker location
WORKER_LOCATION=us-east

# Connection to main server (replace with your values)
DATABASE_URL=postgresql://postgres:YOUR_DB_PASSWORD@YOUR_MAIN_SERVER_IP:5432/supercheck
REDIS_HOST=YOUR_MAIN_SERVER_IP
REDIS_PORT=6379
REDIS_PASSWORD=YOUR_REDIS_PASSWORD
S3_ENDPOINT=http://YOUR_MAIN_SERVER_IP:9000
AWS_ACCESS_KEY_ID=YOUR_MINIO_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=YOUR_MINIO_SECRET_KEY

APAC VPS .env:

# Worker location
WORKER_LOCATION=asia-pacific

# Connection to main server (replace with your values)
DATABASE_URL=postgresql://postgres:YOUR_DB_PASSWORD@YOUR_MAIN_SERVER_IP:5432/supercheck
REDIS_HOST=YOUR_MAIN_SERVER_IP
REDIS_PORT=6379
REDIS_PASSWORD=YOUR_REDIS_PASSWORD
S3_ENDPOINT=http://YOUR_MAIN_SERVER_IP:9000
AWS_ACCESS_KEY_ID=YOUR_MINIO_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=YOUR_MINIO_SECRET_KEY

Scaling Workers

  • WORKER_REPLICAS: Controls the number of worker containers. Set individually on each worker server.
  • RUNNING_CAPACITY: Maximum concurrent test runs in running state. Set on the app server equal to total worker replicas across all servers.
  • QUEUED_CAPACITY: Maximum queued test runs before rejecting submissions. Set on the app server based on your desired queue length.
# Main server: total of 3 workers across all servers (1 + 1 + 1)
RUNNING_CAPACITY=3 QUEUED_CAPACITY=30 \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose -f docker-compose-secure.yml up -d

# Remote worker server: scale replicas on that server only
WORKER_REPLICAS=1 \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose -f docker-compose-worker.yml up -d

| Server | Role | Settings |
| --- | --- | --- |
| Main server (App + Worker) | App + 1 worker | RUNNING_CAPACITY=3 (total), WORKER_REPLICAS=1 |
| Remote VPS 1 | Worker only | WORKER_REPLICAS=1 (no RUNNING_CAPACITY needed) |
| Remote VPS 2 | Worker only | WORKER_REPLICAS=1 (no RUNNING_CAPACITY needed) |

Security

Important: Multi-location deployments require exposing service ports (PostgreSQL, Redis, MinIO) over the network. You are responsible for securing these connections using appropriate measures such as:

  • VPN (WireGuard, Tailscale)
  • Firewall rules (UFW, iptables, Cloud Security Groups)
  • Encrypted tunnels

Detailed network security configuration is beyond the scope of this documentation. Please consult your infrastructure provider's security best practices.


Troubleshooting

| Issue | Diagnostic |
| --- | --- |
| Worker can't reach DB | docker run --rm postgres:18 psql "$DATABASE_URL" -c "SELECT 1" |
| Worker can't reach Redis | docker run --rm redis:8 redis-cli -h MAIN_SERVER_IP -a PASSWORD ping |
| Verify worker location | docker compose logs worker \| grep WORKER_LOCATION |
| K6 jobs stuck queued | Ensure a worker with the matching WORKER_LOCATION is running |
| "No monitor queues available" | Verify workers are running in the monitor's configured locations |
