Multi-Location
Deploy workers across multiple regions for geographic test execution and monitoring coverage.
Prerequisite: Complete the Single-Location setup first. Multi-location builds on top of an existing deployment by adding remote workers.
Architecture
Each server runs its own local K3s cluster with gVisor. Workers consume jobs from Redis via BullMQ and execute each test as an ephemeral Kubernetes Job in a sandboxed supercheck-execution namespace. Remote workers connect to the primary server's Redis, PostgreSQL, and MinIO over the network.
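On any worker host you can watch these ephemeral Jobs being created and torn down; a read-only check using the kubeconfig path that the setup commands in this guide already use (K3s installs a kubectl shim on the host):

```
# List ephemeral test Jobs in the sandboxed execution namespace (read-only)
sudo kubectl --kubeconfig /etc/rancher/k3s/supercheck-worker.kubeconfig \
  get jobs -n supercheck-execution
```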
Queue Routing
Queue names are dynamically created from enabled locations in Super Admin.
| WORKER_LOCATION | Queues Processed | Use Case |
|---|---|---|
| local | All queues (regional + global) | Single-server / development |
| us-east | playwright-global, k6-us-east, k6-global, monitor-us-east | US East worker |
| eu-central | playwright-global, k6-eu-central, k6-global, monitor-eu-central | EU Central worker |
Routing rules: Playwright jobs go to playwright-global (any worker picks them up). K6 jobs target a single resolved location queue plus k6-global. Monitors run only on their configured location queues.
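The naming convention behind the table can be sketched in a few lines of shell: a worker derives its queue set from its WORKER_LOCATION (illustration only; the real derivation happens inside the worker):

```shell
# Sketch: a non-"local" worker consumes the two global queues plus the
# k6/monitor queues for its own location code.
WORKER_LOCATION=us-east
QUEUES="playwright-global k6-global k6-${WORKER_LOCATION} monitor-${WORKER_LOCATION}"
echo "$QUEUES"
```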
Managing Locations
Locations are managed in Super Admin → Locations.
| Field | Required | Description |
|---|---|---|
| Code | Yes | Unique identifier for queue names (e.g. us-east). Lowercase, 2–50 chars, letters/digits/hyphens. Reserved codes blocked. |
| Name | Yes | Display name (e.g. "US East") |
| Region | No | Geographic description (e.g. "Ashburn, Virginia") |
| Flag | No | Emoji flag for UI (e.g. 🇺🇸) |
| Coordinates | No | Lat/lng for map visualization |
| Default | No | Default location for K6 jobs. Only one at a time. |
| Enabled | No | Active status. Disabling removes queues. |
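The Code constraints can be checked locally before creating a location; a minimal sketch of the documented rules (the reserved-code list is enforced server-side and not reproduced here):

```shell
# Sketch: validate a candidate location code against the documented rules
# (lowercase letters/digits/hyphens, 2-50 chars). Reserved-code checks are
# server-side and not covered by this function.
is_valid_code() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9-]{2,50}$'
}
is_valid_code "us-east" && echo "us-east: ok"
is_valid_code "US East" || echo "US East: rejected"
```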
Status Indicators
| Status | Meaning |
|---|---|
| Active | Enabled, workers connected (heartbeat detected) |
| Offline | Enabled, no workers currently connected |
| Disabled | Toggled off; no queues created |
Offline locations stay visible in the UI to prevent transient outages from silently removing regions from monitor configs. Monitors degrade gracefully to the online subset. K6 jobs remain queued until a matching worker comes online.
Project Location Restrictions
Org admins can restrict which locations a project may use via Settings → Admin → Project Locations.
Setup
Configure Main Server
Update your main server's .env to set a specific region instead of local:
WORKER_LOCATION=eu-central
Restart using the same Compose file your main server already uses:
# Quick Start (HTTP)
docker compose down && \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose up -d
# Production (HTTPS)
docker compose -f docker-compose-secure.yml down && \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose -f docker-compose-secure.yml up -d
Expose Services
Expose database ports for remote worker access in your Docker Compose file:
services:
  postgres:
    ports:
      - "5432:5432"
  redis:
    ports:
      - "6379:6379"
  minio:
    ports:
      - "9000:9000"
Security: Exposing database ports requires proper network security. Use a VPN (WireGuard/Tailscale), firewall rules, or encrypted tunnels. See Security below.
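If the main server has a private VPN interface (for example WireGuard), a safer variant is to publish these ports on that interface only, so they are never reachable from the public internet. A sketch, not a drop-in file; 10.0.0.1 is a placeholder VPN address:

```yaml
# Sketch: bind published ports to a private VPN interface only
# (10.0.0.1 is a placeholder; use your main server's VPN address)
services:
  postgres:
    ports:
      - "10.0.0.1:5432:5432"
  redis:
    ports:
      - "10.0.0.1:6379:6379"
  minio:
    ports:
      - "10.0.0.1:9000:9000"
```

Workers would then connect over the VPN address instead of the public IP.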
Deploy Remote Workers
Each remote worker requires a Linux server (Ubuntu 22.04+, Debian 12+) with Docker and the execution sandbox installed. macOS, Windows, and WSL2 are not supported. On each remote VPS:
1. Install Docker and the execution sandbox:
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER && newgrp docker
# Execution sandbox (K3s + gVisor)
curl -fsSL -o setup-k3s.sh https://raw.githubusercontent.com/supercheck-io/supercheck/main/deploy/docker/setup-k3s.sh
sudo bash setup-k3s.sh
2. Download worker compose:
mkdir -p ~/supercheck-worker && cd ~/supercheck-worker
curl -o docker-compose-worker.yml https://raw.githubusercontent.com/supercheck-io/supercheck/main/deploy/docker/docker-compose-worker.yml
3. Create .env:
# Worker location (must match an enabled location code in Super Admin)
WORKER_LOCATION=us-east
# Database
DATABASE_URL=postgresql://postgres:YOUR_DB_PASSWORD@MAIN_SERVER_IP:5432/supercheck
# Redis
REDIS_HOST=MAIN_SERVER_IP
REDIS_PORT=6379
REDIS_PASSWORD=YOUR_REDIS_PASSWORD
# S3/MinIO
S3_ENDPOINT=http://MAIN_SERVER_IP:9000
AWS_ACCESS_KEY_ID=YOUR_MINIO_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=YOUR_MINIO_SECRET_KEY
Find credentials in your main server's .env at supercheck/deploy/docker/.env.
4. Start:
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig docker compose -f docker-compose-worker.yml up -d
docker compose -f docker-compose-worker.yml logs -f # Verify connection
Complete 3-Region Example
| Server | Region | WORKER_LOCATION | Queues Processed |
|---|---|---|---|
| Main Server | Europe | eu-central | playwright-global, k6-eu-central, k6-global, monitor-eu-central |
| Remote VPS 1 | US | us-east | playwright-global, k6-us-east, k6-global, monitor-us-east |
| Remote VPS 2 | Asia | asia-pacific | playwright-global, k6-asia-pacific, k6-global, monitor-asia-pacific |
Main Server .env:
WORKER_LOCATION=eu-central
US VPS .env:
# Worker location
WORKER_LOCATION=us-east
# Connection to main server (replace with your values)
DATABASE_URL=postgresql://postgres:YOUR_DB_PASSWORD@YOUR_MAIN_SERVER_IP:5432/supercheck
REDIS_HOST=YOUR_MAIN_SERVER_IP
REDIS_PORT=6379
REDIS_PASSWORD=YOUR_REDIS_PASSWORD
S3_ENDPOINT=http://YOUR_MAIN_SERVER_IP:9000
AWS_ACCESS_KEY_ID=YOUR_MINIO_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=YOUR_MINIO_SECRET_KEY
APAC VPS .env:
# Worker location
WORKER_LOCATION=asia-pacific
# Connection to main server (replace with your values)
DATABASE_URL=postgresql://postgres:YOUR_DB_PASSWORD@YOUR_MAIN_SERVER_IP:5432/supercheck
REDIS_HOST=YOUR_MAIN_SERVER_IP
REDIS_PORT=6379
REDIS_PASSWORD=YOUR_REDIS_PASSWORD
S3_ENDPOINT=http://YOUR_MAIN_SERVER_IP:9000
AWS_ACCESS_KEY_ID=YOUR_MINIO_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=YOUR_MINIO_SECRET_KEY
Scaling Workers
- WORKER_REPLICAS: Controls the number of worker containers. Set individually on each worker server.
- RUNNING_CAPACITY: Maximum concurrent test runs in the running state. Set on the app server equal to total worker replicas across all servers.
- QUEUED_CAPACITY: Maximum queued test runs before rejecting submissions. Set on the app server based on your desired queue length.
# Main server: total of 3 workers across all servers (1 + 1 + 1)
RUNNING_CAPACITY=3 QUEUED_CAPACITY=30 \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose -f docker-compose-secure.yml up -d
# Remote worker server: scale replicas on that server only
WORKER_REPLICAS=1 \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose -f docker-compose-worker.yml up -d
| Server | Role | Setting |
|---|---|---|
| Main server (App + Worker) | App + 1 worker | RUNNING_CAPACITY=3 (total), WORKER_REPLICAS=1 |
| Remote VPS 1 | Worker only | WORKER_REPLICAS=1 (no RUNNING_CAPACITY needed) |
| Remote VPS 2 | Worker only | WORKER_REPLICAS=1 (no RUNNING_CAPACITY needed) |
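The relationship between the two settings is a simple invariant: RUNNING_CAPACITY on the app server equals the sum of WORKER_REPLICAS across all servers. For the table above:

```shell
# RUNNING_CAPACITY = sum of WORKER_REPLICAS across every server (1 + 1 + 1)
MAIN_REPLICAS=1   # main server
VPS1_REPLICAS=1   # remote VPS 1
VPS2_REPLICAS=1   # remote VPS 2
RUNNING_CAPACITY=$((MAIN_REPLICAS + VPS1_REPLICAS + VPS2_REPLICAS))
echo "RUNNING_CAPACITY=$RUNNING_CAPACITY"
```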
Security
Important: Multi-location deployments require exposing database ports (PostgreSQL, Redis, MinIO) over the network. You are responsible for securing these connections using appropriate measures such as:
- VPN (WireGuard, Tailscale)
- Firewall rules (UFW, iptables, Cloud Security Groups)
- Encrypted tunnels
Detailed network security configuration is beyond the scope of this documentation. Please consult your infrastructure provider's security best practices.
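As one illustrative starting point (firewall configuration sketch, not a complete hardening guide), UFW rules on the main server could admit the exposed ports only from known worker addresses; 203.0.113.10 and 203.0.113.20 are placeholder worker IPs:

```
# Sketch: allow the exposed ports only from known worker IPs (placeholders),
# then deny them for everyone else. UFW evaluates rules in order, so the
# specific allow rules take precedence over the broad deny rules.
for worker_ip in 203.0.113.10 203.0.113.20; do
  for port in 5432 6379 9000; do
    sudo ufw allow from "$worker_ip" to any port "$port" proto tcp
  done
done
sudo ufw deny 5432/tcp
sudo ufw deny 6379/tcp
sudo ufw deny 9000/tcp
```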
Troubleshooting
| Issue | Diagnostic Command |
|---|---|
| Worker can't reach DB | docker run --rm postgres:18 psql "$DATABASE_URL" -c "SELECT 1" |
| Worker can't reach Redis | docker run --rm redis:8 redis-cli -h MAIN_SERVER_IP -a PASSWORD ping |
| Verify worker location | docker compose logs worker \| grep WORKER_LOCATION |
| K6 jobs stuck queued | Ensure a worker has the matching WORKER_LOCATION running |
| "No monitor queues available" | Verify workers are running in the monitor's configured locations |
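For the first two rows, a useful preliminary check from the worker host is whether the ports are reachable at all; a minimal sketch using bash's built-in /dev/tcp (MAIN_SERVER_IP is a placeholder, defaulting to localhost here):

```shell
# Quick TCP reachability check from a worker to the main server's exposed ports
check_port() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}
for port in 5432 6379 9000; do
  check_port "${MAIN_SERVER_IP:-127.0.0.1}" "$port"
done
```

If a port reports closed, fix the network path (VPN, firewall, Compose port mapping) before debugging credentials.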