# Single-Location
Deploy Supercheck on a single server with Docker Compose.
## Architecture
All services run on a single Linux server managed by Docker Compose. Workers consume jobs from Redis via BullMQ and execute each test as an ephemeral Kubernetes Job in a local K3s cluster, sandboxed with gVisor for kernel-level isolation. Scale by increasing WORKER_REPLICAS or expanding to multiple regions.
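Once the stack is up, you can watch this execution model directly. A quick observational sketch (k3s bundles its own kubectl, and the namespace Supercheck schedules Jobs into is not assumed here, hence -A):

```bash
# Watch test runs appear as short-lived Kubernetes Jobs in the local
# K3s cluster; -A covers all namespaces.
sudo k3s kubectl get jobs -A --watch
```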
## Docker Compose

The steps below come in two variants: Quick Start (HTTP) for a plain single-server setup, and Production (HTTPS) with DNS and TLS.

### Quick Start (HTTP)

#### Install Docker
A Linux server is required (Ubuntu 22.04+, Debian 12+). Supercheck uses K3s and gVisor for sandboxed test execution, which require the Linux kernel. macOS, Windows, and WSL2 are not supported.
```bash
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
```
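Before continuing, a quick smoke test confirms the install and the group change (the hello-world image requires outbound network access):

```bash
# Both should work without sudo once the group membership is active
docker version
docker run --rm hello-world
```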
#### Clone and Configure

```bash
git clone https://github.com/supercheck-io/supercheck.git
cd supercheck/deploy/docker
sudo bash init-secrets.sh
```

Edit .env for optional integrations (SMTP, AI, OAuth).
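The generated .env documents the exact variable names; as a purely illustrative sketch, an SMTP block might look like the following (these keys are placeholders, not confirmed Supercheck settings):

```bash
# Placeholder keys for illustration only; consult the generated .env
# for the variable names Supercheck actually reads.
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=notifications@example.com
SMTP_PASSWORD=change-me
```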
#### Set Up Execution Sandbox

```bash
sudo bash setup-k3s.sh
```

This installs K3s with gVisor for sandboxed Playwright and K6 execution.
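To verify the sandbox is ready (assuming the script registers gVisor as a Kubernetes RuntimeClass, which is how gVisor is normally wired into K3s):

```bash
# The node should report Ready, and a gVisor (runsc) runtime class
# should be listed.
sudo k3s kubectl get nodes
sudo k3s kubectl get runtimeclass
```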
#### Deploy

```bash
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig docker compose up -d
```

#### Access
Open http://localhost:3000 and create your account.
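A quick reachability check before opening the browser:

```bash
# Expect 200, or a 3xx redirect to the sign-in page
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000
```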
```bash
# Optional: grant super admin
docker compose exec app npm run setup:admin your-email@example.com
```

### Production (HTTPS)
#### Install Docker

Install Docker exactly as in the Quick Start steps above; the same Linux-only requirement (Ubuntu 22.04+, Debian 12+, no macOS/Windows/WSL2) applies.

#### Configure DNS
| Type | Name | Value |
|---|---|---|
| A | app | Your Server IP |
| A | * | Your Server IP |
The wildcard (*) record enables default status page URLs like abc123.example.com.
The same wildcard usually also covers the derived custom-domain target (for example, cname.example.com). If the target shown in Settings is outside that wildcard or on another zone, add an A/AAAA record for that target as well.
Customer-facing custom domains should stay outside the STATUS_PAGE_DOMAIN namespace. For example, you might reserve example.com for default UUID URLs and serve a status page at status.example.net by pointing its CNAME to cname.example.com.
Cloudflare Users: Set SSL/TLS mode to "Full" or "Full (Strict)".
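Before deploying, confirm both records resolve from outside your network (any label exercises the wildcard):

```bash
# Both should print your server IP
dig +short app.example.com
dig +short abc123.example.com   # arbitrary label; tests the * record
```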
#### Clone and Configure

```bash
git clone https://github.com/supercheck-io/supercheck.git
cd supercheck/deploy/docker
sudo bash init-secrets.sh
```

Edit .env:
```bash
APP_DOMAIN=app.example.com
ACME_EMAIL=admin@example.com
STATUS_PAGE_DOMAIN=example.com
```

STATUS_PAGE_DOMAIN reserves the default status page namespace ([uuid].STATUS_PAGE_DOMAIN). In the HTTPS Compose examples, Supercheck derives the custom-domain target from it automatically, usually as cname.STATUS_PAGE_DOMAIN.
Set STATUS_PAGE_DOMAIN to a publicly reachable hostname. Keep customer-facing custom domains outside that reserved namespace, and keep those hostnames as CNAME records only. The target shown in Settings must already point to your app, usually via the wildcard * record above or a dedicated A/AAAA record for that target.
Local development still stays on http://localhost:3000/status/[subdomain]. STATUS_PAGE_DOMAIN is only used for public/default status-page hostnames once you access the app through that public hostname.
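To make the namespace concrete, here is how the hostnames from the example values above fit together, plus a resolution check (a sketch; the cname prefix follows the derivation rule described above):

```bash
# With STATUS_PAGE_DOMAIN=example.com:
#   default status pages : https://<uuid>.example.com       (wildcard A record)
#   custom-domain target : cname.example.com                (customers CNAME here)
#   customer status page : status.example.net CNAME cname.example.com
dig +short cname.example.com   # should resolve to your server IP via the * record
```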
#### Set Up Execution Sandbox

```bash
sudo bash setup-k3s.sh
```

This installs K3s with gVisor for sandboxed Playwright and K6 execution.
#### Deploy

```bash
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig docker compose -f docker-compose-secure.yml up -d
```

#### Access
Open https://app.example.com and create your account.
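Certificate issuance happens on first request and can take a minute or two; this sketch checks that the ACME-issued certificate is being served:

```bash
# Look for the certificate subject/issuer lines and the HTTP status
curl -svI https://app.example.com 2>&1 | grep -Ei 'subject:|issuer:|HTTP/'
```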
```bash
# Optional: grant super admin
docker compose -f docker-compose-secure.yml exec app npm run setup:admin your-email@example.com
```

#### Optional Configuration

Further integrations (SMTP, AI, OAuth) are configured through .env, as noted in Clone and Configure.
## Operations

### Scaling
WORKER_LOCATION=local (default) processes all queues on a single server. The UI auto-hides location selectors when only one location exists. Expand later via Multi-Location Workers.
```bash
# Quick Start (HTTP):
WORKER_REPLICAS=2 RUNNING_CAPACITY=2 QUEUED_CAPACITY=20 \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose up -d

# Production (HTTPS):
WORKER_REPLICAS=2 RUNNING_CAPACITY=2 QUEUED_CAPACITY=20 \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose -f docker-compose-secure.yml up -d
```

| Size | Worker replicas | Running capacity | Server |
|---|---|---|---|
| Small | 1 | 1 | 2 vCPU / 4GB |
| Medium | 2 | 2 | 4 vCPU / 8GB |
| Large | 4 | 4 | 8 vCPU / 16GB |
RUNNING_CAPACITY and QUEUED_CAPACITY are App-side gates set on the app service only — do not set them on worker services. WORKER_REPLICAS controls the number of worker containers. Keep RUNNING_CAPACITY equal to the total number of worker replicas.
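After scaling, confirm the container counts match expectations (service names depend on the compose file; check `docker compose ps --services` for the actual names):

```bash
# Worker containers should match WORKER_REPLICAS
docker compose ps
```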
### Backups
```bash
# Create
docker compose exec postgres pg_dump -U postgres supercheck > backup.sql
# Restore
docker compose exec -T postgres psql -U postgres supercheck < backup.sql
```
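For unattended backups, a minimal cron-able sketch; the paths and 7-day retention are assumptions to adapt:

```bash
#!/usr/bin/env bash
# Nightly Postgres dump with simple retention. A sketch, not an
# official Supercheck script.
set -euo pipefail
BACKUP_DIR=/var/backups/supercheck          # assumed location
cd /path/to/supercheck/deploy/docker        # adjust to your checkout
mkdir -p "$BACKUP_DIR"
docker compose exec -T postgres pg_dump -U postgres supercheck \
  > "$BACKUP_DIR/supercheck-$(date +%F).sql"
# Drop dumps older than 7 days
find "$BACKUP_DIR" -name 'supercheck-*.sql' -mtime +7 -delete
```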
### Updates

```bash
# Quick Start (HTTP):
docker compose pull && \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose up -d

# Production (HTTPS):
docker compose -f docker-compose-secure.yml pull && \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose -f docker-compose-secure.yml up -d
```

Upgrading from pre-1.3.3 releases: Supercheck moved from Docker socket-based test execution to a sandboxed K3s + gVisor model in 1.3.3. Before upgrading an older deployment, you must run the setup script:
```bash
# 1. Back up your database first
docker compose exec postgres pg_dump -U postgres supercheck > backup-pre-k3s-migration.sql

# 2. Install the K3s + gVisor execution sandbox
sudo bash setup-k3s.sh

# 3. Pull new images and restart with the kubeconfig
# Quick Start (HTTP):
docker compose pull && \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose up -d

# Production (HTTPS):
docker compose -f docker-compose-secure.yml pull && \
KUBECONFIG_FILE=/etc/rancher/k3s/supercheck-worker.kubeconfig \
docker compose -f docker-compose-secure.yml up -d
```

The worker container no longer requires the Docker socket. Existing tests and monitors continue to work without modification. If you have remote workers (Multi-Location), run setup-k3s.sh on each remote server as well.
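To confirm the migration took effect, you can check that the worker container no longer bind-mounts the Docker socket (the service name `worker` is an assumption; substitute what `docker compose ps --services` reports):

```bash
# Prints 0 when no docker.sock mount remains on the worker container
docker inspect --format '{{json .Mounts}}' \
  "$(docker compose ps -q worker)" | grep -c docker.sock || true
```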