Alerts
Multi-channel notifications for failures and recoveries
Alerts notify you when monitors fail, tests break, or services recover. Configure multiple notification channels and use smart thresholds to avoid alert fatigue while ensuring you never miss critical issues.
Notification Providers
Supercheck supports five notification channels. You can configure multiple providers and attach different ones to different monitors and jobs.
Email
Professional HTML emails via SMTP with detailed status information and direct links to dashboards
Slack
Rich formatted messages to channels or DMs with status badges, fields, and action links
Webhook
JSON payloads to any HTTP endpoint for custom integrations and automation
Telegram
Bot messages to chats or groups with Markdown formatting
Discord
Embedded messages to server channels with color-coded status
Setting Up Providers
Add a Notification Provider
- Go to Alerts → Notification Channels
- Click Add Provider
- Select the provider type
- Enter the required credentials
- Click Test to verify the connection
- Save the provider

Attach Providers to Monitors/Jobs
After creating providers, attach them to monitors or jobs:
- Edit a monitor or job
- Go to the Alerts section
- Select which providers should receive notifications
- Configure alert types (failure, recovery, etc.)
- Save
Alert Types
Monitor Alerts
| Alert Type | Trigger | Color |
|---|---|---|
| Monitor Failure | Consecutive failures exceed threshold | Red |
| Monitor Recovery | Consecutive successes after being down | Green |
| SSL Expiring | Certificate expires within configured days | Yellow |
Job Alerts
| Alert Type | Trigger | Color |
|---|---|---|
| Job Failure | One or more tests in the job failed | Red |
| Job Success | All tests passed (optional) | Green |
| Job Timeout | Job exceeded maximum execution time | Red |
Threshold Logic
Thresholds prevent alert noise from transient issues like network blips or brief service hiccups.
How Thresholds Work
Failure Threshold: Number of consecutive failures before sending an alert.
Recovery Threshold: Number of consecutive successes before sending a recovery alert.
Example: Failure Threshold = 3, Recovery Threshold = 2
| Check # | Result | Count | Alert Sent? |
|---|---|---|---|
| 1 | ❌ Fail | 1/3 failures | No |
| 2 | ❌ Fail | 2/3 failures | No |
| 3 | ❌ Fail | 3/3 failures | Yes - Failure Alert |
| 4 | ❌ Fail | 4 failures (already alerted) | No |
| 5 | ✅ Pass | 1/2 recovery (failure count reset) | No |
| 6 | ✅ Pass | 2/2 recovery | Yes - Recovery Alert |
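A minimal sketch of this counter behavior, with hypothetical names and not Supercheck's actual implementation:
```typescript
// Illustrative sketch of the consecutive-counter behavior described above.
// Names and structure are hypothetical, not Supercheck's internal code.
interface ThresholdState {
  consecutiveFailures: number;
  consecutiveSuccesses: number;
  alerting: boolean; // true once a failure alert has been sent for this sequence
}

function evaluateCheck(
  state: ThresholdState,
  passed: boolean,
  failureThreshold = 3,
  recoveryThreshold = 2,
): { state: ThresholdState; alert?: "failure" | "recovery" } {
  if (!passed) {
    const failures = state.consecutiveFailures + 1;
    const next = { consecutiveFailures: failures, consecutiveSuccesses: 0, alerting: state.alerting };
    // Alert when the failure threshold is first reached.
    if (!state.alerting && failures >= failureThreshold) {
      return { state: { ...next, alerting: true }, alert: "failure" };
    }
    return { state: next };
  }
  const successes = state.consecutiveSuccesses + 1;
  const next = { consecutiveFailures: 0, consecutiveSuccesses: successes, alerting: state.alerting };
  // Send a recovery alert only after a failure alert was sent and enough
  // consecutive checks have passed.
  if (state.alerting && successes >= recoveryThreshold) {
    return { state: { ...next, alerting: false }, alert: "recovery" };
  }
  return { state: next };
}
```
Running the table above through this function yields a failure alert on check 3 and a recovery alert on check 6.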
Alert Limiting
To prevent notification spam during extended outages:
- Maximum 3 failure alerts per failure sequence
- Maximum 3 recovery alerts per recovery sequence
- Counter resets when status changes
Recommended thresholds:
- Failure: 2-3 for most services (balances speed vs. false positives)
- Recovery: 2 to confirm the service is truly stable
- Use threshold of 1 only for extremely critical services
Provider Configuration
Email (SMTP)
Email alerts use professional HTML templates with your branding.
Required environment variables:
SMTP_HOST=smtp.resend.com
SMTP_PORT=587
SMTP_USER=resend
SMTP_PASSWORD=your-api-key
SMTP_FROM_EMAIL=alerts@yourdomain.com
Email content includes:
- Alert type and severity (color-coded header)
- Monitor/job name and project
- Status, response time, and error details
- Direct link to dashboard
- Timestamp and location
Supported SMTP providers:
- Resend, SendGrid, Mailgun, Amazon SES
- Any SMTP server (Gmail, Office 365, self-hosted)
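If you want to verify the SMTP credentials independently before adding them to Supercheck, one option is a short Node.js script using nodemailer (an assumption here; any SMTP client works):
```typescript
// Optional sanity check of the SMTP credentials before adding them to
// Supercheck. Assumes Node.js with the nodemailer package installed.
import nodemailer from "nodemailer";

const transporter = nodemailer.createTransport({
  host: process.env.SMTP_HOST,
  port: Number(process.env.SMTP_PORT ?? 587),
  auth: {
    user: process.env.SMTP_USER,
    pass: process.env.SMTP_PASSWORD,
  },
});

transporter
  .verify()
  .then(() => console.log("SMTP connection OK"))
  .catch((err) => console.error("SMTP connection failed:", err));
```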
Slack
Send rich formatted messages to Slack channels or direct messages.
Setup:
- Go to Slack API and create an Incoming Webhook
- Select the channel to post to
- Copy the webhook URL (starts with https://hooks.slack.com/)
- Add the URL in Supercheck
Message format:
- Color-coded sidebar (red for failure, green for recovery)
- Title with alert type
- Fields for status, response time, location
- Footer with timestamp
- Link to dashboard
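As a point of reference, a message with these elements could be posted to a Slack Incoming Webhook roughly like this (an illustrative payload in Slack's attachment format, not necessarily what Supercheck sends):
```typescript
// Illustrative payload using Slack's attachment format; the exact shape
// Supercheck sends may differ. The webhook URL below is a placeholder.
const slackWebhookUrl = "https://hooks.slack.com/services/T000/B000/XXXX";

await fetch(slackWebhookUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    attachments: [
      {
        color: "#dc2626", // red sidebar for failures, green for recoveries
        title: "Monitor Down: API Health",
        fields: [
          { title: "Status", value: "Down", short: true },
          { title: "Response Time", value: "5.2s", short: true },
          { title: "Location", value: "us-east", short: true },
        ],
        footer: "sent by supercheck",
        ts: Math.floor(Date.now() / 1000),
      },
    ],
  }),
});
```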
Discord
Send embedded messages to Discord server channels.
Setup:
- In Discord, go to Server Settings → Integrations → Webhooks
- Create a new webhook and select the channel
- Copy the webhook URL
- Add the URL in Supercheck
Message format:
- Embedded message with color-coded border
- Title and description
- Fields for key information
- Timestamp in footer
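An embed with these elements maps onto Discord's webhook API roughly as follows (illustrative content, not necessarily what Supercheck sends):
```typescript
// Illustrative embed following Discord's webhook API; the content Supercheck
// sends may differ. The webhook URL below is a placeholder.
const discordWebhookUrl = "https://discord.com/api/webhooks/0000/placeholder";

await fetch(discordWebhookUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    embeds: [
      {
        title: "Monitor Down: API Health",
        description: "Monitor failed after 3 consecutive failures",
        color: 0xdc2626, // red border; recoveries would use a green value
        fields: [
          { name: "Status", value: "Down", inline: true },
          { name: "Location", value: "us-east", inline: true },
        ],
        footer: { text: "sent by supercheck" },
        timestamp: new Date().toISOString(),
      },
    ],
  }),
});
```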
Telegram
Send messages to Telegram chats or groups via bot.
Setup:
- Create a bot with @BotFather
- Get your bot token
- Get your chat ID (use @userinfobot)
- Add both in Supercheck
Message format:
- Markdown formatted text
- Bold titles for fields
- Emoji indicators for status
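Telegram bot messages go through the Bot API's sendMessage method; an equivalent manual call looks roughly like this (the token, chat ID, and message text are placeholders):
```typescript
// Illustrative call to the Telegram Bot API's sendMessage method. The bot
// token, chat ID, and message text below are placeholders.
const botToken = "0000000000:placeholder-token";
const chatId = "-1000000000000";

await fetch(`https://api.telegram.org/bot${botToken}/sendMessage`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    chat_id: chatId,
    text: "🔴 *Monitor Down: API Health*\n*Status:* Down\n*Location:* us-east",
    parse_mode: "Markdown",
  }),
});
```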
Custom Webhook
Send JSON payloads to any HTTP endpoint for custom integrations.
Configuration:
- URL: Your endpoint (must accept the configured HTTP method)
- Method: POST, PUT, or PATCH
- Headers: Optional custom headers (e.g., authentication)
Payload structure:
{
  "title": "Monitor Down: API Health",
  "message": "Monitor failed after 3 consecutive failures",
  "fields": [
    { "name": "Status", "value": "Down" },
    { "name": "Response Time", "value": "5.2s" },
    { "name": "Location", "value": "us-east" },
    { "name": "Error", "value": "Connection timeout" }
  ],
  "color": "#dc2626",
  "footer": "sent by supercheck",
  "timestamp": 1705312200,
  "originalPayload": {
    "monitorId": "...",
    "type": "monitor_failure",
    "severity": "error"
  },
  "provider": "webhook",
  "version": "1.0"
}
Use cases:
- PagerDuty, Opsgenie, or other incident management
- Custom dashboards and monitoring systems
- Automation workflows (Zapier, n8n, Make)
- Internal alerting systems
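A minimal sketch of a receiving endpoint, assuming Node.js with Express; the route name and handling logic are purely illustrative:
```typescript
// Minimal sketch of an endpoint that accepts the payload above, assuming
// Node.js with Express. Route name and handling logic are illustrative.
import express from "express";

const app = express();
app.use(express.json());

app.post("/supercheck-alerts", (req, res) => {
  const { title, message, fields, originalPayload } = req.body;

  // Example routing decision: treat monitor failures as pageable,
  // log everything else.
  if (originalPayload?.type === "monitor_failure") {
    console.error(`PAGE: ${title} - ${message}`, fields);
  } else {
    console.log(`INFO: ${title} - ${message}`);
  }

  res.status(200).json({ received: true });
});

app.listen(3000, () => console.log("Listening for Supercheck alerts on :3000"));
```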
Alert History
Track all alerts sent from your organization.

History includes:
- Alert type and severity
- Target (monitor or job name)
- Notification channels used
- Delivery status (sent, failed)
- Timestamp
Use alert history to:
- Verify alerts are being delivered
- Debug notification issues
- Review incident timelines
- Audit alert patterns
Alert Samples
Email Alerts
Professional HTML emails with detailed information:
Job Failure Email:

Monitor Alert Email:

Slack Alerts
Rich formatted messages with status badges:

Discord Alerts
Embedded messages with color-coded status:

Best Practices
Channel Strategy
- Use multiple channels for critical services (Slack + Email)
- Separate channels by severity — Critical to PagerDuty, warnings to Slack
- Team-specific channels — Route alerts to relevant teams
Threshold Configuration
- Start with threshold 2-3 and adjust based on false positive rate
- Use higher thresholds for flaky services or high-latency endpoints
- Use threshold 1 only for services where any failure is critical
Alert Hygiene
- Test alerts before going live — Use the test button when configuring
- Review alert history regularly — Look for patterns and noise
- Update thresholds as services stabilize or change
- Document escalation paths — Who responds to which alerts
Avoiding Alert Fatigue
- Don't alert on everything — Focus on actionable issues
- Use recovery alerts sparingly — Only for services where recovery confirmation matters
- Group related monitors — One alert for a service, not one per endpoint
- Set appropriate check intervals — More frequent checks = more potential alerts