
Alerts

Multi-channel notifications for failures and recoveries

Alerts notify you when monitors fail, tests break, or services recover. Configure multiple notification channels and use smart thresholds to avoid alert fatigue while ensuring you never miss critical issues.

Notification Providers

Supercheck supports five notification channels. You can configure multiple providers and attach different ones to different monitors and jobs.

Email

Professional HTML emails via SMTP with detailed status information and direct links to dashboards

Slack

Rich formatted messages to channels or DMs with status badges, fields, and action links

Webhook

JSON payloads to any HTTP endpoint for custom integrations and automation

Telegram

Bot messages to chats or groups with Markdown formatting

Discord

Embedded messages to server channels with color-coded status

Setting Up Providers

Add a Notification Provider

  1. Go to Alerts → Notification Channels
  2. Click Add Provider
  3. Select the provider type
  4. Enter the required credentials
  5. Click Test to verify the connection
  6. Save the provider

Notification Channels

Attach Providers to Monitors/Jobs

After creating providers, attach them to monitors or jobs:

  1. Edit a monitor or job
  2. Go to the Alerts section
  3. Select which providers should receive notifications
  4. Configure alert types (failure, recovery, etc.)
  5. Save

Alert Types

Monitor Alerts

| Alert Type       | Trigger                                     | Color  |
|------------------|---------------------------------------------|--------|
| Monitor Failure  | Consecutive failures reach the threshold    | Red    |
| Monitor Recovery | Consecutive successes after being down      | Green  |
| SSL Expiring     | Certificate expires within configured days  | Yellow |

Job Alerts

| Alert Type  | Trigger                               | Color |
|-------------|---------------------------------------|-------|
| Job Failure | One or more tests in the job failed   | Red   |
| Job Success | All tests passed (optional)           | Green |
| Job Timeout | Job exceeded maximum execution time   | Red   |

Threshold Logic

Thresholds prevent alert noise from transient issues like network blips or brief service hiccups.

How Thresholds Work

Failure Threshold: Number of consecutive failures before sending an alert.

Recovery Threshold: Number of consecutive successes before sending a recovery alert.

Example: Failure Threshold = 3, Recovery Threshold = 2

| Check # | Result  | Count               | Alert Sent?          |
|---------|---------|---------------------|----------------------|
| 1       | ❌ Fail | 1/3 failures        | No                   |
| 2       | ❌ Fail | 2/3 failures        | No                   |
| 3       | ❌ Fail | 3/3 failures        | Yes - Failure Alert  |
| 4       | ❌ Fail | 4 (already alerted) | No (alert limiting)  |
| 5       | ✅ Pass | 1/2 recoveries      | No                   |
| 6       | ✅ Pass | 2/2 recoveries      | Yes - Recovery Alert |

Alert Limiting

To prevent notification spam during extended outages:

  • Maximum 3 failure alerts per failure sequence
  • Maximum 3 recovery alerts per recovery sequence
  • Counter resets when status changes

Recommended thresholds:

  • Failure: 2-3 for most services (balances speed vs. false positives)
  • Recovery: 2 to confirm the service is truly stable
  • Use threshold of 1 only for extremely critical services
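
The threshold and alert-limiting rules above can be sketched as a small state machine. This is an illustrative Python sketch, not Supercheck's actual implementation; in particular, re-alerting at each further multiple of the failure threshold (capped per sequence) is one plausible reading of the limits described.

```python
class AlertTracker:
    """Illustrative tracker for the documented rules: a failure alert fires
    once the failure threshold is reached, a recovery alert fires once the
    recovery threshold is reached after a confirmed outage, and at most
    3 alerts are sent per sequence."""

    MAX_ALERTS_PER_SEQUENCE = 3

    def __init__(self, failure_threshold=3, recovery_threshold=2):
        self.failure_threshold = failure_threshold
        self.recovery_threshold = recovery_threshold
        self.failures = 0        # consecutive failures
        self.recoveries = 0      # consecutive successes after an outage
        self.failure_alerts = 0  # alerts sent in the current failure sequence
        self.is_down = False     # outage confirmed (threshold was reached)

    def record(self, passed: bool):
        """Record one check result; return 'failure', 'recovery', or None."""
        if not passed:
            self.failures += 1
            self.recoveries = 0
            if self.failures >= self.failure_threshold:
                self.is_down = True
                # In this sketch, alert at the threshold and at each further
                # multiple of it, capped per sequence.
                if (self.failures % self.failure_threshold == 0
                        and self.failure_alerts < self.MAX_ALERTS_PER_SEQUENCE):
                    self.failure_alerts += 1
                    return "failure"
            return None
        # Success: failure counters reset; recovery only counts after an outage.
        self.failures = 0
        self.failure_alerts = 0
        if self.is_down:
            self.recoveries += 1
            if self.recoveries >= self.recovery_threshold:
                self.is_down = False
                self.recoveries = 0
                return "recovery"
        return None


# Replaying the example table: 4 failures, then 2 passes.
tracker = AlertTracker(failure_threshold=3, recovery_threshold=2)
results = [tracker.record(p) for p in [False, False, False, False, True, True]]
# results == [None, None, 'failure', None, None, 'recovery']
```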

Provider Configuration

Email (SMTP)

Email alerts use professional HTML templates with your branding.

Required environment variables:

SMTP_HOST=smtp.resend.com
SMTP_PORT=587
SMTP_USER=resend
SMTP_PASSWORD=your-api-key
SMTP_FROM_EMAIL=alerts@yourdomain.com

Email content includes:

  • Alert type and severity (color-coded header)
  • Monitor/job name and project
  • Status, response time, and error details
  • Direct link to dashboard
  • Timestamp and location

Supported SMTP providers:

  • Resend, SendGrid, Mailgun, Amazon SES
  • Any SMTP server (Gmail, Office 365, self-hosted)
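
As a rough illustration, an alert email using the environment variables above could be built and sent with Python's standard library. This is a hedged sketch, not Supercheck's actual template; the function names, recipient, and HTML layout are placeholders.

```python
import os
import smtplib
from email.message import EmailMessage

def build_alert_email(to_addr: str, title: str, fields: dict) -> EmailMessage:
    """Build a minimal HTML alert email (illustrative layout only)."""
    msg = EmailMessage()
    msg["Subject"] = title
    msg["From"] = os.environ.get("SMTP_FROM_EMAIL", "alerts@yourdomain.com")
    msg["To"] = to_addr
    # Plain-text fallback plus an HTML alternative with a field table.
    msg.set_content("\n".join(f"{k}: {v}" for k, v in fields.items()))
    rows = "".join(f"<tr><td><b>{k}</b></td><td>{v}</td></tr>"
                   for k, v in fields.items())
    msg.add_alternative(f"<h2>{title}</h2><table>{rows}</table>", subtype="html")
    return msg

def send_alert_email(msg: EmailMessage) -> None:
    """Send via the SMTP_* settings shown above (port 587 uses STARTTLS)."""
    with smtplib.SMTP(os.environ["SMTP_HOST"],
                      int(os.environ.get("SMTP_PORT", 587))) as smtp:
        smtp.starttls()
        smtp.login(os.environ["SMTP_USER"], os.environ["SMTP_PASSWORD"])
        smtp.send_message(msg)

msg = build_alert_email(
    "ops@example.com",  # placeholder recipient
    "Monitor Down: API Health",
    {"Status": "Down", "Response Time": "5.2s", "Error": "Connection timeout"},
)
```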

Slack

Send rich formatted messages to Slack channels or direct messages.

Setup:

  1. Go to Slack API and create an Incoming Webhook
  2. Select the channel to post to
  3. Copy the webhook URL (starts with https://hooks.slack.com/)
  4. Add the URL in Supercheck

Message format:

  • Color-coded sidebar (red for failure, green for recovery)
  • Title with alert type
  • Fields for status, response time, location
  • Footer with timestamp
  • Link to dashboard
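
A message in the format above can be posted to the webhook URL from step 3. The sketch below uses Slack's legacy attachment schema (which provides the color sidebar, fields, and footer timestamp); the dashboard URL and timestamp are placeholders, and this is not necessarily the exact payload Supercheck sends.

```python
import json
import urllib.request

def build_slack_payload(title, fields, color="#dc2626",
                        footer="sent by supercheck",
                        dashboard_url="https://app.example.com/monitors/123",
                        ts=1705312200):
    """Build a Slack incoming-webhook payload (legacy attachment schema)."""
    return {
        "attachments": [{
            "color": color,               # sidebar: red failure / green recovery
            "title": title,
            "title_link": dashboard_url,  # link to dashboard (placeholder URL)
            "fields": [{"title": k, "value": v, "short": True}
                       for k, v in fields.items()],
            "footer": footer,
            "ts": ts,                     # footer timestamp (Unix seconds)
        }]
    }

def send_slack_alert(webhook_url: str, payload: dict) -> None:
    """POST the payload as JSON to the https://hooks.slack.com/... URL."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

payload = build_slack_payload(
    "Monitor Down: API Health",
    {"Status": "Down", "Response Time": "5.2s", "Location": "us-east"},
)
```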

Discord

Send embedded messages to Discord server channels.

Setup:

  1. In Discord, go to Server Settings → Integrations → Webhooks
  2. Create a new webhook and select the channel
  3. Copy the webhook URL
  4. Add the URL in Supercheck

Message format:

  • Embedded message with color-coded border
  • Title and description
  • Fields for key information
  • Timestamp in footer
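
A Discord payload matching this format can be sketched as below and POSTed as JSON to the webhook URL from step 3. Note that Discord's API expects the embed color as a decimal integer rather than a hex string; the timestamp and field names here are placeholders, not Supercheck's exact output.

```python
def build_discord_payload(title, description, fields, color=0xDC2626,
                          footer="sent by supercheck"):
    """Build a Discord webhook payload with a single color-coded embed."""
    return {
        "embeds": [{
            "title": title,
            "description": description,
            "color": color,  # integer color for the embed border (0xDC2626 = red)
            "fields": [{"name": k, "value": v, "inline": True}
                       for k, v in fields.items()],
            "footer": {"text": footer},
            "timestamp": "2024-01-15T10:30:00Z",  # ISO 8601; placeholder value
        }]
    }

payload = build_discord_payload(
    "Monitor Down: API Health",
    "Monitor failed after 3 consecutive failures",
    {"Status": "Down", "Location": "us-east"},
)
```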

Telegram

Send messages to Telegram chats or groups via bot.

Setup:

  1. Create a bot with @BotFather
  2. Get your bot token
  3. Get your chat ID (use @userinfobot)
  4. Add both in Supercheck

Message format:

  • Markdown formatted text
  • Bold titles for fields
  • Emoji indicators for status
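
Rendering and sending such a message can be sketched with the Bot API's sendMessage method. The message layout below is illustrative; only the API endpoint and the chat_id/text/parse_mode parameters are standard Telegram Bot API usage.

```python
import urllib.parse
import urllib.request

def build_telegram_text(title, fields, emoji="🔴"):
    """Render a Markdown-formatted alert body with bold field titles."""
    lines = [f"{emoji} *{title}*", ""]
    lines += [f"*{k}:* {v}" for k, v in fields.items()]
    return "\n".join(lines)

def send_telegram_alert(bot_token, chat_id, text):
    """Send via the Bot API's sendMessage method."""
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    data = urllib.parse.urlencode(
        {"chat_id": chat_id, "text": text, "parse_mode": "Markdown"}
    ).encode()
    urllib.request.urlopen(urllib.request.Request(url, data=data))

text = build_telegram_text(
    "Monitor Down: API Health",
    {"Status": "Down", "Error": "Connection timeout"},
)
```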

Custom Webhook

Send JSON payloads to any HTTP endpoint for custom integrations.

Configuration:

  • URL: Your endpoint (must accept POST requests)
  • Method: POST, PUT, or PATCH
  • Headers: Optional custom headers (e.g., authentication)

Payload structure:

{
  "title": "Monitor Down: API Health",
  "message": "Monitor failed after 3 consecutive failures",
  "fields": [
    { "name": "Status", "value": "Down" },
    { "name": "Response Time", "value": "5.2s" },
    { "name": "Location", "value": "us-east" },
    { "name": "Error", "value": "Connection timeout" }
  ],
  "color": "#dc2626",
  "footer": "sent by supercheck",
  "timestamp": 1705312200,
  "originalPayload": {
    "monitorId": "...",
    "type": "monitor_failure",
    "severity": "error"
  },
  "provider": "webhook",
  "version": "1.0"
}

Use cases:

  • PagerDuty, Opsgenie, or other incident management
  • Custom dashboards and monitoring systems
  • Automation workflows (Zapier, n8n, Make)
  • Internal alerting systems
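
A receiving endpoint only needs to accept the POSTed JSON and act on it. The sketch below is a minimal stdlib receiver for the payload structure shown above; the `summarize_alert` helper and port are hypothetical, not part of any Supercheck API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_alert(payload: dict) -> str:
    """Flatten the webhook payload above into a one-line summary."""
    fields = ", ".join(f"{f['name']}={f['value']}"
                       for f in payload.get("fields", []))
    severity = payload["originalPayload"]["severity"]
    return f"[{severity}] {payload['title']} ({fields})"

class AlertHandler(BaseHTTPRequestHandler):
    """Minimal endpoint that accepts the POSTed alert JSON."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        print(summarize_alert(json.loads(body)))  # hand off to your own logic here
        self.send_response(200)
        self.end_headers()

# To run the receiver (blocking):
# HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```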

Alert History

Track all alerts sent from your organization.

History includes:

  • Alert type and severity
  • Target (monitor or job name)
  • Notification channels used
  • Delivery status (sent, failed)
  • Timestamp

Use alert history to:

  • Verify alerts are being delivered
  • Debug notification issues
  • Review incident timelines
  • Audit alert patterns

Alert Samples

Email Alerts

Professional HTML emails with detailed information for both job failures and monitor alerts.

Slack Alerts

Rich formatted messages with status badges.

Discord Alerts

Embedded messages with color-coded status.

Best Practices

Channel Strategy

  • Use multiple channels for critical services (Slack + Email)
  • Separate channels by severity — Critical to PagerDuty, warnings to Slack
  • Team-specific channels — Route alerts to relevant teams

Threshold Configuration

  • Start with threshold 2-3 and adjust based on false positive rate
  • Use higher thresholds for flaky services or high-latency endpoints
  • Use threshold 1 only for services where any failure is critical

Alert Hygiene

  • Test alerts before going live — Use the test button when configuring
  • Review alert history regularly — Look for patterns and noise
  • Update thresholds as services stabilize or change
  • Document escalation paths — Who responds to which alerts

Avoiding Alert Fatigue

  • Don't alert on everything — Focus on actionable issues
  • Use recovery alerts sparingly — Only for services where recovery confirmation matters
  • Group related monitors — One alert for a service, not one per endpoint
  • Set appropriate check intervals — More frequent checks = more potential alerts