Quick Start
Get up and running with WebWatchr in minutes.
- Create an account at webwatchr.io/register
- Add a URL to monitor from your dashboard
- Choose monitoring mode: Visual (layout/images) or Text (content/prices)
- Set check frequency and you're ready to go
Authentication
WebWatchr supports multiple authentication methods:
Email & Password
Traditional registration with email verification. Passwords must be at least 12 characters with uppercase, lowercase, numbers, and symbols.
Social Login (OAuth)
Sign in quickly with existing accounts:
- Google - Sign in with your Google account
- Apple - Sign in with Apple ID
- GitHub - Sign in with GitHub
Managing Linked Accounts
- Go to Profile → Linked Accounts
- Click "Link" to connect a new provider
- Use "Unlink" to remove (requires at least one auth method)
Note: If you signed up with OAuth only, use "Forgot Password" to set a password.
Monitoring Modes
Visual Mode
Compares screenshots to detect visual changes. Best for:
- Design and layout changes
- Ad placement monitoring
- Image updates
- Styling modifications
Text Mode
Compares text content to detect wording changes. Best for:
- Price monitoring
- News and article updates
- Product availability
- Content changes
AI Mode
AI Mode uses a 3-gate relevance pipeline on top of change detection.
How It Works
Gate 1 performs a deterministic text diff and exits early on no-change. Gate 2 applies intent-keyword filtering with high-signal and large-diff bypasses. Gate 3 uses an LLM to judge relevance and must provide evidence quotes.
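As a rough sketch, the three gates can be read as a short-circuiting pipeline. The function names and the large-diff cutoff below are illustrative assumptions, not WebWatchr's actual internals:

```javascript
// Illustrative sketch of the 3-gate pipeline (not WebWatchr's real code).
// Gate 1: deterministic diff. Gate 2: intent-keyword filter with a
// large-diff bypass. Gate 3: LLM relevance judge (stubbed here).

function gate1Diff(oldText, newText) {
  if (oldText === newText) return null; // early exit on no-change
  return { sizeDelta: Math.abs(newText.length - oldText.length) };
}

function gate2Keywords(diff, newText, keywords, { largeDiffBypass = 500 } = {}) {
  if (diff.sizeDelta >= largeDiffBypass) return true; // large-diff bypass
  return keywords.some(k => newText.toLowerCase().includes(k.toLowerCase()));
}

function gate3Llm(newText, intent) {
  // Placeholder for the LLM call; the real gate must return evidence quotes.
  return { relevant: true, evidence: [newText.slice(0, 80)] };
}

function evaluate(oldText, newText, intent, keywords) {
  const diff = gate1Diff(oldText, newText);
  if (!diff) return { notify: false, reason: 'no change' };      // Gate 1
  if (!gate2Keywords(diff, newText, keywords)) {
    return { notify: false, reason: 'no intent keywords' };      // Gate 2
  }
  const verdict = gate3Llm(newText, intent);                     // Gate 3
  return { notify: verdict.relevant, evidence: verdict.evidence };
}
```

In this sketch, `evaluate('Price: $60', 'Price: $45', 'Alert when price drops below $50', ['price'])` passes all three gates, while an unchanged page exits at Gate 1.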
Intent Matching
Your intent is converted into keywords and a per-page noise profile at job creation (7-day TTL). These are used to reduce noise before LLM evaluation.
- Good: "Alert when price drops below $50"
- Good: "Alert when tickets become available"
- Good: "Alert when new Senior Engineer roles appear"
- Avoid: "Alert on changes"
Evidence and Dedup
AI notifications include quoted evidence from the detected diff. Duplicate evidence is hashed and suppressed to reduce repeat alerts.
Model Selection
Choose the AI model based on your needs:
- gpt-4o-mini (default) - Fast and cost effective, ideal for most use cases
- gpt-4o - Higher accuracy for complex reasoning
- gpt-4-turbo - Best for nuanced analysis
Credit Usage
AI Mode consumes credits per check:
- gpt-4o-mini: 1 credit per check
- gpt-4o: 5 credits per check
- gpt-4-turbo: 10 credits per check
AI Mode requires Pro or Premium tier.
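To estimate daily spend, multiply checks per day by the model's per-check cost from the table above:

```javascript
// Credits per check, from the table above.
const CREDITS = { 'gpt-4o-mini': 1, 'gpt-4o': 5, 'gpt-4-turbo': 10 };

// Daily credit cost for a continuous job checked every `intervalMinutes`.
function dailyCredits(model, intervalMinutes) {
  const checksPerDay = (24 * 60) / intervalMinutes;
  return checksPerDay * CREDITS[model];
}
```

For example, gpt-4o-mini at a 30-minute interval costs 48 credits per day (48 checks × 1 credit), while gpt-4o at a 60-minute interval costs 120 credits per day.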
Best Practices
- Use element selection to focus on relevant page sections
- Start with gpt-4o-mini, upgrade if needed
- Use longer intervals (30-60 min) to conserve credits
- Test your intent with the preview feature before enabling
Job Settings
Schedule Type
Choose how your job runs:
- Continuous - Checks at a fixed interval (5 min to 1 week)
- Specific Hours - Only runs during selected days and hours with timezone support
Use the schedule picker to select business hours, weekends, or custom time windows. You can edit the schedule type and configuration from the job edit panel.
Shorter intervals require a paid plan.
Threshold
- Any Change - Triggers on any pixel change
- Small Change - Very sensitive to minor differences
- Medium Change - Moderate sensitivity (recommended)
- Large Change - Only significant changes
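If we assume the named levels map onto the numeric thresholds used by the import format (0, 0.05, 0.1, 0.2 — an assumption, since the mapping is not stated explicitly), the trigger decision reduces to a comparison against the changed-pixel fraction:

```javascript
// Assumed mapping of named levels to the import format's numeric thresholds.
const THRESHOLDS = { any: 0, small: 0.05, medium: 0.1, large: 0.2 };

// Trigger when the fraction of changed pixels exceeds the threshold.
function triggers(changedFraction, level) {
  return changedFraction > THRESHOLDS[level];
}
```

Under this sketch, a 5.2% pixel change triggers at the Small level (0.052 > 0.05) but not at Medium (0.052 < 0.1).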
Wait Time
Delay before capturing: 0, 3, or 6 seconds. Use longer times for slow-loading pages.
Disable JavaScript
Load pages without JS execution. Useful for static content or faster loading.
Actions
Actions execute before capturing the page. Maximum 12 per job.
Click
Click elements on the page
Type
Enter text into input fields
Delete Element
Remove elements from the page
Wait
Pause between actions
Cookie
Set browser cookies
Local Storage
Set localStorage values
Reload
Refresh the page
XPath Selectors
XPath selectors target specific elements on web pages.
Getting XPath from Browser
- Right-click element → Inspect
- In DevTools, right-click HTML → Copy → Copy XPath
Visual Selector
- Click the visual selector button when creating an action
- Clickable elements will be highlighted
- Click the element you want to target
- XPath is automatically generated
Testing XPath
Open browser console (F12) and run:
$x("//your-xpath-here")
If it returns the element, your XPath works.
Common Patterns
- //button[@id='submit'] - By ID
- //a[contains(text(), 'Login')] - By text
- //input[@type='email'] - By attribute
- //*[contains(@class, 'btn')] - By partial class
- //div[@data-testid='header'] - By data attribute
Tips
- Use stable attributes like id or data-*
- Avoid dynamic IDs that change on each load
- Test in browser console before using
Import Jobs
Create jobs in bulk from a JSON file or pasted JSON.
How to Import
- Go to Dashboard → Import Jobs
- Upload a JSON file or paste JSON
- Review the preview
- Click Import
JSON Format
{
  "jobs": [
    {
      "jobURL": "https://example.com",
      "monitoringMode": "visual",
      "actions": {
        "interval": 30,
        "threshold": 0.1,
        "waitFor": 3,
        "disableJS": false,
        "actions": []
      }
    }
  ]
}
With Actions
{
  "jobs": [
    {
      "jobURL": "https://example.com/page",
      "monitoringMode": "text",
      "actions": {
        "interval": 60,
        "threshold": 0,
        "waitFor": 3,
        "actions": [
          { "type": "click", "xpath": "//button[@id='load']" },
          { "type": "wait", "duration": 3 },
          { "type": "delete", "xpath": "//div[@class='popup']" }
        ]
      }
    }
  ]
}
Field Reference
Required fields:
- jobURL - URL to monitor (http/https)
- monitoringMode - "visual", "text", or "ai"
- actions.interval - Check frequency in minutes: 5, 10, 15, 30, 60, 120, 180, 360, 720, 1440, 10080
- actions.threshold - Change sensitivity: 0, 0.05, 0.1, 0.2
- actions.waitFor - Wait before capture: 0, 3, or 6 seconds
Optional fields:
- actions.disableJS - Disable JavaScript (boolean)
- actions.actions - Pre-capture actions array
- aiIntent - Required for AI mode: what to watch for (max 200 chars)
- aiModel - AI mode model: "gpt-4o-mini" (default), "gpt-4o", "gpt-4-turbo"
- entirePage - Text mode only: monitor full page (boolean)
- textselector / elementselector - XPath for element preselection
- textFilterContains - Text mode: only detect changes containing this text
- textFilterExcludes - Text mode: ignore changes containing this text
- textSimilarityThreshold - Text mode: similarity threshold 0.85-0.99 (default 0.95)
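As an illustration of how textSimilarityThreshold could gate text-mode notifications, here is a sketch using Jaccard word overlap as a stand-in similarity measure. The service's actual metric is not documented, so treat this only as intuition for how the threshold behaves:

```javascript
// Stand-in similarity: Jaccard overlap of word sets (illustrative only).
function similarity(a, b) {
  const wa = new Set(a.toLowerCase().split(/\s+/));
  const wb = new Set(b.toLowerCase().split(/\s+/));
  const inter = [...wa].filter(w => wb.has(w)).length;
  const union = new Set([...wa, ...wb]).size;
  return union === 0 ? 1 : inter / union;
}

// Notify only when the texts are LESS similar than the configured threshold.
function isChange(oldText, newText, threshold = 0.95) {
  return similarity(oldText, newText) < threshold;
}
```

A higher threshold therefore means more sensitivity: more near-identical captures count as changes.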
AI Mode Example
Basic AI monitoring with intent:
{
  "jobs": [
    {
      "jobURL": "https://example.com/schedule",
      "monitoringMode": "ai",
      "aiIntent": "Alert when John OR Mary works evening shift",
      "actions": {
        "interval": 60,
        "threshold": 0,
        "waitFor": 3
      }
    }
  ]
}
With model selection (Pro/Premium):
{
  "jobs": [
    {
      "jobURL": "https://example.com/pricing",
      "monitoringMode": "ai",
      "aiIntent": "Alert when price drops below $100",
      "aiModel": "gpt-4o",
      "actions": {
        "interval": 30,
        "threshold": 0,
        "waitFor": 3
      }
    }
  ]
}
Text Mode with Filters
{
  "jobs": [
    {
      "jobURL": "https://status.example.com",
      "monitoringMode": "text",
      "entirePage": true,
      "textFilterContains": "operational",
      "textSimilarityThreshold": 0.90,
      "actions": {
        "interval": 5,
        "threshold": 0,
        "waitFor": 0
      }
    }
  ]
}
Limits
- Max file size: 1MB
- Max jobs per import: 100
- Max actions per job: 12
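A quick client-side check against these limits before uploading might look like the following sketch (field names follow the import format above; this is not an official validator):

```javascript
// Sketch: validate an import payload against the documented limits.
function validateImport(payload, fileSizeBytes) {
  const errors = [];
  if (fileSizeBytes > 1024 * 1024) errors.push('file exceeds 1MB');
  if (!Array.isArray(payload.jobs)) {
    errors.push('missing jobs array');
  } else {
    if (payload.jobs.length > 100) errors.push('more than 100 jobs');
    for (const [i, job] of payload.jobs.entries()) {
      // jobURL must be http/https
      if (!job.jobURL || !/^https?:\/\//.test(job.jobURL)) {
        errors.push(`job ${i}: invalid jobURL`);
      }
      // at most 12 pre-capture actions per job
      const actions = job.actions && job.actions.actions;
      if (Array.isArray(actions) && actions.length > 12) {
        errors.push(`job ${i}: more than 12 actions`);
      }
    }
  }
  return errors;
}
```

An empty array means the payload passes these three limits; the server may enforce further rules.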
AI Job Tester
Test AI monitoring mode against live URLs without creating jobs.
What It Does
The AI Job Tester lets you validate AI monitoring intent before creating actual jobs. Enter a URL and AI intent to see an immediate verdict with confidence scores, evidence, and AI reasoning.
How to Use
- Navigate to Admin Dashboard → AI Tester
- Enter the URL to test
- Enter your AI intent (e.g., "Notify when price drops below $50")
- Click Run Test
- Review the verdict: WOULD NOTIFY or WOULD NOT NOTIFY
- Optional: Receive results via email
Understanding Results
- Verdict - Whether the AI would send a notification
- Confidence Score - AI confidence in the decision
- Evidence - Quoted text from the page supporting the verdict
- Reasoning - AI explanation of how it reached the verdict
Benefits
- Validate AI intent before creating jobs
- Understand how AI interprets your intent
- Test different intents against the same URL
- Avoid wasting credits on misconfigured jobs
Access
AI Job Tester is available in the Admin Dashboard. Requires admin privileges.
Developer Tools
Advanced tools: webhooks, cron monitoring, and uptime checks.
Webhooks
Receive HTTP notifications when changes are detected.
- Configure webhook URLs in job settings
- HMAC signature verification for security
- View delivery history and test webhooks
{
  "event": "job.change_detected",
  "job_id": "abc123",
  "timestamp": "2025-01-24T12:00:00Z",
  "change": {
    "type": "visual",
    "difference_percentage": 5.2
  }
}
Verify the X-Webhook-Signature header:
const crypto = require('crypto');
const signature = req.headers['x-webhook-signature'];
const expected = crypto
  .createHmac('sha256', webhookSecret)
  .update(JSON.stringify(req.body))
  .digest('hex');
// Compare in constant time to avoid timing attacks
if (signature && signature.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expected))) {
  // Valid request
}
Cron Monitoring
Monitor scheduled tasks with ping-based heartbeats.
- Create monitors in Dashboard → Developer → Cron Jobs
- Send HTTP pings from your cron jobs
- Get alerts when jobs miss their schedule
- View analytics: success rate, duration percentiles, uptime
# Simple ping (job completed)
curl https://webwatchr.io/jobs/ping/{ping_key}
# Start ping (job started)
curl https://webwatchr.io/jobs/ping/{ping_key}/start
# Complete ping (job completed successfully)
curl https://webwatchr.io/jobs/ping/{ping_key}/complete
# Fail ping (job failed)
curl https://webwatchr.io/jobs/ping/{ping_key}/fail
# With duration and exit code
curl "https://webwatchr.io/jobs/ping/{ping_key}/complete?duration=1500&exitCode=0"
Uptime Monitoring
Monitor website availability with HTTP checks and custom assertions.
- HTTP/HTTPS endpoint monitoring
- Configurable check intervals (1-60 minutes)
- Response time tracking and SLA reports
- Automatic incident detection and resolution
Assertions:
- status_code - Validate specific HTTP status codes
- latency_lt_ms - Response must be under X milliseconds
- json_path - Validate JSON response values
- header_present - Check for required response headers
- body_contains - Response must include specific text
- body_not_contains - Response must exclude specific text
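A minimal evaluator for a subset of these assertion types could look like the sketch below. It is not the service's implementation, and its json_path handling covers only simple `$.field` paths:

```javascript
// Sketch: evaluate uptime assertions against a captured response.
// response: { status, latencyMs, headers, body }
function checkAssertion(a, response) {
  switch (a.type) {
    case 'status_code':    return response.status === a.value;
    case 'latency_lt_ms':  return response.latencyMs < a.value;
    case 'header_present': return a.value.toLowerCase() in response.headers;
    case 'body_contains':  return response.body.includes(a.value);
    case 'json_path': {    // simple $.field lookup only
      const field = a.path.replace(/^\$\./, '');
      return JSON.parse(response.body)[field] === a.value;
    }
    default: return false;
  }
}
```

A check passes when every assertion in the monitor's list returns true.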
{
  "url": "https://api.example.com/health",
  "assertions": [
    { "type": "status_code", "value": 200 },
    { "type": "latency_lt_ms", "value": 500 },
    { "type": "json_path", "path": "$.db", "value": "ok" },
    { "type": "body_not_contains", "value": "maintenance" }
  ]
}
Topic Monitoring
Monitor topics across multiple web sources. Get notified when any source has updates relevant to your topic.
- Enter a search topic (e.g., "MacBook M5 release date")
- AI-powered source discovery finds relevant websites
- Select which sources to monitor
- Receive per-source notifications when content changes
- Blacklist sources you do not want to track
How it works:
- Discovery - Uses Tavily API to find relevant sources for your topic
- Monitoring - Periodically checks each source for content changes
- Notifications - Email digest with per-source summaries and excerpts
Credit usage: 1 credit per source per check. If you monitor 5 sources daily, that is 5 credits per day.
Availability: Pro and Premium tiers only.
Source limits by tier:
- Pro: Up to 10 topic monitors, 10-25 sources each
- Premium: Up to 30 topic monitors, 20-50 sources each
Topic Intelligence
Topic Intelligence builds on Topic Monitoring by accumulating knowledge over time, delivering smart briefing emails, and tracking individual findings across your sources.
Knowledge State
Each topic maintains a knowledge state that represents the accumulated understanding of your topic across all monitored sources.
- How it updates - Each check cycle analyzes source changes and merges new information into the existing knowledge state
- Summary - A plain-text overview of everything known about your topic so far
- Confidence level - Rated as Low, Moderate, or High based on source agreement and corroboration
- Contributing sources - Shows how many of your monitored sources have contributed to the current knowledge
The knowledge state is visible on your topic detail page and updates automatically after each check cycle.
Briefing Emails
Instead of generic "X sources changed" notifications, briefing emails provide contextual intelligence about your topic.
- Headline summary - A concise overview of what changed and why it matters
- New findings - Individual discoveries listed with their confidence levels
- Context - How new information relates to previously known facts
Briefing emails are sent only when relevant changes are detected, not on every check cycle. This reduces noise and keeps your inbox focused on meaningful updates.
Findings Timeline
The findings timeline is a chronological record of all facts discovered about your topic.
- Active findings - Currently valid discoveries that contribute to your knowledge state
- Superseded findings - Older findings that have been replaced by newer, more accurate information
- Filtering - Toggle between active findings only or all findings including superseded ones
Each finding displays its content, confidence level (Low, Moderate, or High), the source it came from, and the date it was discovered.
Source Health
Track the quality and reliability of each monitored source.
- Noise score - Sources that frequently change without providing relevant information are flagged as "Noisy" or "Very Noisy"
- Reliability - Sources with repeated connection failures or errors are flagged for your attention
- Relevance tracking - See how many relevant updates each source has contributed over time
Use source health indicators to prune low-quality sources and focus your monitoring budget on sources that consistently deliver useful information.
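One plausible way to derive such a noise label (illustrative only; the actual formula and cutoffs are not published) is the fraction of detected changes that produced no relevant update:

```javascript
// Sketch: flag sources whose changes rarely yield relevant updates.
// The 0.9 and 0.7 cutoffs are hypothetical.
function noiseLabel(totalChanges, relevantUpdates) {
  if (totalChanges === 0) return 'Quiet';
  const noise = 1 - relevantUpdates / totalChanges;
  if (noise >= 0.9) return 'Very Noisy';
  if (noise >= 0.7) return 'Noisy';
  return 'Normal';
}
```

Under these assumed cutoffs, a source with 10 detected changes and only 2 relevant updates would be flagged as Noisy.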
CLI (Config-as-Code)
Manage monitors as code. Perfect for GitOps and CI/CD pipelines.
Quick Start
# Install
npm install -g @webwatchr/cli

# Login (get API key from Dashboard → Developer → API Keys)
webwatchr login
Auto-detect Cron Jobs
Automatically scan your crontab and set up monitoring:
# Create monitors matching your cron job names
cat > webwatchr.yaml << 'EOF'
cron_monitors:
  - name: Daily Backup
    schedule: "0 2 * * *"
    grace_period: 300
EOF
# Sync monitors to WebWatchr
webwatchr sync webwatchr.yaml
# Auto-install ping commands into your crontab
webwatchr cron install --dry-run # Preview
webwatchr cron install           # Apply
YAML Configuration
monitors:
  - name: API Health Check
    url: https://api.example.com/health
    interval: 60
    timeout: 5000
    assertions:
      - type: status_code
        value: 200
      - type: latency_lt_ms
        value: 500
    auth:
      type: bearer
      token_env: API_TOKEN # Uses $API_TOKEN
cron_monitors:
  - name: Daily Backup Job
    schedule: "0 2 * * *"
    grace_period: 300
Commands
# Validate config
webwatchr validate monitors.yaml

# Sync to WebWatchr
webwatchr sync monitors.yaml --dry-run   # Preview
webwatchr sync monitors.yaml             # Apply
webwatchr sync monitors.yaml --prune     # Remove unlisted

# View status
webwatchr status
webwatchr status --type uptime
webwatchr status --type cron
webwatchr status --json
CI/CD Integration
name: Sync Monitors
on:
  push:
    branches: [main]
    paths: ['monitors.yaml']
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm install -g @webwatchr/cli
      - run: webwatchr validate monitors.yaml
      - run: webwatchr sync monitors.yaml
        env:
          WEBWATCHR_API_KEY: ${{ secrets.WEBWATCHR_API_KEY }}
Testing
WebWatchr maintains comprehensive test coverage for reliability and security.
Running Tests
# Service-specific tests
npm test --prefix cli
npm test --prefix shared
npm test --prefix user-billing-service
Test Categories
- E2E Billing Tests - Validates Stripe integration with mocked payment flows
- Security Tests - Credential log verification and shell injection prevention
- Unit Tests - Business logic validation per service
Coverage Targets
- Critical paths - 100% (auth, billing, data access)
- Business logic - 90%+
Troubleshooting
Job Not Running
- Check job is active (not paused)
- Verify URL is accessible
- Check for JavaScript errors if disableJS is false
No Changes Detected
- Lower threshold for higher sensitivity
- Increase wait time for slow-loading pages
- Use visual mode for layout changes
XPath Not Working
- Test in browser console with $x()
- Check for dynamic IDs that change
- Use data-* attributes or text content instead
Actions Failing
- Add Wait action before interactions
- Verify element exists when action runs
- Check XPath targets the correct element
Need Help?
Contact us for support.
