rtrvr.ai gives you two complementary systems for event-driven automation. Browser Triggers watch your open tabs for push notifications and fire workflows when a match arrives — no API required. Cloud Webhooks accept HTTP POSTs from external services and can send results back when workflows complete. Use them separately or combine them for end-to-end pipelines.
| | Browser Triggers | Cloud Webhooks |
|---|---|---|
| Signal source | Push notifications in browser tabs | HTTP POST from external services |
| Requires API from site? | No — any site that sends notifications | Yes — site must support outbound webhooks |
| Runs where? | Local Chrome browser | Cloud or local browser |
| Setup | Point-and-click in extension | Configure HTTP endpoint + auth |
| Best for | Social media, chat apps, SaaS dashboards | Zapier / Make / n8n, CI/CD, server-to-server |
| Always on? | While Chrome is open with monitored tabs | 24/7 via cloud |
Browser Triggers
Most websites don't expose webhooks or APIs for real-time events. Browser Triggers solve this by monitoring your open tabs for web push notifications and automatically executing workflows when a matching notification arrives. Think of them as Zapier-style triggers — powered by your browser instead of an API.
How Browser Triggers Work
The extension injects a lightweight interceptor into monitored tabs that patches the browser's native Notification API and ServiceWorkerRegistration.showNotification. When the page creates a push notification, the interceptor captures its title, body, and metadata — then checks it against your trigger conditions. If it matches, your configured workflow runs automatically. The original notification still displays normally.
- Interceptor patches window.Notification constructor and ServiceWorkerRegistration.showNotification
- Also listens for Service Worker → page messages that carry notification data
- Notification content is checked locally against your filters — nothing leaves your machine until a match fires
- Matching triggers execute the linked workflow in a new tab (same as scheduled workflows)
- Full execution history is stored: trigger name, matched notification, status, credits, results
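As a rough illustration of the patching technique, the sketch below shows how a page-context script can wrap the Notification constructor. It is a simplified example, not the extension's actual source, and checkTriggers stands in for the local matching logic.
// Simplified sketch of notification interception (illustrative only).
// The real interceptor also patches ServiceWorkerRegistration.prototype.showNotification
// and listens for Service Worker → page messages carrying notification data.
const OriginalNotification = window.Notification;

function checkTriggers(title: string, body: string): void {
  // Hypothetical hook: hand the captured title/body to the local trigger matcher.
  // Nothing leaves the machine unless a trigger actually matches.
  console.log('notification seen:', { host: location.hostname, title, body });
}

class PatchedNotification extends OriginalNotification {
  constructor(title: string, options?: NotificationOptions) {
    super(title, options);                      // the original notification still displays
    checkTriggers(title, options?.body ?? '');  // then check it against trigger conditions
  }
}

(window as any).Notification = PatchedNotification;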
Creating a Browser Trigger
- Run a workflow successfully in the side panel chat
- Click the "Trigger" button on the agent's response
- Enter the URL to monitor (e.g., twitter.com, app.slack.com)
- Add optional filters: title contains, body contains, or both
- Set a cooldown period to prevent rapid re-firing
- Save — the trigger is now active and monitoring
Notification Matching
Triggers match on two dimensions: the tab's hostname (must match the configured URL) and optional text filters on the notification title and/or body. All checks are case-insensitive and use simple "contains" matching.
| Filter | Example | Matches |
|---|---|---|
| Title contains | "new message" | Any notification with "new message" in the title |
| Body contains | "price drop" | Any notification with "price drop" in the body |
| Both | Title: "alert", Body: "urgent" | Only when both conditions are met |
| None (empty) | — | All notifications from the monitored site |
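As a sketch of these semantics, the matcher below applies the hostname check plus case-insensitive "contains" filters. The field names are illustrative, and the hostname comparison is simplified.
// Illustrative matcher for the rules described above; field names are assumptions,
// not the extension's internal data model.
interface TriggerFilter {
  url: string;             // hostname to monitor, e.g. "x.com"
  titleContains?: string;  // optional case-insensitive title filter
  bodyContains?: string;   // optional case-insensitive body filter
}

function matches(filter: TriggerFilter, host: string, title: string, body: string): boolean {
  if (!host.toLowerCase().includes(filter.url.toLowerCase())) return false;
  if (filter.titleContains && !title.toLowerCase().includes(filter.titleContains.toLowerCase())) return false;
  if (filter.bodyContains && !body.toLowerCase().includes(filter.bodyContains.toLowerCase())) return false;
  return true;  // with no filters configured, every notification from the site matches
}
Note that an empty filter set matches every notification from the monitored site, which is why adding at least one specific filter is recommended for busy sites.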
Cooldown & Rate Limiting
Each trigger has a configurable cooldown. After firing, it won't fire again until the cooldown expires — even if more matching notifications arrive. This prevents notification storms from burning through credits.
| Site Type | Recommended Cooldown |
|---|---|
| Chat apps (Slack, Discord) | 5–15 minutes |
| Social media (Twitter/X, LinkedIn) | 15–30 minutes |
| Price / stock alerts | 30–60 minutes |
| Daily digest triggers | 1440 minutes (24 hours) |
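The gate itself is simple: record when the trigger last fired and ignore matches until the cooldown has elapsed. For example, with a 15-minute cooldown a trigger that fires at 9:00 ignores further matches until 9:15. A minimal sketch (the field names are assumptions):
// Sketch of cooldown gating; "lastFiredAt" and "cooldownMinutes" are assumed names.
function shouldFire(lastFiredAt: number | null, cooldownMinutes: number, now = Date.now()): boolean {
  if (lastFiredAt === null) return true;                // never fired before
  return now - lastFiredAt >= cooldownMinutes * 60_000; // has the cooldown elapsed?
}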
Monitored Tab Groups
When triggers are active, monitored tabs are automatically grouped into a yellow "🔔 Monitored" tab group for visual identification. The group updates automatically as you open/close tabs matching trigger URLs.
- Enable a trigger → matching open tabs are auto-grouped
- Yellow tab group indicates active monitoring
- Close a tab → monitoring pauses for that tab
- Reopen the URL → monitoring resumes automatically with interceptor re-injection
Examples
── Twitter DM → Google Sheet ───────────────────────────────
Trigger: URL: x.com | Title filter: "direct message" | Cooldown: 15m
Workflow: "Go to twitter.com/messages, extract latest DM sender and text,
append as new row to DM Tracker Google Sheet"
── Slack Alert → Competitive Analysis ─────────────────────
Trigger: URL: app.slack.com | Body filter: "competitor launched" | Cooldown: 60m
Workflow: "Search Google News for the competitor name, extract top 5 headlines,
post summary to #competitive-intel"
── Amazon Price Drop → WhatsApp Alert ─────────────────────
Trigger: URL: amazon.com | Title filter: "price drop" | Cooldown: 30m
Workflow: "Go to my Amazon wishlist, extract items with reduced prices,
compare with budget sheet, send summary via WhatsApp"
── GitHub PR Review → Auto-Test ────────────────────────────
Trigger: URL: github.com | Body filter: "requested your review" | Cooldown: 5m
Workflow: "Open the PR link, extract the file changes summary,
log to my Code Reviews Google Sheet with timestamp"
Manual Testing
Test your trigger without waiting for a real notification. Open the Triggers dropdown in the side panel, find the trigger, and click "Test". The workflow executes immediately as if a notification matched — useful for verifying configuration before going live.
Limitations
- Monitored tabs must stay open — the interceptor needs an active page context
- Notifications created entirely within a Service Worker's push handler (without showNotification from the page context) may not be interceptable
- The extension patches both the constructor and SW registration methods, covering the majority of real-world sites (Twitter, Slack, YouTube, Gmail, etc.)
- Browser Triggers require Chrome to be running — for 24/7 coverage, use Cloud Webhooks or combine with Cloud Scheduling
Platform Compatibility
rtrvr.ai's /agent and /scrape endpoints return results directly in the HTTP response — no session management or async polling required. One POST, one response. For tasks over 60 seconds, add a webhookUrl for async delivery instead.
| Platform | HTTP Timeout | Compatibility | Recommended Approach |
|---|---|---|---|
| n8n | 100s (Cloud) | ✅ Excellent | Direct call works for all tasks |
| Make.com | Up to 300s | ✅ Excellent | Set timeout to 120s in advanced settings |
| Zapier | 30s (fixed) | ⚠️ Use webhooks | Add webhookUrl for async results |
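For platforms with generous timeouts (n8n, Make), a single synchronous request is all you need. The sketch below assumes a Node 18+ runtime with global fetch and shows a direct /agent call whose results are read straight from the HTTP response.
// Direct synchronous call to /agent: one POST, one response (no webhookUrl).
// Assumes Node 18+ (global fetch); replace the API key with your own.
const res = await fetch('https://api.rtrvr.ai/agent', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    input: 'Extract company info and contact details',
    urls: ['https://example.com'],
    response: { verbosity: 'final' },
  }),
});

const result = await res.json();  // results arrive directly in the HTTP response
console.log(result);
If a task might run past your platform's timeout, add a webhookUrl to the same request body and handle the result asynchronously instead.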
Inbound Webhooks (Zapier, Make, n8n → rtrvr)
Any service that can send an HTTP POST can trigger rtrvr.ai workflows. Use the MCP endpoint to control your logged-in browser, or the /agent endpoint for cloud browser execution.
MCP Endpoint (Your Browser)
POST https://mcp.rtrvr.ai
Headers:
Authorization: Bearer rtrvr_your_api_key
Content-Type: application/json
Body:
{
"tool": "planner" | "extract" | "act" | "crawl" | "replay_workflow" | ...,
"params": { ... tool-specific parameters ... },
"deviceId": "optional_device_id",
"webhookUrl": "https://your-server.com/callback" // optional: receive results
}
Agent Endpoint (Cloud Browser)
POST https://api.rtrvr.ai/agent
Headers:
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
Body:
{
"input": "Extract company info and contact details",
"urls": ["https://example.com"],
"webhookUrl": "https://your-server.com/callback",
"response": { "verbosity": "final" }
}
n8n Integration
n8n Cloud has a 100-second timeout — comfortably above most rtrvr task durations. Use either endpoint depending on whether you need your logged-in browser or a cloud browser.
// n8n → MCP (your logged-in browser)
// Headers: Authorization = Bearer rtrvr_your_api_key, Content-Type = application/json
{
"method": "POST",
"url": "https://mcp.rtrvr.ai",
"body": {
"tool": "planner",
"params": {
"user_input": "{{ $json.task_description }}",
"tab_urls": ["{{ $json.target_url }}"]
},
"webhookUrl": "{{ $node.Webhook.url }}"
}
}
// n8n → /agent (cloud browser)
{
"method": "POST",
"url": "https://api.rtrvr.ai/agent",
"sendHeaders": true,
"headerParameters": {
"parameters": [
{ "name": "Authorization", "value": "Bearer {{ $credentials.rtrvrApiKey }}" },
{ "name": "Content-Type", "value": "application/json" }
]
},
"sendBody": true,
"bodyParameters": {
"parameters": [
{ "name": "input", "value": "={{ $json.taskDescription }}" },
{ "name": "urls", "value": "={{ [$json.targetUrl] }}" }
]
}
}
Make (Integromat) Integration
- Add an "HTTP > Make a request" module
- URL: https://mcp.rtrvr.ai | Method: POST
- Headers: Authorization (Bearer token), Content-Type (application/json)
- Body type: Raw, Content type: JSON
- Request content: your tool + params JSON (see the example payload below)
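For example, the raw request content might look like the payload below; the task text and URL are placeholders.
// Example request content for the "Make a request" module (values are placeholders)
{
"tool": "extract",
"params": {
"user_input": "Extract product names and prices from this page",
"tab_urls": ["https://example.com/products"]
}
}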
Zapier Integration
Zapier has a fixed 30-second HTTP timeout. Use the webhook pattern: Zap 1 triggers rtrvr with a webhookUrl pointing to a Catch Hook in Zap 2, which receives and processes the results.
- Add a "Webhooks by Zapier" action to your Zap
- Select "POST" as the method
- Set URL to: https://mcp.rtrvr.ai
- Add headers: Authorization = Bearer rtrvr_your_api_key, Content-Type = application/json
- Set Data to your JSON payload (tool + params)
- Set webhookUrl to a Catch Hook URL in a second Zap
// Example: Extract data when a new row is added to Google Sheets
{
"tool": "extract",
"params": {
"user_input": "Extract the company name, employee count, and funding info",
"tab_urls": ["{{Google Sheets Row URL}}"]
},
"webhookUrl": "https://hooks.zapier.com/hooks/catch/123/abc/"
}
Available Tools
| Tool | Use Case | Execution |
|---|---|---|
| planner | Complex multi-step tasks from natural language | Local or cloud browser |
| extract | Structured data extraction with optional schema | Local or cloud browser |
| act | Page interactions (click, type, navigate) | Local or cloud browser |
| crawl | Multi-page crawling with extraction | Local or cloud browser |
| replay_workflow | Re-run a previous workflow by ID or URL | Local or cloud browser |
| get_browser_tabs | List open tabs | Local browser only |
| execute_javascript | Run JS in browser sandbox | Local browser only |
Outbound Webhooks (rtrvr → Your Server)
Include a webhookUrl in any API request to receive results when the workflow completes. rtrvr.ai will POST the full response to your endpoint.
Enabling Outbound Webhooks
curl -X POST "https://mcp.rtrvr.ai" \
-H "Authorization: Bearer rtrvr_xxx" \
-H "Content-Type: application/json" \
-d '{
"tool": "planner",
"params": {
"user_input": "Find pricing for iPhone 16 Pro on Apple.com",
"tab_urls": ["https://apple.com"]
},
"webhookUrl": "https://your-server.com/rtrvr-callback",
"webhookSecret": "your_hmac_secret"
}'
Webhook Payload Format
{
"event": "workflow.completed",
"timestamp": "2025-01-15T12:00:00.000Z",
"requestId": "req_abc123xyz",
"success": true,
"data": {
"taskCompleted": true,
"output": { ... },
"extractedData": [ ... ],
"creditsUsed": 5
},
"metadata": {
"tool": "planner",
"deviceId": "dj75mmaTWP0",
"executionTime": 15234,
"creditsRemaining": 9995
},
"originalRequest": {
"tool": "planner",
"params": { ... }
}
}
Error Payloads
{
"event": "workflow.failed",
"timestamp": "2025-01-15T12:00:00.000Z",
"requestId": "req_abc123xyz",
"success": false,
"error": {
"message": "Device offline: no available browser extensions",
"code": "DEVICE_UNAVAILABLE",
"details": { ... }
},
"metadata": { "tool": "planner", "executionTime": 1234 }
}
Verifying Webhook Signatures
If you provide a webhookSecret, rtrvr.ai signs the payload with HMAC-SHA256. Verify it to ensure authenticity:
// Express.js (TypeScript)
import crypto from 'crypto';
import express from 'express';

const app = express();

app.post('/rtrvr-callback', express.raw({ type: 'application/json' }), (req, res) => {
  const signature = (req.headers['x-rtrvr-signature'] as string) ?? '';
  const timestamp = (req.headers['x-rtrvr-timestamp'] as string) ?? '0';

  // Reject stale timestamps (> 5 minutes)
  if (Date.now() - parseInt(timestamp, 10) > 300000) {
    return res.status(400).json({ error: 'Timestamp too old' });
  }

  const payload = timestamp + '.' + req.body.toString();
  const expected = crypto
    .createHmac('sha256', process.env.RTRVR_WEBHOOK_SECRET!)
    .update(payload)
    .digest('hex');

  // timingSafeEqual throws if the buffers differ in length, so compare lengths first
  const sigBuf = Buffer.from(signature);
  const expBuf = Buffer.from(expected);
  if (sigBuf.length !== expBuf.length || !crypto.timingSafeEqual(sigBuf, expBuf)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  const data = JSON.parse(req.body.toString());
  // Process asynchronously — respond 200 immediately
  res.status(200).json({ received: true });
  processWebhook(data);
});
# Flask
import hmac, hashlib, time, os
from flask import Flask, request, jsonify

app = Flask(__name__)
WEBHOOK_SECRET = os.environ['RTRVR_WEBHOOK_SECRET']

@app.route('/rtrvr-callback', methods=['POST'])
def handle_webhook():
    signature = request.headers.get('X-Rtrvr-Signature', '')
    timestamp = request.headers.get('X-Rtrvr-Timestamp', '0')
    if abs(time.time() * 1000 - int(timestamp)) > 300000:
        return jsonify({'error': 'Timestamp too old'}), 400
    payload = f"{timestamp}.{request.data.decode()}"
    expected = hmac.new(WEBHOOK_SECRET.encode(), payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return jsonify({'error': 'Invalid signature'}), 401
    data = request.json
    return jsonify({'received': True})  # Process async
Retry Logic
Failed deliveries are retried with exponential backoff:
- Attempt 1: Immediate
- Attempt 2: After 5 seconds
- Attempt 3: After 30 seconds
- Attempt 4: After 2 minutes
- Attempt 5: After 10 minutes (final)
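Because a delivery can be retried even after your handler has processed it (for example, if your 200 response is lost in transit), the same event may arrive more than once. A simple defense is to deduplicate on requestId; the sketch below uses an in-memory set, which you would swap for a database or cache in production.
// Deduplicate webhook deliveries by requestId (in-memory sketch; use a
// persistent store such as Redis or your database in production).
const seenRequestIds = new Set<string>();

function handleDelivery(payload: { requestId: string }): void {
  if (seenRequestIds.has(payload.requestId)) {
    return;                     // duplicate delivery from a retry: ignore
  }
  seenRequestIds.add(payload.requestId);
  processWebhook(payload);      // your actual handler
}

function processWebhook(payload: unknown): void {
  // ... handle the event ...
}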
Common Patterns
Lead Enrichment Pipeline
New lead in CRM → rtrvr extracts company data → webhook returns enriched info → update CRM record.
// Trigger: New HubSpot contact
// Action: POST to https://api.rtrvr.ai/agent
{
"input": "Visit this company website and extract: company size, industry, tech stack, and key contacts",
"urls": ["{{contact.company_website}}"],
"webhookUrl": "https://hooks.zapier.com/catch/123/enrich/",
"response": { "verbosity": "final" }
}
// Webhook receives enriched data → Update CRM
Browser Trigger → Sheet Log (Zero Server)
The simplest pattern: monitor a site for notifications and log events to Google Sheets. No server, no API, no webhook endpoint needed.
- Create a workflow that extracts data from the site and appends to a Google Sheet
- Set up a Browser Trigger on that site with appropriate filters
- Configure sheet output to "Append to same sheet on each run"
- Every matching notification adds a row — building a running log automatically
Scheduled Price Monitoring
Cron schedule → rtrvr checks competitor prices → compare with previous data → alert if changed.
// Schedule: Daily at 9am via n8n Cron node
// Action: POST to https://api.rtrvr.ai/scrape
{
"urls": [
"https://competitor1.com/pricing",
"https://competitor2.com/pricing"
]
}
// Compare extracted prices with yesterday's data
// If changed → Send Slack/email notification
Browser Trigger + Outbound Webhook (Hybrid)
Flow:
1. Browser Trigger monitors slack.com for "deployment failed" notifications
2. Trigger fires → workflow extracts error details from the Slack thread
3. Workflow calls your server via rtrvr.ai API with webhookUrl set
4. Server receives error details → creates a Jira ticket automatically
Result: Slack notification → browser extraction → server-side ticket creation
No Slack API required — the browser does the heavy lifting
Authenticated Data Sync (MCP)
Your app triggers → rtrvr uses your logged-in browser via MCP → data synced to your database.
// Trigger: Webhook from your application
// Action: POST to https://mcp.rtrvr.ai
{
"tool": "extract",
"params": {
"user_input": "Export my order history from the last 30 days",
"tab_urls": ["https://vendor-portal.com/orders"]
},
"webhookUrl": "https://your-app.com/api/orders/sync"
}
// Your browser navigates using your login session
// Results sent to webhook → stored in database
Zapier → rtrvr → Zapier (Round-trip)
- Zap 1: New Google Form submission → POST to mcp.rtrvr.ai (include webhookUrl pointing to Zap 2)
- Zap 2: Catch Hook receives results → Add row to Google Sheets
Slack Command → rtrvr → Slack Message
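Slack requires slash commands to be acknowledged within 3 seconds, so the handler below responds immediately, starts the extraction, and passes Slack's response_url through webhookMetadata so the callback can post the final result. The Express setup and the extractUrlFromText helper are illustrative assumptions.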
import express from 'express';

const app = express();
app.use(express.urlencoded({ extended: true })); // Slack slash commands are form-encoded
app.use(express.json());                         // rtrvr webhook callbacks are JSON

app.post('/slack/commands', async (req, res) => {
  const { text, response_url } = req.body;
  res.status(200).json({ text: '🔄 Running extraction...' }); // Ack < 3s

  await fetch('https://mcp.rtrvr.ai', {
    method: 'POST',
    headers: { 'Authorization': 'Bearer rtrvr_xxx', 'Content-Type': 'application/json' },
    body: JSON.stringify({
      tool: 'extract',
      // extractUrlFromText: your own helper that pulls a URL out of the command text
      params: { user_input: text, tab_urls: [extractUrlFromText(text)] },
      webhookUrl: 'https://your-server.com/slack-callback',
      webhookMetadata: { response_url },
    }),
  });
});

app.post('/slack-callback', async (req, res) => {
  const { data, originalRequest } = req.body;
  const { response_url } = originalRequest.webhookMetadata;

  await fetch(response_url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text: `✅ Extracted: ${JSON.stringify(data.extractedData, null, 2)}`,
    }),
  });
  res.status(200).json({ received: true });
});
Choosing the Right Approach
| Approach | Trigger Signal | Best For | Availability |
|---|---|---|---|
| Browser Triggers | Push notification in a tab | Sites without APIs — social media, chat, SaaS dashboards | While Chrome is open |
| Schedules | Cron / interval timer | Periodic collection, monitoring, recurring reports | Chrome or 24/7 cloud |
| Inbound Webhooks | HTTP POST from external service | Zapier / Make / n8n, CI/CD, server-to-server | 24/7 cloud |
Best Practices
- Browser Triggers: keep monitored tabs open and use specific filters to avoid false positives
- Browser Triggers: set reasonable cooldowns — chatty sites burn credits fast without them
- Browser Triggers: test with the "Test" button before relying on a trigger for critical workflows
- Webhooks: always verify signatures in production
- Webhooks: respond 200 immediately, process async
- Webhooks: use webhookMetadata to pass through context you need in the callback
- General: implement idempotency — you may receive the same event twice on retries
- General: use HTTPS endpoints only for outbound webhooks (HTTP is rejected)