
From Alert to Action: Setting Up Real-Time Threat Notifications
Your security tool just blocked a prompt injection attempt against your production support bot. Do you know about it?
If the answer is "I'll check the dashboard tomorrow morning"—you have a visibility problem.
Security events are time-sensitive. A blocked injection might be a one-off from a curious user. Or it might be the first probe in a systematic attack. The difference is in the pattern, and you can only see the pattern if you know about each event as it happens.
PromptGuard supports two alerting channels: email and webhooks. This guide focuses on webhooks, which give you real-time notifications with structured data you can route, filter, and act on.
The Webhook Payload
When PromptGuard blocks a request (or detects a threat with high confidence), it fires a webhook to your configured URL. The payload is JSON and designed to be Slack-compatible out of the box:
```json
{
  "event": "threat_detected",
  "threat_type": "prompt_injection",
  "decision": "block",
  "confidence": 0.94,
  "reason": "Prompt injection detected: instruction override pattern with 94% confidence",
  "project": "Support Bot v3",
  "timestamp": "2026-01-25T14:32:17Z",
  "text": "[PromptGuard] Prompt Injection in *Support Bot v3* (94%, block)"
}
```

The `text` field is formatted for Slack's markdown syntax (bold with *asterisks*). If you point the webhook at a Slack incoming webhook URL, it just works; no transformation is needed.
Setting Up Slack Alerts (5 Minutes)
Step 1: Create a Slack Incoming Webhook
- Go to api.slack.com/apps
- Create a new app (or use an existing one)
- Enable "Incoming Webhooks"
- Add a webhook to your desired channel (e.g., #security-alerts)
- Copy the webhook URL: https://hooks.slack.com/services/T.../B.../xxx
Step 2: Configure in PromptGuard
In the PromptGuard dashboard:
- Navigate to your project settings
- Paste the Slack webhook URL in the "Webhook URL" field
- Save
That's it. The next time PromptGuard blocks a request for this project, you'll see a message in your Slack channel:
[PromptGuard] Prompt Injection in Support Bot v3 (94%, block)

Step 3: Customize the Channel
For production applications, we recommend separate channels for different severity levels:
- #security-critical: High-confidence blocks (>0.90), data exfiltration attempts, fraud detection
- #security-alerts: All blocks and high-confidence detections
- #security-log: All events, including redactions (high volume)
You can route to different channels using a webhook relay (like Zapier, n8n, or a simple serverless function) that inspects the confidence and threat_type fields and routes accordingly.
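A relay of this kind can be a few lines. This sketch picks a channel from the `confidence` and `threat_type` fields; the channel names follow the recommendation above and the thresholds are illustrative:

```python
def route_channel(payload: dict) -> str:
    """Pick a Slack channel based on confidence and threat type."""
    threat_type = payload.get("threat_type")
    confidence = payload.get("confidence", 0)
    decision = payload.get("decision")

    # High-confidence blocks and the most serious threat types go to critical
    if decision == "block" and (
        confidence > 0.90 or threat_type in ("data_exfiltration", "fraud")
    ):
        return "#security-critical"
    # All other blocks and high-confidence detections
    if decision == "block" or confidence > 0.90:
        return "#security-alerts"
    # Everything else, including redactions (high volume)
    return "#security-log"
```

Drop this into your serverless function or relay and forward the payload to the returned channel's webhook URL.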
Setting Up Custom Webhook Handlers
For non-Slack destinations—PagerDuty, Opsgenie, Discord, Microsoft Teams, or your own backend—you can build a handler that processes the webhook payload.
Example: Express.js Webhook Handler
```typescript
import express from 'express';

const app = express();
app.use(express.json());

app.post('/promptguard-webhook', (req, res) => {
  const { event, threat_type, decision, confidence, project, reason } = req.body;

  console.log(`[${project}] ${threat_type}: ${decision} (${confidence})`);

  // Route based on severity
  if (confidence >= 0.90 && decision === 'block') {
    // Critical: page the on-call engineer
    pagerduty.trigger({
      summary: `High-confidence ${threat_type} blocked in ${project}`,
      severity: 'critical',
      details: req.body
    });
  } else if (decision === 'block') {
    // Standard: log to security dashboard
    securityDashboard.log(req.body);
  }

  // Always acknowledge
  res.status(200).json({ received: true });
});
```

Example: Python Webhook Handler
```python
import logging

from fastapi import FastAPI, Request

logger = logging.getLogger("promptguard")
app = FastAPI()

@app.post("/promptguard-webhook")
async def handle_webhook(request: Request):
    payload = await request.json()

    threat_type = payload.get("threat_type")
    confidence = payload.get("confidence", 0)
    decision = payload.get("decision")
    project = payload.get("project")

    # Log all events
    logger.info(
        f"[{project}] {threat_type}: {decision} "
        f"(confidence: {confidence})"
    )

    # Escalate high-severity events
    if confidence >= 0.90 and decision == "block":
        await notify_security_team(payload)

    # Track patterns
    if threat_type == "prompt_injection":
        await increment_injection_counter(project)

    return {"status": "ok"}
```

Building a Threat Response Workflow
Raw alerts are useful. A structured response workflow is better.
Level 1: Automated Logging
Every webhook event should be logged to your security information system. This creates an audit trail for compliance and enables pattern analysis.
```python
# Log to your SIEM, database, or analytics platform
async def log_security_event(payload):
    await db.security_events.insert({
        "timestamp": payload["timestamp"],
        "project": payload["project"],
        "threat_type": payload["threat_type"],
        "decision": payload["decision"],
        "confidence": payload["confidence"],
        "reason": payload["reason"]
    })
```

Level 2: Pattern Detection
Individual events are less informative than patterns. Set up rules to detect:
Sustained attacks: 10+ blocks from the same project in 5 minutes suggests a coordinated attack, not a random user.
```python
async def check_attack_pattern(payload):
    recent_count = await db.security_events.count({
        "project": payload["project"],
        "timestamp": {"$gte": five_minutes_ago()},
        "decision": "block"
    })

    if recent_count >= 10:
        await escalate("Sustained attack detected", payload)
```

New threat types: If you see data_exfiltration for the first time in a project that usually only triggers prompt_injection, someone is escalating their attack sophistication.
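One way to sketch the new-threat-type check, using an in-memory list as a stand-in for whatever store your Level 1 logging writes to (the `db` API in the surrounding snippets is assumed infrastructure, not a PromptGuard API):

```python
def is_new_threat_type(payload: dict, prior_events: list[dict]) -> bool:
    """True if this project has never logged this threat_type before.

    `prior_events` stands in for the security-event store populated
    by Level 1 logging.
    """
    return not any(
        e["project"] == payload["project"]
        and e["threat_type"] == payload["threat_type"]
        for e in prior_events
    )
```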
Confidence drift: If average block confidence is dropping over time, attackers may be finding increasingly subtle evasion techniques. This is a signal to review your detection configuration.
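A minimal drift check compares the mean confidence of a recent window against a baseline window. The 0.05 threshold is an illustrative starting point, not a PromptGuard default:

```python
def confidence_drift(recent: list[float], baseline: list[float],
                     threshold: float = 0.05) -> bool:
    """Flag when average block confidence drops versus a baseline window.

    Both windows would be pulled from your Level 1 event log, e.g.
    this week's block confidences vs. last month's.
    """
    if not recent or not baseline:
        return False
    baseline_avg = sum(baseline) / len(baseline)
    recent_avg = sum(recent) / len(recent)
    return baseline_avg - recent_avg > threshold
```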
Level 3: Automated Response
For high-confidence, repeated attacks, consider automated responses:
```python
async def auto_respond(payload):
    if payload["confidence"] >= 0.95 and payload["decision"] == "block":
        # Check if this is a repeat offender
        recent_blocks = await count_blocks_from_same_source(
            payload, window_minutes=60
        )

        if recent_blocks >= 5:
            # Automatically enable rate limiting for this project
            await enable_strict_rate_limiting(payload["project"])
            await notify_team(
                f"Auto-enabled strict rate limiting for {payload['project']} "
                f"after {recent_blocks} high-confidence blocks in 1 hour"
            )
```

Email Alerts
In addition to webhooks, PromptGuard sends email alerts to the project owner. Email alerting is configured per-user with three levels:
| Level | Behavior |
|---|---|
| all | Email for every block and high-confidence detection |
| critical | Email only for high-confidence blocks (>0.90) |
| none | No email alerts (webhook only) |
For most teams, we recommend critical for email (to avoid alert fatigue) and webhooks for everything else (to maintain real-time visibility without inbox overload).
Monitoring Your Alerting Pipeline
Alerting systems have a meta-problem: how do you know your alerts are working?
Webhook delivery failures are logged but silent—if your webhook endpoint is down, you won't receive the alert telling you it's down.
We recommend:
- Health check your webhook endpoint independently (UptimeRobot, Pingdom, etc.)
- Send a test alert weekly to verify the full pipeline works
- Monitor alert volume — a sudden drop to zero might mean your webhook is broken, not that attacks stopped
- Set up a dead man's switch — if you don't receive at least one alert per day (even a test), escalate
Conclusion
The gap between "threat detected" and "threat responded to" is where damage happens. A blocked injection at 2 AM is a data point. A Slack message at 2 AM is actionable intelligence.
Set up webhooks. Route them to Slack. Build pattern detection. Automate responses for high-confidence repeated attacks. And test your alerting pipeline regularly—because the worst time to discover your alerts are broken is during an actual incident.
Security isn't just about blocking threats. It's about knowing you're blocking them.