Real-time detection of prompt injection, PII leaks, data exfiltration, toxicity, fraud & abuse, API key leaks, and malware. Seven threat types covered out of the box.
ML-powered classification detects injection attempts including instruction override, role manipulation, context breaking, and jailbreak attempts.
Detect and protect personally identifiable information including SSNs, credit cards, phone numbers, email addresses, and other sensitive data.
Detect and block attempts to extract system prompts, training data, or other sensitive information.
Block harmful, inappropriate, or policy-violating content with configurable severity thresholds.
Identify and rate-limit automated abuse, bot attacks, and fraudulent behavior through behavioral analysis and request fingerprinting.
Automatically detect and redact API keys, secrets, and credentials before they reach the LLM.
Block malicious code, destructive commands, and potentially harmful instructions before they reach the model.
Every request passes through PromptGuard's security layer before reaching your LLM provider.
ML models and pattern matching analyze the request for seven threat types. Typical latency is ~0.15s, with complex analysis taking 1-3 seconds.
Malicious requests are blocked, logged, and alerted. Safe requests pass through unmodified.
from openai import OpenAI
# Just change your base URL - that's it!
client = OpenAI(
base_url="https://api.promptguard.co/api/v1",
api_key="your-openai-key",
default_headers={
"X-API-Key": "your-promptguard-key"
}
)
# All requests are now protected
response = client.chat.completions.create(
model="gpt-5-nano",
messages=[{"role": "user", "content": user_input}]
)
# Malicious prompts are automatically blocked
# No code changes needed!Start blocking prompt injection and other threats in under 2 minutes. No code changes required.