Real-time detection of prompt injection, PII leaks, data exfiltration, toxicity, fraud & abuse, secret key leaks, malware, URL threats, jailbreak attempts, and tool injection. Ten security guardrails, out of the box.
ML-powered classification detects injection attempts, including instruction override, role manipulation, context breaking, and jailbreaks.
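To picture what such classification targets, the toy heuristic below flags a few well-known override phrasings. It is a deliberately simplified stand-in: the patterns are made up for this sketch, and a production ML classifier catches far subtler variants.

import re

# Illustrative phrases only; a trained classifier learns much subtler signals.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the |your )?system prompt",
    r"you are no longer an? (ai|assistant)",
    r"pretend (you are|to be)",
]

def looks_like_injection(prompt: str) -> bool:
    """Flag prompts that match known instruction-override phrasings."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)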
Detect and protect 39+ personally identifiable information (PII) entity types across 10+ countries, with checksum validation to reduce false positives.
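Checksum validation is what separates a real identifier from a random digit string. As an illustration, here is the standard Luhn check used for payment card numbers (a generic sketch of the algorithm, not PromptGuard's internal code):

def luhn_valid(number: str) -> bool:
    """Luhn checksum over a candidate card number, so random digit
    strings that merely look like card numbers are not flagged."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    if len(digits) < 13:  # shorter than any real card number
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0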
Detect and block attempts to extract system prompts, training data, or other sensitive information.
Block harmful, inappropriate, or policy-violating content with configurable severity thresholds.
Automatically detect and redact API keys, secrets, and credentials with entropy analysis before they reach the LLM.
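Entropy analysis works because real key material looks random. A minimal sketch using a Shannon-entropy score, with an illustrative length floor and threshold (production detectors combine this with known key-format patterns):

import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random key material scores high,
    natural-language words score low."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_like_secret(token: str, threshold: float = 4.0) -> bool:
    # The length floor and threshold here are illustrative,
    # not PromptGuard's tuned values.
    return len(token) >= 20 and shannon_entropy(token) > threshold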
Detect and block malicious, phishing, or unauthorized URLs in prompts and responses to prevent data exfiltration via external links.
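Conceptually, this means extracting URLs from the text and checking each host against threat intelligence. A minimal sketch with a hypothetical static blocklist (real systems query live reputation feeds):

import re
from urllib.parse import urlparse

# Illustrative blocklist; the domains are placeholders.
BLOCKED_DOMAINS = {"evil.example", "phish.example"}

def find_blocked_urls(text: str) -> list[str]:
    """Extract URLs from a prompt or response and flag blocklisted hosts."""
    urls = re.findall(r"https?://\S+", text)
    return [u for u in urls if urlparse(u).hostname in BLOCKED_DOMAINS]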
Identify and rate-limit automated abuse, bot attacks, and fraudulent behavior through behavioral analysis and request fingerprinting.
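One common building block for this is a sliding-window rate limit keyed by a request fingerprint. The sketch below shows the idea; the fingerprint fields, window, and limit are illustrative, not PromptGuard's actual values:

import hashlib
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 30  # illustrative limit

_history: dict[str, deque] = defaultdict(deque)

def fingerprint(api_key: str, ip: str, user_agent: str) -> str:
    """Derive a stable per-client identity from request attributes."""
    return hashlib.sha256(f"{api_key}|{ip}|{user_agent}".encode()).hexdigest()

def allow_request(fp: str) -> bool:
    """Sliding-window rate limit keyed by the fingerprint."""
    now = time.monotonic()
    window = _history[fp]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False
    window.append(now)
    return True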
Block malicious code, destructive commands, and potentially harmful instructions before they reach the model.
LLM-powered jailbreak detection catches sophisticated bypass attempts that evade traditional pattern matching, including multi-turn and encoded attacks.
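Encoded attacks hide the payload, for example in base64, so scanners that only read the surface text miss it. A minimal sketch of the decode-and-rescan idea (the length cutoff and regex are illustrative):

import base64
import binascii
import re

def decode_candidates(text: str) -> list[str]:
    """Find long base64-looking runs and try to decode them,
    so encoded payloads can be re-scanned as plain text."""
    decoded = []
    for match in re.finditer(r"[A-Za-z0-9+/=]{24,}", text):
        try:
            decoded.append(base64.b64decode(match.group(), validate=True).decode("utf-8"))
        except (binascii.Error, UnicodeDecodeError):
            continue  # not valid base64, or not text once decoded
    return decoded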
Detect and block attempts to inject malicious tool calls or manipulate agent tool usage through crafted prompts.
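A simple complementary defense on the agent side is an allowlist of tool names, checked before any model-proposed call is executed. A sketch assuming the raw JSON shape of OpenAI-style tool calls (the tool names are hypothetical):

# Hypothetical tools registered by your agent.
ALLOWED_TOOLS = {"search_docs", "get_weather"}

def filter_tool_calls(tool_calls: list[dict]) -> list[dict]:
    """Keep only model-proposed tool calls whose names are on the allowlist."""
    return [
        call for call in tool_calls
        if call.get("function", {}).get("name") in ALLOWED_TOOLS
    ]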
Every request passes through PromptGuard's security layer before reaching your LLM provider.
ML models and pattern matching analyze the request across ten security guardrails. Typical latency is ~0.15s, with complex analysis taking 1-3 seconds.
Malicious requests are blocked, logged, and alerted. Safe requests pass through unmodified.
from openai import OpenAI

# Just change your base URL - that's it!
client = OpenAI(
    base_url="https://api.promptguard.co/api/v1",
    api_key="your-openai-key",
    default_headers={
        "X-API-Key": "your-promptguard-key"
    }
)

# All requests are now protected
user_input = "Summarize our Q3 results."  # any end-user message
response = client.chat.completions.create(
    model="gpt-5-nano",
    messages=[{"role": "user", "content": user_input}]
)
# Malicious prompts are automatically blocked
# No code changes needed!

Start blocking prompt injection and other threats in under 2 minutes. No code changes required.
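When PromptGuard blocks a request, your client sees it as an API error rather than a normal completion. The exact status code and error body are PromptGuard specifics to confirm in the docs; the sketch below just shows generic handling with the OpenAI SDK's error types, continuing the client from the example above:

import openai

try:
    response = client.chat.completions.create(
        model="gpt-5-nano",
        messages=[{"role": "user", "content": user_input}]
    )
except openai.APIStatusError as e:
    # Assumption: a blocked request surfaces as a non-2xx response
    # from the proxy; check PromptGuard's docs for the exact contract.
    print(f"Request blocked or failed (HTTP {e.status_code})")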