Automatically detect and redact PII, API keys, and sensitive information before it reaches your LLM. Synthetic data replacement keeps context intact.
Email, phone, SSN, credit cards, addresses, names, dates of birth, passport numbers, and more.
Replace real PII with realistic synthetic data to preserve context and model performance.
Automatically detect and block API keys, tokens, and secrets from being sent to LLMs.
Add your own regex patterns to detect domain-specific sensitive data like internal IDs or codes.
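As an illustration of what a custom pattern can catch, here is a minimal sketch using plain Python regexes. The pattern names and ID formats (e.g. "EMP-123456") are hypothetical examples, not part of any real PromptGuard configuration.

```python
import re

# Hypothetical domain-specific patterns; formats are invented for illustration.
CUSTOM_PATTERNS = {
    "INTERNAL_EMPLOYEE_ID": re.compile(r"\bEMP-\d{6}\b"),
    "ORDER_CODE": re.compile(r"\bORD-[A-Z]{2}\d{4}\b"),
}

def find_custom_matches(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs for every custom hit."""
    hits = []
    for name, pattern in CUSTOM_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits
```

Word boundaries (`\b`) keep the patterns from firing inside longer tokens, which keeps false positives down for short ID formats.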
Choose to redact, replace with synthetic data, or block requests containing sensitive information.
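The three modes can be sketched in a few lines. This is an illustrative stand-alone example, not the product's API; the email regex and the fixed synthetic address are simplifying assumptions.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def apply_policy(text: str, mode: str) -> str:
    """Illustrative handling for the three modes: redact, replace, block."""
    if mode == "redact":
        return EMAIL.sub("[EMAIL]", text)
    if mode == "replace":
        # A fixed synthetic address keeps the sentence readable; a real
        # system would generate varied, format-preserving values.
        return EMAIL.sub("jane.smith@example.com", text)
    if mode == "block":
        if EMAIL.search(text):
            raise ValueError("Request blocked: sensitive data detected")
        return text
    raise ValueError(f"unknown mode: {mode}")
```

Redaction is safest, replacement preserves the most context for the model, and blocking is the right choice when sensitive data should never leave the boundary at all.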
GDPR, CCPA, and HIPAA compliance with audit logs and data-handling documentation.
ML models and pattern matching identify 14+ types of PII in incoming prompts.
Sensitive data is redacted or replaced with synthetic equivalents before reaching the LLM.
All PII detections are logged for compliance. Original data never leaves your control.
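The detect-then-redact flow above can be sketched with regex-only detectors for a handful of PII types. The real service combines ML models with pattern matching, which this simplified sketch does not capture; the patterns below are illustrative.

```python
import re

# Illustrative regex detectors for a few common PII types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}-){3}\d{4}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace each detection with a placeholder; return the types found
    so they can be written to a compliance log."""
    detections = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            detections.append(label)
            text = pattern.sub(f"[{label}]", text)
    return text, detections
```

Only the placeholder-bearing text goes on to the LLM; the detection labels (never the raw values) are what a compliance log would record.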
from openai import OpenAI

# Assumes requests from this client are routed through PromptGuard
client = OpenAI()

# User input with sensitive data
user_input = """
Contact me at john.doe@company.com or 555-123-4567.
My SSN is 123-45-6789 and credit card is 4111-1111-1111-1111.
"""
# PromptGuard automatically protects
response = client.chat.completions.create(
    model="gpt-5-nano",
    messages=[{"role": "user", "content": user_input}],
)
# What the LLM sees:
# "Contact me at [EMAIL] or [PHONE].
# My SSN is [SSN] and credit card is [CREDIT_CARD]."
# Or with synthetic replacement:
# "Contact me at jane.smith@example.com or 555-987-6543..."

Stop PII from leaking to LLMs. Automatic detection and protection in every request.