Run automated security assessments against your AI applications. 20+ attack vectors, detailed vulnerability reports, and security grades-all with one click.
Comprehensive library of prompt injection, jailbreak, PII extraction, and data exfiltration attacks constantly updated with new threats.
Run a full security assessment with a single API call. Get results in seconds, not hours.
Get an overall security score (A-F) based on how many attacks your policies block. Track improvements over time.
See exactly which attacks succeeded, which failed, and why. Get specific recommendations for improving your security.
Add your own attack vectors specific to your application. Test for domain-specific vulnerabilities.
Run red team tests as part of your deployment pipeline. Fail builds if security regresses.
Select attack categories (injection, jailbreak, PII, etc.) and set your security policies.
PromptGuard runs 20+ adversarial prompts against your configuration, testing every defense.
Get a detailed report with security grade, vulnerabilities found, and specific remediation steps.
from promptguard import PromptGuard
pg = PromptGuard(api_key="your-api-key")
# Run full security assessment
report = pg.redteam.run_all_tests()
print(f"Security Grade: {report.security_grade}")
print(f"Attacks Blocked: {report.blocked}/{report.total}")
print(f"Vulnerabilities: {len(report.vulnerabilities)}")
# Get detailed results
for result in report.results:
if not result.blocked:
print(f"⚠️ VULNERABLE: {result.test_name}")
print(f" Category: {result.category}")
print(f" Fix: {result.remediation}")Run your first security assessment in under 5 minutes. See exactly how secure your AI application really is.