Run automated security assessments against your AI applications. 20+ attack vectors, detailed vulnerability reports, and security grades-all with one click.
Comprehensive library of prompt injection, jailbreak, PII extraction, and data exfiltration attacks constantly updated with new threats.
Run a full security assessment with a single API call. Get results in seconds, not hours.
Get an overall security score (A-F) based on how many attacks your policies block. Track improvements over time.
See exactly which attacks succeeded, which failed, and why. Get specific recommendations for improving your security.
Add your own attack vectors specific to your application. Test for domain-specific vulnerabilities.
Run red team tests as part of your deployment pipeline. Fail builds if security regresses.
LLM-powered adversarial search that intelligently mutates attacks to discover novel bypass vectors. Configurable iteration budget (1–1000) for depth vs. speed trade-offs.
Every autonomous run produces a graded report (A–F) with actionable recommendations. Track your security posture over time with the Attack Intelligence database.
Successful bypass patterns are stored and tracked across runs. Build institutional knowledge of your application's vulnerability surface.
Select attack categories (injection, jailbreak, PII, etc.) and set your security policies.
PromptGuard runs 20+ adversarial prompts against your configuration, testing every defense.
Get a detailed report with security grade, vulnerabilities found, and specific remediation steps.
from promptguard import PromptGuard
pg = PromptGuard(api_key="your-api-key")
# Run full security assessment
report = pg.redteam.run_all_tests()
print(f"Security Grade: {report.security_grade}")
print(f"Attacks Blocked: {report.blocked}/{report.total}")
print(f"Vulnerabilities: {len(report.vulnerabilities)}")
# Run autonomous red team agent (LLM-powered mutation)
auto_report = pg.redteam.run_autonomous(
iterations=100, # 1-1000: depth vs speed
)
print(f"Novel Attacks Found: {auto_report.novel_bypasses}")
print(f"Grade: {auto_report.security_grade}")
# Also available via CLI:
# promptguard redteam --autonomous --iterations 100Run your first security assessment in under 5 minutes. See exactly how secure your AI application really is.