
Zero-Friction AI Security: From Vulnerable to Protected in One Command
A developer spent three hours manually updating 47 files to add security headers to their OpenAI calls. They missed file #48. File #48 was the one that got breached.
This is the Integration Gap—the distance between "we have a security tool" and "our entire application is actually protected." Security tools fail not because they're bad at detecting threats, but because they're hard to install consistently across a real codebase.
We obsessed over closing this gap. Here's every integration path we built, from "change one line" to "full SDK coverage," and why we believe security adoption is an engineering problem, not a security problem.
The One-Line Integration
The fastest path to protection is the proxy pattern. PromptGuard is wire-compatible with the OpenAI API, so you don't need a new SDK, a new client library, or a code refactor. You just change the URL.
Python (OpenAI SDK):

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://api.promptguard.co/api/v1/proxy",
    default_headers={"X-API-Key": os.environ["PROMPTGUARD_API_KEY"]},
)

# Everything else stays exactly the same
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": user_input}],
)
```

TypeScript (OpenAI SDK):
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://api.promptguard.co/api/v1/proxy',
  defaultHeaders: { 'X-API-Key': process.env.PROMPTGUARD_API_KEY },
});

// Everything else stays exactly the same
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: userInput }],
});
```

LangChain:
```python
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    base_url="https://api.promptguard.co/api/v1/proxy",
    default_headers={"X-API-Key": os.environ["PROMPTGUARD_API_KEY"]},
)
```

cURL:
```shell
curl https://api.promptguard.co/api/v1/proxy/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-API-Key: $PROMPTGUARD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```

The moment you change the URL, every request through that client is protected. Injection attempts are blocked. PII is redacted. Toxic content is filtered. And every response comes back with security metadata in the headers:
```
X-PromptGuard-Event-ID: evt_abc123
X-PromptGuard-Decision: allow
X-PromptGuard-Confidence: 0.02
```

No code changes. No try-catch wrappers. No middleware. One line.
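Those headers can feed directly into your logging or alerting. A minimal sketch of consuming them, assuming you already have the response headers as a mapping (the helper function is ours; the header names are the ones shown above):

```python
def parse_promptguard_headers(headers):
    """Extract PromptGuard security metadata from response headers."""
    return {
        "event_id": headers.get("X-PromptGuard-Event-ID"),
        "decision": headers.get("X-PromptGuard-Decision"),
        "confidence": float(headers.get("X-PromptGuard-Confidence", "0")),
    }

# Example using the header values shown above
meta = parse_promptguard_headers({
    "X-PromptGuard-Event-ID": "evt_abc123",
    "X-PromptGuard-Decision": "allow",
    "X-PromptGuard-Confidence": "0.02",
})
```

A low confidence score on an allowed request (like 0.02 here) is a useful signal that traffic is clean; a rising average is worth investigating even when nothing is blocked.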
The SDK Integration
For applications that need more control—scanning content before it reaches the LLM, validating agent tool calls, or integrating with custom pipelines—we ship SDKs for Python and TypeScript.
Python SDK
```shell
pip install promptguard-sdk
```

```python
import os

from promptguard import PromptGuard

pg = PromptGuard(api_key=os.environ["PROMPTGUARD_API_KEY"])

# Chat completions (proxied through PromptGuard)
response = pg.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": user_input}],
)

# Direct security scanning
scan = pg.security.scan(content=user_input, content_type="user_input")
if scan.blocked:
    print(f"Threat detected: {scan.reason} ({scan.confidence})")

# PII redaction
redacted = pg.security.redact(
    content="My SSN is 123-45-6789",
    pii_types=["ssn", "credit_card"],
)
# redacted.redacted == "My SSN is [SSN_REDACTED]"

# Agent tool validation
validation = pg.agent.validate_tool(
    agent_id="my-agent",
    tool_name="query_database",
    arguments={"sql": "SELECT * FROM users"},
    session_id="session-123",
)

# Red team testing
report = pg.redteam.run_all(target_preset="support_bot:strict")
print(f"Security Score: {report.score}/100")

# Secure web scraping
result = pg.scrape.url("https://example.com/article", extract_text=True)
```

TypeScript SDK
```shell
npm install promptguard-sdk
```

```typescript
import { PromptGuard } from 'promptguard-sdk';

const pg = new PromptGuard({ apiKey: process.env.PROMPTGUARD_API_KEY });

// Same API surface as Python
const response = await pg.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: userInput }],
});

const scan = await pg.security.scan({
  content: userInput,
  contentType: 'user_input',
});

const validation = await pg.agent.validateTool({
  agentId: 'my-agent',
  toolName: 'send_email',
  arguments: { to: 'user@example.com', body: '...' },
  sessionId: 'session-123',
});
```

Both SDKs support async operation, automatic retries (3 attempts with exponential backoff), configurable timeouts, and streaming responses.
Self-Hosted Deployment
For teams that need data sovereignty—or just don't want their prompts leaving their network—the entire PromptGuard stack runs in a single Docker Compose file.
```shell
git clone https://github.com/acebot712/promptguard
cd promptguard/deploy
docker-compose up -d
```

This starts five containers: the API server (FastAPI), the dashboard (Next.js), PostgreSQL, Redis, and Nginx for TLS termination. Once it's running, point your base_url at http://localhost:8080/api/v1/proxy instead of the hosted URL. Same integration, same one-line change, but nothing leaves your infrastructure.
Updates are a single command:
```shell
docker-compose pull && docker-compose up -d
```

No agents to install, no kernel modules to load, no custom DNS to configure. It's a container. It runs where containers run.
Environment-Based Configuration
For teams managing multiple environments, we recommend setting the PromptGuard URL via environment variables:
```shell
# .env.development
OPENAI_BASE_URL=https://api.openai.com/v1                        # Direct to OpenAI in dev

# .env.staging
OPENAI_BASE_URL=https://api.promptguard.co/api/v1/proxy          # Through PromptGuard

# .env.production
OPENAI_BASE_URL=https://promptguard.internal:8080/api/v1/proxy   # Self-hosted
```

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url=os.environ["OPENAI_BASE_URL"],
    default_headers={"X-API-Key": os.environ.get("PROMPTGUARD_API_KEY", "")},
)
```

Now your security posture is controlled by configuration, not code. Flip an environment variable, and every LLM call in your application is secured. Flip it back, and you're calling OpenAI directly. This is useful for debugging: if you suspect PromptGuard is causing an issue, you can bypass it instantly without a code change.
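The same pattern can be wrapped in a small helper so every service builds its client kwargs the same way (the helper is ours, not part of any SDK; it falls back to calling OpenAI directly when the variable is unset):

```python
import os

def llm_client_config(env=os.environ):
    """Build OpenAI client kwargs from environment configuration,
    defaulting to the direct OpenAI endpoint when nothing is set."""
    headers = {}
    if env.get("PROMPTGUARD_API_KEY"):
        headers["X-API-Key"] = env["PROMPTGUARD_API_KEY"]
    return {
        "base_url": env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "default_headers": headers,
    }

# client = OpenAI(**llm_client_config())
```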
Project-Level Configuration
Once traffic is flowing through PromptGuard, you configure security policies in the dashboard—not in your application code.
Use-case presets set baseline security for your application type:
- support_bot:strict - High PII protection, blocks credential sharing, strict injection detection
- code_assistant:moderate - Allows code patterns, blocks API keys, moderate injection detection
- rag_system:strict - Blocks data extraction, limits domain access, strict exfiltration detection
- creative_writing:permissive - Relaxed content filters, focuses on hate speech and violence only
Custom rules let you add domain-specific patterns without touching the codebase. A fintech app can add regex rules for proprietary account number formats. A healthcare app can flag specific drug names or procedure codes.
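The kind of pattern such a rule would match can be sketched with an ordinary regex (the account-number format here is hypothetical, purely to illustrate what a custom fintech rule might catch):

```python
import re

# Hypothetical proprietary format: "ACCT-" followed by exactly eight digits
ACCOUNT_NUMBER = re.compile(r"\bACCT-\d{8}\b")

def violates_account_rule(text):
    """True if the text contains something shaped like an account number."""
    return bool(ACCOUNT_NUMBER.search(text))
```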
Custom policies provide condition-based filtering with priority ordering. You can create policies that filter inputs, filter outputs, or transform content based on any combination of threat signals.
All of this is managed through the dashboard or API. Your application code doesn't change. Your deployment doesn't change. Your CI/CD pipeline doesn't change. Security is a configuration layer, not an application layer.
The Integration Gap, Closed
The most common reason security tools go unused isn't that they don't work. It's that they're too hard to install, too hard to configure, and too hard to debug when something goes wrong.
We built PromptGuard to eliminate each of those frictions:
- Install: Change one URL, or run a single docker-compose up.
- Configure: Use a preset, or build custom policies in the dashboard.
- Debug: Every decision includes an event ID, confidence score, and threat type in the response headers. Full audit logs in the dashboard.
Security that gets skipped is worse than no security at all—it gives you false confidence. The only security that matters is security that's actually deployed, across every LLM call, in every service, in every environment.
One line of code. That's the bar we set for ourselves. That's the bar we hit.