Connect to all major providers and 50+ models through a single integration. Direct API, cloud gateways, enterprise platforms, and self-hosted, all with unified security and zero vendor lock-in.
Switch providers with a config change. Compare performance and cost across vendors in real time.
Keep your negotiated enterprise pricing. Store keys encrypted, rotate without code changes.
Use PromptGuard-managed credentials. No API keys to manage or rotate yourself.
Connect to local Ollama instances, vLLM deployments, or other self-hosted models. Complete data control and privacy.
Every provider, every model protected by the same threat detection, PII scanning, and jailbreak prevention.
First-class integration with OpenClaw, LangChain, LangGraph, CrewAI, Pydantic AI, OpenAI Agents SDK, and Vercel AI SDK.
Use GCP Service Account, AWS IAM, AWS Bedrock, or Azure AD credentials for enterprise SSO and secure access.
Configure your API keys or connect your cloud accounts securely in the PromptGuard dashboard.
Change one line of code in your application to point to the PromptGuard proxy URL.
We automatically route requests to the best provider, scan for threats, and cache responses.
import OpenAI from 'openai';

// 1. Point to PromptGuard
const client = new OpenAI({
  baseURL: 'https://api.promptguard.co/api/v1',
  apiKey: process.env.OPENAI_API_KEY, // Or any other provider key
  defaultHeaders: {
    'X-API-Key': process.env.PROMPTGUARD_API_KEY
  }
});

// 2. Just specify the model you want
const response = await client.chat.completions.create({
  // Want Anthropic instead? Just change the model name!
  // model: "claude-3-opus-20240229",
  model: "gpt-5-nano",
  messages: [{ role: 'user', content: 'Hello!' }]
});
// Security, caching, and observability are automatically applied.

Start routing to any AI provider today with built-in security, caching, and observability.
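The same call shape also works without the SDK, including for self-hosted models. A minimal sketch, assuming PromptGuard exposes local Ollama or vLLM models behind the same chat-completions endpoint under a provider-prefixed name (the "ollama/llama3" identifier and the exact endpoint path below are assumptions; check your dashboard for the real values):

```typescript
// Sketch under assumptions, not the official API: self-hosted models are
// assumed to be addressed through the same chat-completions endpoint using
// a provider-prefixed model name such as "ollama/llama3" (hypothetical).

type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Build the standard OpenAI-style request body; it is the same payload
// whether the model runs at a cloud provider or on your own hardware.
function chatRequestBody(model: string, messages: ChatMessage[]) {
  return { model, messages };
}

// Send the request through the PromptGuard proxy (assumed endpoint path).
async function askSelfHostedModel(
  prompt: string,
  promptGuardApiKey: string
): Promise<string> {
  const res = await fetch('https://api.promptguard.co/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': promptGuardApiKey,
    },
    body: JSON.stringify(
      chatRequestBody('ollama/llama3', [{ role: 'user', content: prompt }])
    ),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the payload never changes, switching from a self-hosted model back to a cloud provider is again just a different model string in `chatRequestBody`.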