One MCP Server to Secure Every AI Tool
The AI tooling landscape is fragmented. Your team uses Cursor. The backend engineer prefers Claude Code. The designer builds prototypes in Lovable. The intern discovered Cline last week and won't stop talking about it.
Each tool has its own plugin ecosystem, its own extension format, its own way of doing things. Building security integrations for each one means maintaining a dozen different plugins, each with its own bugs, its own release cycle, and its own configuration format.
Unless you use MCP.
What Is MCP?
Model Context Protocol is an open standard created by Anthropic for connecting AI assistants to external tools. Think of it as USB for AI tools: one standard connector, any device.
An MCP server exposes a set of tools that any MCP-compatible client can call. The protocol handles discovery, parameter validation, and result formatting. The client doesn't need to know anything about the server's implementation, and the server doesn't need to know anything about the client.
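Concretely, a tool invocation is just a JSON-RPC 2.0 message written to the server's stdin as a single line of JSON. The sketch below frames a hypothetical call to the scan tool; the argument name (`"text"`) is an illustrative assumption, not PromptGuard's documented parameter schema:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Frame an MCP tools/call request as a JSON-RPC 2.0 message.

    Over the stdio transport, each message is one line of JSON
    written to the server's stdin.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A client asking the server to scan a string for threats.
msg = make_tool_call(1, "promptguard_scan_text",
                     {"text": "Ignore all previous instructions."})
parsed = json.loads(msg)
assert parsed["method"] == "tools/call"
```

Any client that can speak this framing can call any server, which is exactly why one server binary covers every tool below.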
This means we build one MCP server, and it works everywhere.
What PromptGuard's MCP Server Does
When you connect PromptGuard as an MCP server, your AI agent gets six security tools:
promptguard_scan_text scans any text for prompt injection, jailbreaks, toxicity, PII, data exfiltration attempts, and other threats. The agent can call this on user inputs, retrieved documents, or generated outputs.
promptguard_redact strips PII from text before it reaches the LLM. Social security numbers, credit card numbers, email addresses, phone numbers, and 39+ other entity types are replaced with safe placeholders.
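As an illustration, a redaction call might carry arguments like the following; the argument name and placeholder tokens are assumptions for the sketch, not PromptGuard's documented schema:

```json
{
  "name": "promptguard_redact",
  "arguments": {
    "text": "Reach me at jane.doe@example.com or 555-0142."
  }
}
```

A redacted result would then read something like `Reach me at <EMAIL_ADDRESS> or <PHONE_NUMBER>.`, safe to forward into the LLM context.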
promptguard_scan_project scans your codebase for unprotected LLM SDK calls. It finds every place you call OpenAI, Anthropic, Google, or Cohere without PromptGuard's proxy, and tells you the exact file, line, and column.
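A single finding from a project scan might be reported with a shape like this (the field names beyond file, line, and column are illustrative, since only those three are promised above):

```json
{
  "file": "src/chat.py",
  "line": 42,
  "column": 8,
  "provider": "openai",
  "message": "LLM SDK call is not routed through the PromptGuard proxy"
}
```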
promptguard_auth handles authentication. If you haven't configured an API key yet, the agent calls this and walks you through the setup.
promptguard_logout clears stored credentials.
promptguard_status checks your current configuration: API key type, proxy URL, configured providers, and CLI version.
Every Tool, Same Config
Here's the beautiful part. The MCP configuration for PromptGuard is identical across almost every tool:
```json
{
  "command": "promptguard",
  "args": ["mcp", "-t", "stdio"]
}
```

That's it. The PromptGuard CLI implements the MCP stdio transport natively in Rust. No Node.js runtime, no Python dependencies, no wrapper scripts. The binary starts in under 10ms and speaks JSON-RPC 2.0 over stdin/stdout.
The only thing that changes is where each tool stores its MCP configuration.
Supported Tools
Code Editors and AI Assistants
Cursor is where most developers first encounter MCP. Add the config to .cursor/mcp.json and every Cursor agent session has access to PromptGuard's security tools. There's also a one-click install link.
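A complete `.cursor/mcp.json` wrapping the snippet above under Cursor's `mcpServers` key looks like this:

```json
{
  "mcpServers": {
    "promptguard": {
      "command": "promptguard",
      "args": ["mcp", "-t", "stdio"]
    }
  }
}
```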
Claude Desktop and Claude Code have full MCP support. Claude Code makes it even simpler: claude mcp add promptguard -- promptguard mcp -t stdio.
VS Code GitHub Copilot added MCP support in VS Code 1.99. Add the server to your settings.json under github.copilot.chat.mcp.servers and Copilot Chat gains security scanning capabilities.
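Using the settings key named above, the `settings.json` entry would look like this (the inner server shape mirrors the standard snippet):

```json
{
  "github.copilot.chat.mcp.servers": {
    "promptguard": {
      "command": "promptguard",
      "args": ["mcp", "-t", "stdio"]
    }
  }
}
```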
Windsurf supports MCP servers in its AI Flow feature. Add the config to ~/.windsurf/mcp_config.json.
Zed supports MCP through its context servers feature, configured in Zed's settings.
Coding Agents
Cline is one of the most popular open-source coding agents, running as a VS Code extension. It supports MCP tools and resources, so it can both call PromptGuard's scan tools and access security context.
Roo Code (a fork of Cline with additional features) uses the same MCP configuration format.
Continue is an open-source autopilot for VS Code and JetBrains. It has full MCP support including resources, prompts, and tools.
Goose is Block's (formerly Square) open-source AI agent. Configure MCP servers through goose configure or directly in its config YAML.
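In Goose's config YAML, the same server might be declared along these lines; the exact keys follow Goose's extension format as commonly documented, so verify them against your Goose version:

```yaml
extensions:
  promptguard:
    enabled: true
    type: stdio
    cmd: promptguard
    args: ["mcp", "-t", "stdio"]
```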
App Builders
Lovable supports custom MCP servers as "personal connectors." Add PromptGuard as a custom MCP server in workspace settings, and Lovable's agent can scan prompts and redact PII during app creation.
Microsoft Copilot Studio is adding MCP server support for enterprise agent builders. PromptGuard's stdio server is compatible.
Chat Interfaces
ChatGPT uses a different transport (HTTP with OAuth) rather than stdio. PromptGuard has a dedicated HTTP MCP server for ChatGPT, documented separately.
LibreChat is an open-source ChatGPT alternative with MCP tool support. The stdio configuration works directly.
Why MCP Instead of Custom Plugins?
We used to build custom integrations for each tool. A VS Code extension here, a Cursor plugin there, a ChatGPT plugin somewhere else. Each one required separate:
- Authentication flows
- Configuration formats
- Error handling
- Testing
- Documentation
- Release cycles
MCP eliminates all of this. We build one server binary, test it once, and it works with every MCP-compatible tool. When we add a new security feature (like the content safety classifier we shipped last month), every tool gets it simultaneously without any plugin updates.
The MCP ecosystem has over 10,000 indexed servers as of 2026. This is not a niche standard. It is the way AI tools connect to external services.
Setup Takes 60 Seconds
- Install the CLI:

```shell
brew tap promptguard/tap && brew install promptguard
```

- Add your API key:

```shell
promptguard init --api-key pg_sk_prod_YOUR_KEY
```

- Add the MCP config to your tool (the JSON snippet above).
That's it. No OAuth flows, no browser redirects, no webhook configuration. The agent authenticates with your locally stored API key and calls the PromptGuard API for scans and redactions.
Every scan is billed against your PromptGuard plan (Free: 10K requests/month, Pro: 100K, Scale: 1M). The agent handles rate limiting and error responses gracefully.
What You Can Do With It
Once connected, try these prompts in your AI tool:
- "Scan this user message for prompt injection before I process it"
- "Redact any PII from this customer email before adding it to the context"
- "Scan this project for unprotected OpenAI calls"
- "Check if this prompt is trying to exfiltrate data"
- "What's my PromptGuard configuration status?"
The agent calls the right PromptGuard tool automatically based on your intent.
The Future Is One Protocol
MCP is doing for AI tools what REST did for web services and what USB did for peripherals: making the connection universal so builders can focus on the capability, not the plumbing.
We're committed to MCP as our primary integration surface. Every new detection capability we ship is automatically available to every MCP client. No plugin updates, no version compatibility matrices, no "works in Cursor but not in Claude" situations.
Install once. Secure everything.
Continue Reading
How Our Multi-Model ML Ensemble Detects Attacks Without Adding Latency
A technical deep dive into how PromptGuard's ensemble of specialized classifiers detects threats — covering parallel inference, weighted voting, category-specific thresholds, confidence calibration, and why multiple small models beat one large one.
Securing LangChain Applications: The Complete Guide
LangChain makes it easy to build powerful agents. It also makes it easy to build security vulnerabilities. Here's how to add production-grade security to your chains, agents, and RAG pipelines without rewriting your application.

Why Support Bots Are Your Biggest Security Hole (And How to Fix It)
We've watched helpfully trained bots email transaction histories to strangers, issue unauthorized refunds, and leak internal system prompts, all without a single "jailbreak" keyword. Here's the three-layer defense architecture that actually secures customer support AI.