Self-Hosting · Infrastructure · Privacy

Why We Self-Host Our Security Stack (And You Should Too)

Sending your data to a third party in the name of protecting that data is a contradiction. If you care about privacy, your security tools should run in your VPC.


There is a strange trend in the AI security market. Companies say: "We are worried about sending data to OpenAI." So they buy a security tool... and send all their data to that security vendor.

You have just traded one third-party risk for another.

The "Data Sovereignty" Problem

If you are a bank, a hospital, or a defense contractor, Data Residency is not a suggestion. It is law. You cannot pipe your customer data through a startup's cloud in US-East-1 just to check for prompt injection.

This is why we architected PromptGuard to be Self-Hostable First.

The Architecture of Control

We ship a Docker container.

  • It runs in Your VPC.
  • It talks to Your Redis.
  • It logs to Your Postgres.

No data ever leaves your infrastructure. We (PromptGuard Inc.) never see your prompts. We don't even know you're running it.

Latency Physics

There is also a physics problem. If your app runs in AWS us-west-2 and your security vendor runs in GCP us-central1, every request pays:

  1. TLS Handshake overhead.
  2. Cross-cloud network latency (30ms+).
  3. Serialization overhead.

If you run the security sidecar next to your application container (e.g., in the same Kubernetes pod), the latency is sub-millisecond.
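A sidecar arrangement looks like this in a Pod spec. The container and image names, port, and `GUARD_URL` variable are assumptions for illustration:

```yaml
# Sidecar sketch — names are hypothetical, not a shipped manifest.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-guard
spec:
  containers:
    - name: app
      image: myorg/app:1.0                   # your application
      env:
        - name: GUARD_URL
          value: http://localhost:9000       # same pod → loopback, sub-ms
    - name: promptguard
      image: promptguard/gateway:latest      # hypothetical image
      ports:
        - containerPort: 9000
```

Because both containers share the pod's network namespace, the app reaches the guard over loopback — no TLS handshake, no cross-cloud hop.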

"But maintaining it is hard!"

It used to be. But with modern tooling (Helm, Docker Compose), running a stateless API container is trivial. The "updates" are just pulling a new image tag.
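
Concretely, an upgrade with Compose can be as small as the commands below. The service name, tag, and Helm chart path are illustrative assumptions:

```shell
# Pin the new tag in docker-compose.yml
# (e.g. image: promptguard/gateway:v1.4.2), then roll forward:
docker compose pull promptguard    # fetch the new image
docker compose up -d promptguard   # recreate the container in place

# Or, if you deploy via Helm (chart path is hypothetical):
# helm upgrade promptguard ./chart --set image.tag=v1.4.2
```

Because the container is stateless, rolling back is the same two commands with the old tag.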

We believe the future of enterprise software is Bring Your Own Cloud (BYOC). SaaS is great for non-critical tools. But for the infrastructure that sits in the critical path of your user data? Own the metal.