Engineering

Why Access Control Is Not Governance: Four Capabilities Every AI Security Platform Is Missing

Most AI security tools stop at "is this request allowed?" That answers one question. Governance answers four harder ones: who is this agent, has its behavior changed, can we prove the audit trail is intact, and can we demonstrate compliance to an auditor?

PromptGuard
4 min read
AI Governance · Agent Identity · Behavioral Drift · Audit Trail · Compliance

Why Access Control Is Not Governance

Every AI security platform on the market can answer one question: is this request allowed?

That's access control. It's necessary but insufficient.

Governance answers four harder questions that no platform — until now — has actually implemented:

  1. Who is this agent? Not "what agent_id string was passed" but "can we cryptographically verify this agent's identity?"
  2. Is this agent behaving normally? Not "is this individual request safe" but "has this agent's overall behavior pattern changed?"
  3. Has anyone tampered with the audit trail? Not "do we have logs" but "can we mathematically prove no log entry has been modified, inserted, or deleted?"
  4. Can we prove compliance to an auditor? Not "here's a data dump" but "here's a narrative that maps controls to evidence."

These aren't theoretical gaps. They're the difference between "we have security tooling" and "we can pass an audit."

The Identity Problem

Most platforms accept a free-form agent_id string. Anyone with the project API key can claim to be any agent. If your customer support bot and your internal admin bot share an API key, there's nothing stopping a compromised prompt from impersonating the admin bot.

PromptGuard now supports verified agent credentials. Each agent is registered and receives a cryptographic secret (pgag_...) that authenticates every request. The secret is bcrypt-hashed and stored; only the prefix is visible. Requests without credentials still work — backward compatibility is non-negotiable — but verified requests get explicit identity confirmation in every audit log entry.
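The storage pattern described above can be sketched in a few lines. This is an illustrative stand-in, not PromptGuard's actual code: the function names are invented, and PBKDF2 (from Python's standard library) substitutes for bcrypt so the example is self-contained.

```python
import hashlib
import hmac
import secrets

def issue_credential():
    """Generate a pgag_ secret; return (secret, salt, stored_hash).
    Only the salt and hash are persisted -- the plaintext secret is
    shown to the caller once, and only its prefix is ever displayed."""
    secret = "pgag_" + secrets.token_urlsafe(32)
    salt = secrets.token_bytes(16)
    # PBKDF2 stands in for bcrypt here; use a real bcrypt library in production.
    stored = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return secret, salt, stored

def verify_credential(presented, salt, stored):
    digest = hashlib.pbkdf2_hmac("sha256", presented.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored)  # constant-time comparison

secret, salt, stored = issue_credential()
print(secret[:5])                                     # pgag_
print(verify_credential(secret, salt, stored))        # True
print(verify_credential("pgag_wrong", salt, stored))  # False
```

The constant-time comparison matters: a naive `==` on hashes can leak timing information to an attacker probing credentials.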

The Drift Problem

An agent that normally calls search 60% of the time and calculator 40% of the time is predictable. If it suddenly starts calling send_email and http_post exclusively, something has changed.

Per-request validation will see each individual call as safe. Only stateful behavioral analysis catches the pattern shift.

We freeze a behavioral baseline after sufficient observations and compare ongoing behavior using Jensen-Shannon divergence — symmetric, bounded 0–1, no division-by-zero edge cases. When the divergence exceeds the threshold, a BEHAVIORAL_DRIFT alert fires with the exact tools that changed and by how much.
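The drift check can be sketched as follows. The function names and the 0.3 threshold are illustrative assumptions, not PromptGuard's API; the divergence math itself is standard Jensen-Shannon in bits (log base 2), which is what gives the 0 to 1 bound.

```python
from math import log2

def js_divergence(p, q):
    """Jensen-Shannon divergence in bits: symmetric, bounded [0, 1],
    and defined even when a tool appears in only one distribution."""
    tools = set(p) | set(q)
    m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in tools}
    def kl(a):
        # Terms with a[t] == 0 contribute nothing, so no division by zero.
        return sum(a.get(t, 0.0) * log2(a.get(t, 0.0) / m[t])
                   for t in tools if a.get(t, 0.0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

def normalize(counts):
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

baseline = normalize({"search": 60, "calculator": 40})  # frozen baseline
current = normalize({"send_email": 30, "http_post": 20})
divergence = js_divergence(baseline, current)
print(round(divergence, 3))  # 1.0 -- disjoint tool sets give maximal drift
if divergence > 0.3:         # illustrative threshold
    print("BEHAVIORAL_DRIFT alert")
```

Comparing the two distributions also tells you exactly which tools moved: any tool whose probability changed between baseline and current contributes to the divergence, so the alert can name them.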

The Tamper Problem

Most audit systems hash individual events for integrity. But if you delete an event from the middle of the log, the remaining events still verify fine — their individual hashes are intact.

A hash chain solves this. Each event's SHA-256 hash incorporates the previous event's hash. Delete or modify any event and the chain breaks. We expose a verification endpoint that walks the chain and reports exactly where (if anywhere) integrity was lost.
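A minimal sketch of such a chain, with illustrative field names (the real event schema is not shown here):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel previous_hash for the first event

def append_event(log, payload):
    previous_hash = log[-1]["hash"] if log else GENESIS
    # Hash over both the payload and the predecessor's hash.
    body = json.dumps({"payload": payload, "previous_hash": previous_hash},
                      sort_keys=True)
    log.append({"payload": payload,
                "previous_hash": previous_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log):
    """Walk the chain; return the index of the first broken link, or None."""
    expected_prev = GENESIS
    for i, event in enumerate(log):
        body = json.dumps({"payload": event["payload"],
                           "previous_hash": event["previous_hash"]},
                          sort_keys=True)
        if (event["previous_hash"] != expected_prev
                or event["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return i
        expected_prev = event["hash"]
    return None

log = []
for action in ("login", "search", "export"):
    append_event(log, {"action": action})
print(verify_chain(log))  # None -- intact
del log[1]                # delete a middle event
print(verify_chain(log))  # 1 -- the chain breaks exactly there
```

Deleting the middle event leaves every individual hash valid, but the successor's `previous_hash` no longer matches its new predecessor, which is precisely the failure mode that per-event hashing misses.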

This is not a Merkle tree. It's simpler and sufficient: a linear chain where each link depends on its predecessor. The tradeoff is that verifying the chain is O(n), whereas a Merkle tree would also allow O(log n) inclusion proofs for individual events. But audit log verification is not a hot path: it runs during compliance reviews, not during request processing.

The Reporting Problem

Compliance auditors don't read JSON blobs. They need narratives: "During Q1, 47 agents held active credentials. 3 agents triggered drift alerts. The audit chain was verified intact across 128,000 events. 12 requests were blocked for PII exposure."

PromptGuard now generates governance reports with sections for agent identity, behavioral drift, audit chain integrity, security decisions, and an incident timeline. The reports map to SOC 2, EU AI Act, and NIST AI RMF frameworks.
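The gap between raw metrics and an auditor-ready narrative is mostly templating. A hypothetical sketch (metric names and wording are assumptions, not PromptGuard's report format):

```python
def governance_summary(metrics):
    """Render governance metrics as the kind of narrative an auditor reads."""
    parts = [
        f"During {metrics['period']}, {metrics['active_agents']} agents "
        f"held active credentials.",
        f"{metrics['drift_alerts']} agents triggered drift alerts.",
        f"The audit chain was verified intact across "
        f"{metrics['audit_events']:,} events.",
        f"{metrics['blocked_pii']} requests were blocked for PII exposure.",
    ]
    return " ".join(parts)

print(governance_summary({
    "period": "Q1",
    "active_agents": 47,
    "drift_alerts": 3,
    "audit_events": 128_000,
    "blocked_pii": 12,
}))
```

The real reports go further by attaching each statement to its evidence, mapping every claim to a control in SOC 2, the EU AI Act, or NIST AI RMF.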

What We Built

| Capability | Implementation | Why It Matters |
| --- | --- | --- |
| Agent Identity | AgentCredential model, pgag_ secrets, bcrypt storage, rotation endpoint | Moves from "anyone can claim any identity" to "cryptographically verified" |
| Behavioral Drift | Frozen baselines, Jensen-Shannon divergence, configurable thresholds | Catches pattern shifts that per-request validation misses |
| Tamper-Evident Audit | previous_hash chain, verify_chain() endpoint | Mathematical proof that no log entry was altered |
| Governance Reports | Narrative sections, framework mapping, incident timeline | Gives auditors what they actually need |

All four are live in production today. The implementation is additive — no existing API contracts changed, no breaking changes to SDKs, no migration required for existing customers.

Try It

All four capabilities are available now:

  • Register an agent: POST /api/v1/agent/register
  • Verify the audit chain: POST /dashboard/audit-log/verify-chain
  • Generate a governance report: POST /dashboard/compliance/governance-report
  • Behavioral drift: Automatic after 50+ tool calls per agent (configurable)

Read the full governance documentation for integration details.