
PCI-DSS for AI: Don't Let Your Chatbot Touch Credit Cards

The moment your AI agent sees a credit card number, your entire compliance scope explodes. Here's how to architect AI-powered financial services that keep PANs out of the LLM context, pass PCI audits, and actually work.


Here is the fastest way to fail a PCI audit: let your AI agent take payments.

We see this pattern in production with alarming frequency:

  1. User: "I want to update my billing."
  2. Bot: "Sure, what's your new card number?"
  3. User: "4111 1111 1111 1111, exp 12/28, CVV 123."

Congratulations. Your LLM provider, your vector database, your logging service, your chat history database, and your monitoring stack are now in scope for PCI-DSS Level 1 compliance. You have roughly 300 new controls to implement, a Qualified Security Assessor (QSA) to hire, and an annual audit to pass.

All because your chatbot asked for a card number in a text field.

Why LLMs and PANs Don't Mix

PCI-DSS (Payment Card Industry Data Security Standard) has a simple rule: any system that stores, processes, or transmits cardholder data is in scope. The scope isn't limited to databases. It includes every system in the data flow—every service that touches the PAN, every log that contains it, every network segment it traverses.

When a user types their card number into a chat interface, the PAN flows through:

  1. Your frontend (JavaScript, stored in DOM)
  2. Your API (HTTP request body)
  3. PromptGuard or any middleware (request inspection)
  4. The LLM provider (prompt context)
  5. The LLM provider's logs (inference logging)
  6. Your chat history database (conversation persistence)
  7. Your monitoring/logging stack (request traces)
  8. Your analytics pipeline (if you're tracking conversations)

Each of these systems is now in PCI-DSS scope. Each needs its own set of controls: encryption at rest, encryption in transit, access controls, audit logging, vulnerability scanning, penetration testing.

The compliance cost of letting a chatbot see one card number can exceed $100,000/year in audit and remediation costs.

The Zero-Trust Architecture for PANs

The only winning move is not to play. Primary Account Numbers should never enter your LLM context. Not encrypted. Not tokenized. Not at all.

Strategy 1: The Regex Firewall

Before any user message reaches your LLM, it must pass through a PAN detector.

PromptGuard's PII detector includes credit card detection with Luhn validation, a mathematical checksum that filters out most random number sequences that merely look like card numbers. We match Visa (4xxx), Mastercard (51-55xx and the 2221-2720 range), Amex (34/37xx), and Discover (6011/65xx) formats with valid card lengths (13, 14, 15, 16, 19 digits).
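The Luhn checksum itself is worth understanding, because it is what separates "16 digits that happen to appear in a message" from "almost certainly a card number." A minimal implementation of the standard algorithm (not PromptGuard's internal code):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:  # shortest valid PAN length
        return False
    checksum = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0
```

A random 16-digit string has only a 1-in-10 chance of passing this check, which is why Luhn validation cuts false positives by an order of magnitude.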

When a Luhn-valid card number is detected, you have two options:

Redact and continue: Replace the PAN with [CARD_REDACTED] and forward the sanitized message to the LLM. The user's intent ("update billing") is preserved, but the card number never reaches the LLM provider.

User input: "My new card is 4111 1111 1111 1111"
Redacted:   "My new card is [CARD_REDACTED]"
LLM sees:   "My new card is [CARD_REDACTED]"

Block and redirect: Return a message directing the user to a secure payment form:

"I cannot accept card numbers in chat. Please use the secure
payment form: [link to Stripe/Adyen checkout]"
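Either policy can be implemented as a small pre-LLM filter. A sketch of the redact-and-continue path, with an illustrative candidate regex and placeholder (the actual PromptGuard rules are more extensive):

```python
import re

# Candidate PANs: 13-19 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

def sanitize(message: str) -> str:
    """Replace Luhn-valid card numbers with a placeholder before the LLM sees them."""
    def replace(m: re.Match) -> str:
        return "[CARD_REDACTED]" if luhn_valid(m.group()) else m.group()
    return CARD_RE.sub(replace, message)
```

Only Luhn-valid matches are redacted, so order numbers and tracking IDs that happen to be long digit strings pass through untouched.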

Strategy 2: The Secure Form Pattern

Instead of handling payment data in the chat flow, the AI should redirect to a PCI-compliant payment widget (Stripe Elements, Adyen Drop-in, Braintree Hosted Fields).

User: "I want to update my card."
Bot:  "I'd be happy to help. Click the secure link below to update
       your payment method. Your card details are processed directly
       by our payment provider—they never pass through our servers."
       [UI Component: Stripe Elements iframe]

The card data goes directly from the user's browser to the payment processor. Your backend never sees it. Your LLM never sees it. Your PCI scope stays minimal.

This is the architecture that every fintech team should use. The AI handles the conversation. The payment processor handles the money. They never mix.
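One way to wire this up is to have the backend answer a card-update intent with a structured action rather than free text, which the frontend renders as the hosted payment widget. A sketch under assumed names (the function, URL scheme, and action type are all illustrative):

```python
def handle_update_card_intent(user_id: str) -> dict:
    """Respond to a card-update intent with a hosted checkout link.

    The bot never asks for card digits in chat; the frontend renders
    the action as an embedded payment widget (e.g. Stripe Elements).
    The URL scheme here is illustrative.
    """
    return {
        "message": (
            "Click the secure link below to update your payment method. "
            "Your card details go directly to our payment provider."
        ),
        "action": {
            "type": "open_payment_form",
            "url": f"https://pay.example.com/update-card?session={user_id}",
        },
    }
```

The key design choice: the LLM generates the conversational text, but the action payload comes from deterministic backend code, so there is no path where model output can request or echo card data.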

Strategy 3: Scoped Tool Access

If your support bot needs to answer billing questions—"Why was I charged $50?" or "When does my subscription renew?"—it needs access to transaction data. But it doesn't need full card numbers.

Design your tools to return only what's needed:

# BAD: Tool returns full card data
def get_payment_method(user_id: str):
    return {
        "pan": "4111111111111111",   # full PAN puts the LLM in PCI scope
        "exp": "12/28",
        "cvv_hash": "abc",           # storing CVV in any form violates PCI-DSS
        "billing_address": "123 Main St..."
    }

# GOOD: Tool returns only display-safe data
def get_payment_method(user_id: str):
    return {
        "last4": "1111",
        "brand": "Visa",
        "exp_month": 12,
        "exp_year": 2028
        # No PAN, no CVV, no full address
    }

The LLM can tell the user "Your Visa ending in 1111 expires December 2028" without ever seeing the full card number. The tool's API surface is the security boundary—not the LLM's system prompt.

Beyond PANs: The Full Financial PII Stack

Credit cards aren't the only financial data that matters. Our PII detector covers:

| Data Type | Detection Pattern | PCI Relevance |
|---|---|---|
| Credit cards (Luhn-validated) | Visa, MC, Amex, Discover formats | Yes (PAN) |
| IBAN numbers | 2-letter country + 2 check digits + up to 30 alphanumeric | Yes (bank account) |
| SSN | XXX-XX-XXXX format | Yes (identity verification) |
| Driver's license | State-format alphanumeric | Yes (KYC/identity) |
| US ZIP codes | 5-digit or 5+4 format | Moderate (address component) |
All detected on the strict preset, with Luhn validation for credit cards to minimize false positives.
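To make the table concrete, here are simplified illustrative patterns for a few of these types. These are sketches, not PromptGuard's production rules (real IBAN validation, for instance, also checks country-specific lengths and the mod-97 checksum):

```python
import re

# Simplified, illustrative detection patterns -- not production rules.
FINANCIAL_PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "zip": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of any financial PII types found in the text."""
    return [name for name, pattern in FINANCIAL_PII_PATTERNS.items()
            if pattern.search(text)]
```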

Prompt Injection as a Compliance Breach

Here's something most compliance teams haven't considered: a successful prompt injection against a financial bot is a reportable compliance event.

If an attacker tricks your bot into revealing another customer's transaction history, that's not just "bad AI behavior." It's a data breach under:

  • GLBA (Gramm-Leach-Bliley Act): Financial institutions must protect customer financial data
  • PCI-DSS Requirement 7: Restrict access to cardholder data to need-to-know
  • State breach notification laws: Unauthorized access to financial data triggers mandatory notifications

This is why tenant isolation in RAG is critical for financial applications. Your vector search must strictly filter by user_id or account_id at the query level. Never rely on the LLM to "only show this user's data"—the LLM doesn't understand access control.
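The principle can be shown with a minimal in-memory stand-in for a vector store (illustrative, not a real client): the tenant filter is part of the query itself, applied in the data layer before any results exist for the LLM to see.

```python
class TenantScopedStore:
    """Minimal in-memory stand-in for a vector store that enforces a
    tenant filter at query time. Illustrative only."""

    def __init__(self):
        self.records = []  # each: {"user_id": ..., "text": ...}

    def add(self, user_id: str, text: str):
        self.records.append({"user_id": user_id, "text": text})

    def search(self, user_id: str, query: str) -> list[str]:
        # The filter runs in the data layer, before ranking or retrieval.
        # The LLM only ever sees rows that already passed it.
        return [r["text"] for r in self.records
                if r["user_id"] == user_id and query.lower() in r["text"].lower()]
```

With a real vector database, the same idea means passing a mandatory metadata filter (e.g. on `user_id`) with every similarity query, set by backend code the model cannot influence.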

The Audit-Ready Checklist

If you're building AI for financial services, run this check:

  • PAN detection is active with Luhn validation on all user inputs
  • Card data redirects to a PCI-compliant payment widget (Stripe, Adyen)
  • Tool APIs return minimal data (last 4, brand, expiry—never full PAN)
  • Tenant isolation is enforced at the database/vector store level
  • Audit logs capture every security decision with timestamps and event IDs
  • Output scanning catches PII leakage in bot responses
  • Zero retention mode is enabled if you don't need to store conversation content

PromptGuard's support_bot:strict preset with zero retention mode enabled gives you most of these controls out of the box. PII is redacted before reaching the LLM, outputs are scanned for data leakage, every decision is logged with a trace ID, and no prompt content is stored in event records.

Conclusion

PCI audits are painful. Don't make them harder by putting a probabilistic, hallucinating robot in the middle of your payment flow.

The architecture is simple: the AI handles the conversation, the payment processor handles the money, and a PII firewall ensures the two never contaminate each other.

Keep the card data out of the chat. Your auditor will thank you.