Your RAG system retrieves documents from various sources. PromptGuard ensures retrieved content doesn't contain hidden malicious instructions.
Scan retrieved documents for hidden instructions, invisible text, and other indirect prompt injection techniques.
Validate document provenance and flag content that comes from untrusted or potentially compromised sources.
Clean retrieved content to remove potential threats while preserving useful information.
Protect your vector database from poisoning attacks that could inject malicious content.
Scan user queries for injection attempts before they reach your retrieval system; a short sketch of this and the preceding checks follows below.
Ensure LLM responses based on retrieved content don't leak sensitive information.
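Several of these checks can reuse the same scan call shown in the integration example below. The following sketch is illustrative rather than part of the PromptGuard SDK: `TRUSTED_SOURCES`, `is_trusted_source`, `scan_user_query`, and `scan_before_ingest` are hypothetical names, and it assumes that `pg.scrape.scan` accepts arbitrary text (user queries, raw documents) and that your documents carry a `source` field in their metadata.

```python
from promptguard import PromptGuard

pg = PromptGuard(api_key="your-api-key")

# Hypothetical allowlist of trusted sources; adapt it to your corpus.
TRUSTED_SOURCES = {"https://docs.example.com", "internal-wiki"}


def is_trusted_source(doc) -> bool:
    """Flag documents whose metadata points to an unknown or untrusted source."""
    return doc.metadata.get("source") in TRUSTED_SOURCES


def scan_user_query(query: str) -> bool:
    """Scan a user query for injection attempts before retrieval runs."""
    result = pg.scrape.scan(content=query, content_type="text/plain")
    if not result.is_safe:
        print(f"⚠️ Query blocked: {result.threats}")
    return result.is_safe


def scan_before_ingest(texts: list[str]) -> list[str]:
    """Scan raw documents before embedding so poisoned content never enters the vector DB."""
    clean = []
    for text in texts:
        result = pg.scrape.scan(content=text, content_type="text/plain")
        if result.is_safe:
            clean.append(text)
        else:
            print(f"⚠️ Skipping poisoned document: {result.threats}")
    return clean
```

Run `scan_user_query` before retrieval and `scan_before_ingest` when loading documents into the vector store; the retrieval-time scan in the example below then acts as a second line of defense.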
User query comes in. PromptGuard scans for injection attempts before retrieval.
Documents are retrieved from your vector DB. Each document is scanned for hidden threats.
Clean, validated context is passed to the LLM. Response is validated before returning.
A minimal retrieval-time integration with a LangChain retriever looks like this (the `retriever`, `llm`, and `query` objects are assumed to be set up elsewhere in your application):

```python
from promptguard import PromptGuard
from langchain.retrievers import VectorStoreRetriever

pg = PromptGuard(api_key="your-api-key")

# Retrieve documents relevant to the query
# (`retriever`, `llm`, and `query` come from your existing RAG setup)
documents = retriever.get_relevant_documents(query)

# Scan each retrieved document for hidden threats
safe_documents = []
for doc in documents:
    scan_result = pg.scrape.scan(
        content=doc.page_content,
        content_type="text/plain"
    )
    if scan_result.is_safe:
        safe_documents.append(doc)
    else:
        print(f"⚠️ Blocked document: {scan_result.threats}")

# Use only safe documents for generation
response = llm.generate(context=safe_documents, query=query)
```
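The example above stops at generation. To cover the final step of the workflow, the generated answer can be checked before it is returned to the user. This is a minimal sketch that reuses the `pg` client and scan call from the example; `validate_response` is a hypothetical helper, not an SDK method, and a dedicated output or data-leak check may be a better fit if PromptGuard provides one.

```python
def validate_response(answer_text: str) -> str:
    """Check a generated answer before it is returned to the user."""
    result = pg.scrape.scan(content=answer_text, content_type="text/plain")
    if not result.is_safe:
        # Threats or leaked content detected: withhold the raw answer.
        print(f"⚠️ Response withheld: {result.threats}")
        return "Sorry, I can't provide that answer."
    return answer_text
```

Stop indirect prompt injection attacks. Protect your knowledge base and your users.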