ZenGuard AI enables AI developers to integrate production-level, low-code LLM (Large Language Model) guardrails into their generative AI applications effortlessly. With ZenGuard AI, ensure your application operates within trusted boundaries, is protected from malicious attacks and maintains user privacy without compromising on performance.
- Prompt Injection Detection: Identifies and mitigates attempts to manipulate, exfiltrate proprietary data and insert malicious content to/from models and RAG systems.
- Jailbreak Detection: Identifies and mitigates attempts to manipulate model/app outputs.
- Personally Identifiable Information (PII) Detection: Protects user data privacy by detecting and managing sensitive information.
- Allowed Topics Detection: Enables your model/app to generate content within specified, permissible topics.
- Banned Topics Detection: Prevents the model from producing content on prohibited subjects.
- Keywords Detection: Allows filtering and sanitization of your application's requests and responses or content generation based on specific keywords.
Start by installing ZenGuard package:
pip install zenguard
Jump into our Quickstart Guide to easily integrate ZenGuard AI into your application.
Test the capabilities of ZenGuard AI in our ZenGuard Playground. It's available to start for free to understand how our guardrails can enhance your GenAI applications.
A more detailed documentation is available at docs.zenguard.ai.
You can run pentest against both ZenGuard AI and (optionally) ChatGPT.
Clone this repo and install requirements.
Run pentest against ZenGuard AI:
export ZEN_API_KEY=your-api-key
python tests/pentest.py
Run pentest against both ZenGuard AI and ChatGPT:
export ZEN_API_KEY=your-api-key
export OPENAI_API_KEY=your-openai-api-key
python tests/pentest.py
Note that we always are running the pentest against the most up-to-date model. Currently, gpt-4-0125-preview
Book a Demo or just shoot us an email to [email protected].
Developed with ❤️ by https://zenguard.ai/