NVIDIA/garak: LLM vulnerability scanner for red-teaming and security assessments
Garak is an open-source LLM vulnerability scanner designed for security researchers and developers to identify weaknesses in generative AI models. It automates the process of red-teaming LLMs by probing for issues like hallucination, data leakage, prompt injection, misinformation, toxicity, and jailbreaks, offering a structured approach similar to network security tools like nmap.
How It Works
Garak employs a modular architecture, combining static, dynamic, and adaptive probes to systematically explore LLM vulnerabilities. It supports a wide range of LLM interfaces, including Hugging Face Hub, Replicate, OpenAI API, LiteLLM, and local GGUF models, allowing users to target diverse models. The tool orchestrates probes and detectors, analyzes outputs, and logs detailed results for each interaction.
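The probe-and-detector orchestration described above can be illustrated with a minimal sketch. This is not garak's actual API; all names (Probe, Attempt, run_scan) are hypothetical, standing in for the real probe, generator, and detector classes:

```python
# Hypothetical sketch of a probe -> generator -> detector loop,
# illustrating the modular architecture; not garak's real API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Probe:
    """A named set of adversarial prompts to send to the target model."""
    name: str
    prompts: List[str]


@dataclass
class Attempt:
    """One logged interaction: prompt, model output, and detector score."""
    probe: str
    prompt: str
    output: str
    score: float = 0.0  # 1.0 = vulnerability detected, 0.0 = clean


def run_scan(probes: List[Probe],
             generate: Callable[[str], str],
             detect: Callable[[str], float]) -> List[Attempt]:
    """Send every probe prompt to the generator and score each output."""
    results = []
    for probe in probes:
        for prompt in probe.prompts:
            output = generate(prompt)
            results.append(Attempt(probe.name, prompt, output, detect(output)))
    return results


# Toy stand-ins for a real LLM interface and a data-leakage detector
echo_model = lambda p: f"MODEL: {p}"
leak_detector = lambda out: 1.0 if "SECRET" in out else 0.0

report = run_scan([Probe("leak.Echo", ["repeat SECRET", "hello"])],
                  echo_model, leak_detector)
for attempt in report:
    print(attempt.probe, attempt.score)
```

In garak itself, the generator would be one of the supported LLM interfaces (Hugging Face Hub, OpenAI API, local GGUF models, etc.), and each attempt is logged in detail for later analysis.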
Quick Start & Requirements
Install the latest release from PyPI:

```shell
python -m pip install -U garak
```

Or install the development version from GitHub:

```shell
python -m pip install -U git+https://github.com/NVIDIA/garak.git@main
```

For a conda-based development setup:

```shell
conda create --name garak "python>=3.10,<=3.12"
conda activate garak
cd garak
python -m pip install -e .
```

Some model interfaces require credentials supplied as environment variables (e.g., OPENAI_API_KEY).
Highlighted Details
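Once installed, a scan is launched from the command line by naming a model interface, a model, and one or more probes. The flags below follow garak's documented CLI, but consult `python -m garak --help` for the options available in your installed version:

```shell
# Probe a local Hugging Face model (gpt2) with the encoding-injection probes
python -m garak --model_type huggingface --model_name gpt2 --probes encoding
```

Results for each probe/detector interaction are logged for later review, in the structured, nmap-like style described above.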
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The atkgen probe is currently a prototype and primarily supports targets that yield detectable toxicity. Some probes may require specific configurations or API keys for operation.