Open-source vulnerability scanner for LLMs and agent workflows
Top 27.1% on sourcepulse
Agentic Security is an open-source toolkit for AI red teaming and vulnerability scanning, designed to protect Large Language Models (LLMs) from various attacks. It targets developers, researchers, and security teams seeking to proactively identify and mitigate risks in AI systems, offering robust defenses against jailbreaks, fuzzing, and multimodal threats.
How It Works
The tool employs an agentic approach, simulating sophisticated attack sequences and stress-testing LLMs with diverse inputs. It supports multimodal attacks (text, image, audio), multi-step jailbreaks, comprehensive fuzzing, API integration, and RL-based adaptive attacks. This methodology allows for proactive identification and mitigation of AI system vulnerabilities.
Quick Start & Requirements
pip install agentic_security
agentic_security
or python -m agentic_security
Highlighted Details
Maintenance & Community
The project is actively developed, with a roadmap including RL-powered attacks, massive dataset expansion, daily attack updates, and community modules. Contributions are welcome via pull requests on GitHub.
Licensing & Compatibility
Released under the Apache License v2.0, allowing for commercial use and integration with closed-source projects.
Limitations & Caveats
The project is described as "just getting started," with some features like RL-powered attacks and extensive dataset expansion still under development. Custom integration examples for multimodal probes are provided, but specific requirements for these may vary.
5 days ago
Inactive