LLM agentic framework for security researchers/pen-testers
This project provides a framework for ethical hackers and security researchers to leverage Large Language Models (LLMs) for penetration testing and vulnerability discovery, aiming to automate and accelerate security assessments. It targets security professionals seeking to integrate AI into their workflows, offering a concise, 50-line-of-code approach to building LLM-powered security agents.
How It Works
HackingBuddyGPT utilizes an agent-based architecture where LLMs are prompted to generate commands for execution on target systems. It supports various use cases, including Linux privilege escalation and web penetration testing, by abstracting LLM interactions, command execution, and logging. The framework emphasizes modularity, allowing users to easily define new agents and integrate different LLMs or execution environments.
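In practice that agent loop is: prompt the model for the next command, execute it, and append the output to the conversation so the model can plan the following step. The sketch below illustrates the pattern with the stock openai client; the prompts, model name, turn cap, and local subprocess execution are illustrative assumptions, not hackingBuddyGPT's actual abstractions.

```python
# Sketch of the prompt -> execute -> observe loop behind agents like this
# one. Everything here (prompts, model, local execution) is illustrative;
# it is NOT hackingBuddyGPT's actual API.
import subprocess

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "system",
     "content": "You are a pen-testing assistant working on an explicitly "
                "authorized target. Reply with exactly one shell command."},
    {"role": "user", "content": "Goal: escalate privileges on this Linux host."},
]

for step in range(5):  # hard cap on agent turns
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    cmd = reply.choices[0].message.content.strip()
    print(f"[{step}] running: {cmd}")

    # A real framework would run this over SSH against a target VM;
    # executing locally is only safe inside a disposable sandbox.
    result = subprocess.run(cmd, shell=True, capture_output=True,
                            text=True, timeout=30)

    # Feed the observation back so the next prompt sees what happened.
    history.append({"role": "assistant", "content": cmd})
    history.append({"role": "user",
                    "content": f"Output:\n{result.stdout}{result.stderr}"})
```

Capping the number of turns and sandboxing command execution are the two safety valves any such loop needs before it touches a real system.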
Quick Start & Requirements
After cloning the repository, run `pip install -e .`. Create a `.env` file with API keys and target credentials, then launch a use case with `python src/hackingBuddyGPT/cli/wintermute.py <UseCaseName>` (e.g., `LinuxPrivesc`).
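A first session might look like the sketch below. The repository URL and the `.env` variable names are assumptions here (the `LinuxPrivesc` use case comes from the project's own examples); check the upstream README for the authoritative values.

```bash
git clone https://github.com/ipa-lab/hackingBuddyGPT
cd hackingBuddyGPT
pip install -e .

# The variable names below are hypothetical placeholders; use the names
# documented by the project itself.
cat > .env <<'EOF'
OPENAI_API_KEY=sk-...        # LLM provider credentials
TARGET_HOST=192.168.56.101   # authorized test VM only
TARGET_USER=lowpriv
TARGET_PASSWORD=trustno1
EOF

python src/hackingBuddyGPT/cli/wintermute.py LinuxPrivesc
```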
Highlighted Details
Maintenance & Community
The project is led by contributors from TU Wien's IPA-Lab, with active participation from academics and professional pen-testers. A Discord server is available for community discussion.
Licensing & Compatibility
The project is released under a permissive license, suitable for commercial use and integration into closed-source projects.
Limitations & Caveats
Web testing use cases are in pre-alpha and under heavy development. Using commercial LLM APIs such as OpenAI's incurs costs, and users are responsible for monitoring their usage and the associated expenses. The project is experimental and provided "as-is."