Web fuzzing tool using local LLMs for optimized discovery
This tool enhances web fuzzing by integrating local Large Language Models (LLMs) with ffuf for intelligent directory and file discovery. It's designed for security researchers and penetration testers seeking to uncover hidden endpoints and files more efficiently than traditional fuzzing methods.
How It Works
Brainstorm leverages LLMs via Ollama to analyze a target website's structure and generate contextually relevant suggestions for potential paths and filenames. It iteratively extracts initial links, uses the LLM to predict new paths, fuzzes these suggestions with ffuf, and refines its approach based on discovered content. This AI-driven approach aims to optimize the fuzzing process by focusing on more probable targets.
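The iterative loop described above can be sketched roughly as follows. This is a minimal illustration, not the tool's actual code: the prompt wording and the helper names (`parse_suggestions`, `suggest_paths`, `fuzz`) are assumptions, though the Ollama endpoint shown is its documented default.

```python
import json
import shlex
import subprocess
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def parse_suggestions(llm_output: str) -> list[str]:
    """Turn raw LLM output into a deduplicated list of candidate paths."""
    seen, paths = set(), []
    for line in llm_output.splitlines():
        candidate = line.strip().strip(",").lstrip("/")
        if candidate and candidate not in seen:
            seen.add(candidate)
            paths.append(candidate)
    return paths

def suggest_paths(model: str, known_paths: list[str]) -> list[str]:
    """Ask a local Ollama model for likely new paths (requires Ollama running)."""
    prompt = ("Known paths on the target:\n" + "\n".join(known_paths) +
              "\nSuggest more likely paths, one per line.")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_suggestions(json.load(resp)["response"])

def fuzz(wordlist: str, url: str) -> None:
    """Hand the generated wordlist to ffuf, as on the command line."""
    subprocess.run(shlex.split(f"ffuf -w {wordlist} -u {url}"), check=True)
```

Each round, newly discovered paths would be appended to `known_paths` and fed back into the prompt, so suggestions stay grounded in what the target actually serves.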
Quick Start & Requirements
Install the Python dependencies (pip install -r requirements.txt), ensure ffuf is in your PATH, and have Ollama running locally. Requirements: Python, ffuf, Ollama, and a downloaded Ollama model (e.g., qwen2.5-coder:latest). Run python fuzzer.py "ffuf -w ./fuzz.txt -u http://example.com/FUZZ" for standard fuzzing, or python fuzzer_shortname.py "ffuf -w ./fuzz.txt -u http://example.com/FUZZ" "BENCHM~1.PY" for shortname-based discovery.
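The shortname argument (e.g., "BENCHM~1.PY") is a Windows 8.3 short filename: the first six characters of the base name, a "~1" suffix, and a three-character extension. A simplified sketch of how such names are derived (not the tool's code; real 8.3 generation also strips illegal characters and bumps the ~1 counter on collisions):

```python
def shortname_83(filename: str) -> str:
    """Approximate the Windows 8.3 short name for a long filename (simplified)."""
    base, _, ext = filename.rpartition(".")
    if not base:  # no dot in the name
        base, ext = filename, ""
    base = base.replace(" ", "").upper()
    ext = ext.upper()[:3]          # extension truncated to three characters
    if len(base) > 8:
        base = base[:6] + "~1"     # first six characters plus the ~1 suffix
    return f"{base}.{ext}" if ext else base
```

For example, shortname_83("benchmark.py") yields "BENCHM~1.PY", the shortname shown in the command above; guessing a leaked 8.3 name like this lets the fuzzer recover the corresponding full filename.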
Highlighted Details
- Combines local LLMs with ffuf for optimized web fuzzing.
- Includes a dedicated script (fuzzer_shortname.py) for discovering legacy 8.3 filenames.
- A benchmarking script (benchmark.py) is available to compare different LLM models.
Maintenance & Community
The project is maintained by Invicti Security. Further community engagement details are not specified in the README.
Licensing & Compatibility
The project is released under the MIT License, permitting commercial use and integration with closed-source projects.
Limitations & Caveats
The effectiveness of the LLM-driven suggestions is dependent on the chosen model and its training data. The tool requires local setup of Ollama and specific LLM models, which can have significant resource requirements.