Fuzzing framework for LLM API integrations
Top 89.1% on sourcepulse
LLMFuzzer is an open-source fuzzing framework designed for testing Large Language Models (LLMs), particularly their API integrations within applications. It targets security enthusiasts, pentesters, and cybersecurity researchers aiming to discover and exploit vulnerabilities in AI systems, streamlining the testing process.
How It Works
LLMFuzzer employs a modular architecture to support various fuzzing strategies for LLMs. It focuses on testing LLM API integrations by sending crafted inputs to an LLM endpoint and analyzing the responses. The framework is designed for extensibility, allowing users to integrate new attack vectors and comparison methods.
Quick Start & Requirements
pip install -r requirements.txt
llmfuzzer.cfg
with LLM API endpoint details (URL, content type, query/output attributes, headers, cookies).Highlighted Details
Maintenance & Community
The project is marked as "Unmaintained" but welcomes forks and continued development.
Licensing & Compatibility
Licensed under the MIT License, permitting commercial use and integration with closed-source applications.
Limitations & Caveats
The project is explicitly marked as unmaintained, meaning there is no active development or support. Full documentation is still in progress.
1 year ago
1 week