Protocol fuzzer guided by LLMs (NDSS'24 paper)
ChatAFL is a protocol fuzzer that leverages Large Language Models (LLMs) to enhance fuzzing efficiency and effectiveness for network protocols. It targets security researchers and developers seeking to improve protocol robustness by automating grammar extraction, seed enrichment, and coverage-guided mutation strategies.
How It Works
ChatAFL integrates LLMs into the fuzzing process to address key challenges. It uses LLMs to generate machine-readable protocol grammars for structure-aware mutation, to enrich the initial seed queue with diverse messages, and to generate new inputs that break through coverage plateaus. This approach aims to achieve higher code and state coverage more rapidly than traditional fuzzing methods.
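The structure-aware mutation idea can be illustrated with a small sketch: an LLM-extracted grammar maps each message type to a template, and the fuzzer mutates field values while preserving the message skeleton. The grammar fragment below (RTSP-like) and all names are illustrative assumptions, not ChatAFL's actual grammar format or code.

```python
import random

# Hypothetical grammar fragment an LLM might extract for an RTSP-like
# protocol: message type -> line templates with typed placeholders.
GRAMMAR = {
    "DESCRIBE": ["DESCRIBE <url> RTSP/1.0", "CSeq: <int>", "User-Agent: <str>"],
    "SETUP": ["SETUP <url> RTSP/1.0", "CSeq: <int>", "Transport: <str>"],
}

def mutate_message(msg_type: str, rng: random.Random) -> str:
    """Fill grammar placeholders with fuzzed values, keeping the
    message structure intact (the core of structure-aware mutation)."""
    fillers = {
        "<url>": "rtsp://127.0.0.1:8554/stream",
        "<int>": str(rng.randint(0, 2**16)),
        "<str>": "".join(rng.choice("AZaz09!~") for _ in range(rng.randint(1, 32))),
    }
    lines = []
    for template in GRAMMAR[msg_type]:
        for placeholder, value in fillers.items():
            template = template.replace(placeholder, value)
        lines.append(template)
    return "\r\n".join(lines) + "\r\n\r\n"

msg = mutate_message("SETUP", random.Random(1))
print(msg)
```

Because mutations only replace placeholder fields, every generated message still parses as a well-formed request, which lets the fuzzer probe deeper protocol states instead of being rejected at the parser.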
Quick Start & Requirements
Run `./deps.sh` to install dependencies.
Run `KEY=<OPENAI_API_KEY> ./setup.sh` to prepare Docker images (approx. 40 minutes); an OpenAI API key is required.
Run `./run.sh <container_number> <fuzzed_time> <subjects> <fuzzers>` to launch experiments.
Run `./analyze.sh <subjects> <fuzzed_time>` to analyze results.
Highlighted Details
Maintenance & Community
The project is associated with NDSS'24. No specific community channels or active maintenance signals are detailed in the README.
Licensing & Compatibility
Limitations & Caveats
The fuzzer relies on OpenAI's LLMs (gpt-3.5-turbo-instruct and gpt-3.5-turbo), which subjects runs to third-party rate limits (150,000 tokens/minute). A GPT-4 version is available but less tested. Reproducing the full paper experiments requires significant computational resources.
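Any harness built around a tokens-per-minute cap like the one above needs client-side throttling. The sliding-window limiter below is a generic sketch of that pattern, not ChatAFL code; the cap value and class name are assumptions for illustration.

```python
import time
from collections import deque

class TokenBucket:
    """Sliding-window limiter: tracks token usage over the last 60
    seconds and tells the caller how long to wait before a new request
    fits under the per-minute cap (e.g. OpenAI's 150,000 TPM)."""

    def __init__(self, tokens_per_minute: int):
        self.cap = tokens_per_minute
        self.history = deque()  # (timestamp, tokens) pairs, oldest first

    def used_last_minute(self, now: float) -> int:
        # Drop entries that have aged out of the 60-second window.
        while self.history and now - self.history[0][0] >= 60:
            self.history.popleft()
        return sum(t for _, t in self.history)

    def acquire(self, tokens: int, now=None) -> float:
        """Record usage; return seconds the caller should sleep first."""
        if tokens > self.cap:
            raise ValueError("single request exceeds per-minute cap")
        now = time.monotonic() if now is None else now
        wait = 0.0
        while self.used_last_minute(now + wait) + tokens > self.cap:
            # Wait until the oldest recorded entry ages out of the window.
            wait = self.history[0][0] + 60 - now
        self.history.append((now + wait, tokens))
        return wait

# Example with an artificially small cap of 100 tokens/minute:
limiter = TokenBucket(100)
print(limiter.acquire(60, now=0.0))   # fits immediately -> 0.0
print(limiter.acquire(60, now=1.0))   # must wait for the first entry to expire
```

Sleeping for the returned duration before each API call keeps the fuzzer under the provider's limit instead of burning requests on HTTP 429 responses.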