Local language model for reverse engineering with radare2
This project provides a local, conversational AI assistant for reverse engineering tasks, integrated primarily with the radare2 framework. It lets reverse engineers and security researchers pose natural-language queries about code, binaries, and general reverse engineering concepts, potentially reducing reliance on external services and improving workflow efficiency.
How It Works
r2ai leverages local or remote Large Language Models (LLMs) to process natural language prompts. It integrates with radare2 via a native C plugin or a JavaScript plugin (`decai`), allowing users to query the AI directly from the radare2 shell. The system supports various LLM backends (Ollama, OpenAI, Anthropic, etc.) and can index large codebases or documentation using vector databases for context-aware responses.
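For illustration, a minimal session might look like the sketch below. The `r2ai` and `decai` commands come from the two plugins just mentioned; exact command names, options, and output differ across plugin versions, and the Ollama backend and prompt shown here are assumptions rather than defaults.

```console
$ r2 /bin/ls
[0x00005b10]> aaa                    # analyze the binary so the AI has functions to reason about
[0x00005b10]> decai -e api=ollama    # point decai at a local Ollama backend (illustrative setting)
[0x00005b10]> decai -d               # request AI-assisted decompilation of the current function
[0x00005b10]> r2ai what does the entrypoint of this binary do?
...                                  # the model's natural-language answer is printed here
```

Using a local backend such as Ollama keeps queries on-machine, which is what the introduction means by reducing reliance on external services.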
Quick Start & Requirements
- Install via `r2pm` (e.g., `r2pm -s r2ai`). Alternatively, build from source by running `make` in the relevant subdirectories (`src/`, `py/`, `decai/`, `server/`).
- Requires Python for the `py/` components and, for running LLMs locally, potentially a CUDA-enabled GPU. API keys for remote services are stored in `~/.r2ai.<provider>-key`.
- Launch with `r2 -qc r2ai-r` or via `r2pm -r r2ai`; a sketch of the full flow follows this list.
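The steps above condense to a short shell session. This is a hedged sketch using only commands quoted in this section; `openai` is merely an example provider name for the key file, and the key value itself is elided.

```console
$ r2pm -s r2ai                        # fetch r2ai through r2pm, as described above
$ echo "sk-..." > ~/.r2ai.openai-key  # store a remote-provider API key ("openai" is an example)
$ r2pm -r r2ai                        # launch the assistant via r2pm
```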
Highlighted Details
- Ships both a native C plugin (`r2ai-plugin`) and a JS plugin (`decai`) focused on decompilation.

Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project notes that the models used by r2ai may provide unreliable information, and an ongoing fine-tuning effort aims to improve this. Some components are marked as deprecated (e.g., the `r2ai-python` CLI).