reverser_ai by mrphrazer

Binary Ninja plugin for AI-assisted reverse engineering using local LLMs

created 1 year ago
969 stars

Top 38.8% on sourcepulse

Project Summary

ReverserAI is a research project providing automated reverse engineering assistance using local LLMs on consumer hardware. It targets reverse engineers seeking to enhance their workflow with AI-driven function naming, offering an offline, privacy-preserving solution that integrates with Binary Ninja and is extensible to other platforms.

How It Works

ReverserAI integrates with Binary Ninja to extract decompiler output, which is then processed by local LLMs. It combines static analysis techniques with LLM capabilities to provide context-aware function name suggestions. This approach aims to balance performance with data privacy by avoiding cloud-based services, leveraging consumer hardware, and optimizing LLM interactions.
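The pipeline described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function names, prompt template, and sanitization rules below are assumptions, not the plugin's actual implementation.

```python
import re


def build_prompt(decompiled_c: str) -> str:
    """Wrap decompiler output in an instruction prompt for a local LLM."""
    return (
        "You are a reverse engineer. Based on the decompiled code below, "
        "suggest a single descriptive function name in snake_case. "
        "Reply with the name only.\n\n"
        f"{decompiled_c}\n"
    )


def sanitize_name(llm_reply: str) -> str:
    """Reduce a free-form LLM reply to a valid C-style identifier."""
    candidate = (llm_reply.strip().splitlines() or [""])[0]
    candidate = re.sub(r"[^A-Za-z0-9_]", "_", candidate).strip("_").lower()
    return candidate or "unnamed_function"


# Example: decompiler output for a simple routine
pseudo_c = "int64_t sub_401000(char* s) { while (*s) s++; return s - arg1; }"
prompt = build_prompt(pseudo_c)
print(sanitize_name("  String Length Helper\n"))  # -> string_length_helper
```

In the real plugin the prompt is sent to the local model and the sanitized reply is applied as the function's new name inside Binary Ninja; the sanitization step matters because LLM replies are free text, not guaranteed identifiers.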

Quick Start & Requirements

  • Install via Binary Ninja's plugin manager or pip3 install . after cloning.
  • Requires Python 3.x, Binary Ninja, and a model download (~5GB).
  • Recommended: 16GB RAM, 12 CPU threads. GPU acceleration (especially Apple silicon) significantly improves performance (2-5s vs 20-30s per query).
  • Model download: python3 model_download.py or automatic on first launch.
  • Configuration: Adjust n_threads, n_gpu_layers, model_identifier (e.g., mistral-7b-instruct, mixtral-8x7b-instruct) via Binary Ninja settings.
  • Docs: https://github.com/mrphrazer/reverser_ai
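The settings listed above (n_threads, n_gpu_layers, model_identifier) map naturally onto the parameters a llama.cpp-style backend expects. A minimal sketch of that mapping, assuming the setting names shown; the keys, defaults, and model path convention here are illustrative and may differ from the plugin's real configuration:

```python
# Hypothetical mirror of the Binary Ninja settings mentioned above;
# defaults and the .gguf path convention are assumptions for illustration.
DEFAULT_SETTINGS = {
    "model_identifier": "mistral-7b-instruct",  # or "mixtral-8x7b-instruct"
    "n_threads": 12,     # CPU threads the backend may use
    "n_gpu_layers": 0,   # >0 offloads layers to the GPU (e.g., Apple silicon)
}


def backend_kwargs(settings: dict) -> dict:
    """Translate plugin settings into keyword arguments for the LLM backend."""
    merged = {**DEFAULT_SETTINGS, **settings}
    return {
        "model_path": f"models/{merged['model_identifier']}.gguf",
        "n_threads": int(merged["n_threads"]),
        "n_gpu_layers": int(merged["n_gpu_layers"]),
    }


# Offloading layers to the GPU is what closes the 20-30s -> 2-5s gap per query.
print(backend_kwargs({"n_gpu_layers": 32}))
```

Raising n_gpu_layers shifts work from CPU threads to the GPU, which is why Apple silicon users see the largest speedup.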

Highlighted Details

  • Offline operation ensures data privacy.
  • Automatic, semantically meaningful function naming from decompiler output.
  • Modular architecture designed for extension to IDA and Ghidra.
  • Optimized for consumer hardware, including Apple silicon.
  • Enhances AI suggestions with static analysis context.

Maintenance & Community

  • Author: Tim Blazytko.
  • Open to contributions and feedback. Contact: @mr_phrazer.

Licensing & Compatibility

  • License: Not explicitly stated in the README.
  • Compatibility: Designed for Binary Ninja; extensible to IDA and Ghidra.

Limitations & Caveats

Local LLMs trail cloud-based models in output quality and speed. The project is a research proof of concept focused on function naming; broader reverse engineering tasks are left to future work. Model selection significantly affects both resource consumption and output quality.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 37 stars in the last 90 days
