sigwl/AiDA: AI assistant for accelerating C++ game reverse engineering in IDA Pro
Top 96.3% on SourcePulse
Summary
AiDA is a high-performance C++ IDA Pro plugin designed to accelerate reverse engineering of modern C++ games. It integrates directly with large language models (Google Gemini, OpenAI, Anthropic) to provide advanced analysis, renaming, and code generation capabilities, targeting reverse engineers seeking efficiency.
How It Works
This native C++ plugin bypasses Python dependencies for maximum speed and stability within IDA Pro 9.0+. It employs a hybrid approach combining static pattern scanning (GSpots) with AI analysis to locate critical Unreal Engine globals. Core functionalities include in-depth function analysis, automatic descriptive renaming, C++ struct reconstruction, and MinHook snippet generation, all powered by user-selected LLMs.
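To make the hybrid approach concrete, below is a minimal sketch of the static half: a byte-pattern scan with wildcards, the kind of signature matching GSpots-style scanners use to locate Unreal Engine globals before any AI analysis is invoked. This is illustrative only; the type and function names are assumptions, not AiDA's actual API, and real signatures are far longer than the toy pattern shown.

```cpp
#include <cstdint>
#include <cstddef>
#include <optional>
#include <vector>

// A single pattern byte: either a concrete value or a wildcard that
// matches anything (used for bytes that vary between game builds,
// such as embedded addresses and offsets).
struct PatByte {
    uint8_t value;
    bool wildcard;
};

// Return the offset of the first occurrence of `pattern` in `data`,
// or std::nullopt if the signature is not found.
std::optional<size_t> find_pattern(const std::vector<uint8_t>& data,
                                   const std::vector<PatByte>& pattern) {
    if (pattern.empty() || data.size() < pattern.size())
        return std::nullopt;
    for (size_t i = 0; i + pattern.size() <= data.size(); ++i) {
        bool hit = true;
        for (size_t j = 0; j < pattern.size(); ++j) {
            if (!pattern[j].wildcard && data[i + j] != pattern[j].value) {
                hit = false;
                break;
            }
        }
        if (hit)
            return i;
    }
    return std::nullopt;
}
```

In a real scanner the matched offset would then be resolved (e.g. by decoding a RIP-relative operand) to the global's address; when no signature matches, a tool like AiDA can fall back to LLM-guided analysis.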
Quick Start & Requirements
Installation involves downloading the release ZIP, extracting AiDA.dll, and placing it in the IDA Pro plugins directory. Prerequisites include Microsoft Visual C++ Redistributables and OpenSSL, with specific instructions for OpenSSL DLL placement on Windows. Configuration is done via IDA's AI Assistant settings, requiring an API key for the chosen provider (Gemini, OpenAI, Anthropic). GitHub Copilot integration necessitates running a separate copilot-api proxy server. Building from source requires CMake (3.27+), Python 3, a C++17 compiler, and OpenSSL.
Maintenance & Community
The project is currently in BETA, with potential for bugs and instability. Users are encouraged to report issues via GitHub or join the dedicated Discord server for support and discussions: https://discord.gg/JMRkEThbUU.
Licensing & Compatibility
Licensed under the permissive MIT License, allowing for broad use, including commercial applications and integration with closed-source projects.
Limitations & Caveats
As a BETA release, AiDA may exhibit instability or bugs. GitHub Copilot integration requires a separate, continuously running proxy server. The effectiveness of AI analysis is directly tied to the chosen LLM provider, model, and prompt configuration settings.
Last updated 4 weeks ago. Activity status: Inactive.