In3tinct
LLM-powered tool for Android app deobfuscation and vulnerability analysis
Top 94.7% on SourcePulse
Androidmeda is an LLM-powered tool designed to deobfuscate Android application code and identify potential vulnerabilities. It targets developers and security researchers who need to understand the logic of obfuscated Android apps, including potential malware. The tool simplifies complex code, suggests clearer names for program elements, and highlights security issues, accelerating analysis and improving code comprehension.
How It Works
Androidmeda leverages Large Language Models (LLMs) to analyze decompiled Android source code. It identifies obfuscated patterns, suggests more readable names for variables, methods, and classes, and adds comments to clarify application logic. The approach embraces the inherent unpredictability of LLMs while aiming for improved code readability. It supports both cloud-based LLM APIs (OpenAI, Gemini, Anthropic) and local LLM inference via Ollama, providing flexibility in analysis environments and data privacy.
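To illustrate the renaming idea, the sketch below shows how a deobfuscation prompt for an LLM might be assembled. This is a hypothetical example only; Androidmeda's actual prompt format and function names are not documented here, and both the helper and the sample Java are illustrative.

```python
# Hypothetical sketch: building a deobfuscation prompt from decompiled Java.
# Not Androidmeda's real implementation; names and wording are assumptions.

OBFUSCATED_JAVA = """\
public class a {
    private String b;
    public String a(String c) { return this.b + c; }
}
"""

def build_deobfuscation_prompt(java_source: str) -> str:
    """Ask the model for readable names, comments, and security findings."""
    return (
        "Rewrite the following decompiled Java so that classes, methods, and "
        "variables have descriptive names, and add comments explaining the "
        "logic. Flag anything that looks like a security issue.\n\n"
        f"```java\n{java_source}\n```"
    )

prompt = build_deobfuscation_prompt(OBFUSCATED_JAVA)
print(prompt.splitlines()[0])
```

The response from a cloud or local model would then be written back out as renamed, commented source alongside any flagged issues.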
Quick Start & Requirements
- Install dependencies: pip3 install -r requirements.txt
- Use jadx (https://github.com/skylot/jadx) to decompile APKs into Java source files, which are then used as input.
- Cloud LLM: python3 androidmeda.py --llm_provider <provider> --llm_model <model> -output_dir <path> -source_dir <path> (e.g., google with gemini-1.5-flash, or openai with gpt-4.1). Requires the provider's API key environment variable.
- Local LLM: python3 androidmeda.py --llm_provider ollama --llm_model <model> -output_dir <path> -source_dir <path> (e.g., llama3.2). Requires an Ollama setup (https://github.com/ollama/ollama).
Highlighted Details
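Because the tool takes a directory of decompiled Java sources via -source_dir, gathering the input files could look like the following sketch. The helper is hypothetical and not part of Androidmeda; it simply reflects the project's advice to pass a narrow, app-specific package directory rather than the entire decompiled output.

```python
# Hypothetical helper: collect decompiled .java files for analysis.
# Not part of Androidmeda; shown only to illustrate the expected input layout.
import tempfile
from pathlib import Path

def collect_java_sources(source_dir: str) -> list[Path]:
    """Recursively gather .java files under source_dir.

    Pointing this at a specific app package (not the whole decompiled tree)
    keeps the amount of code sent to the LLM manageable."""
    return sorted(Path(source_dir).rglob("*.java"))

# Example: build a tiny fake decompiled tree and list it.
tmp = Path(tempfile.mkdtemp())
(tmp / "com" / "example").mkdir(parents=True)
(tmp / "com" / "example" / "a.java").write_text("public class a {}")
print([p.name for p in collect_java_sources(str(tmp))])  # ['a.java']
```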
Maintenance & Community
Contributions are welcomed, with instructions provided in CONTRIBUTING.md. No specific community channels (like Discord/Slack) or roadmap links are detailed in the README.
Licensing & Compatibility
The project is licensed under the Apache 2.0 license, which is generally permissive for commercial use and integration into closed-source projects.
Limitations & Caveats
This is an experimental project, and the owner disclaims liability for its use. Users should point the tool at specific decompiled code directories rather than an entire decompiled package. Significant RAM is required for local LLM execution, and LLM output should be reviewed carefully given models' inherent unpredictability.