ToolNeuron by Siddhesh2377

Complete offline AI ecosystem for Android

Created 9 months ago · 272 stars · Top 94.8% on SourcePulse

Project Summary

ToolNeuron is an advanced, privacy-first AI ecosystem designed for Android, offering complete on-device processing for text generation, image creation, text-to-speech, and document intelligence via RAG. It targets users who prioritize digital sovereignty and offline AI capabilities, eliminating subscriptions and data harvesting by keeping all processing and data local, secured with enterprise-grade encryption.

How It Works

ToolNeuron pairs a Kotlin/Jetpack Compose UI with C++/JNI inference cores: llama.cpp for GGUF text models and LocalDream for Stable Diffusion 1.5 image generation. The architecture is offline-first throughout. A RAG system with hybrid search handles document understanding; an encrypted "Memory Vault" uses AES-256-GCM and write-ahead logging for secure, crash-recoverable storage; and a persistent AI memory system, inspired by Mem0, carries context across sessions. None of this depends on cloud services, which preserves both functionality and user privacy.
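The Memory Vault's internals are not documented here, but a minimal sketch of what AES-256-GCM sealing of a record can look like on the JVM is below. The function names, the 12-byte IV, and the 128-bit tag length are illustrative assumptions; on Android the key would come from the hardware-backed AndroidKeyStore rather than a plain `KeyGenerator`, which is used here only to keep the example runnable anywhere.

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// Hypothetical stand-in for a hardware-backed key; see note above.
fun generateKey(): SecretKey =
    KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

// Returns the random IV prepended to the ciphertext. GCM gives both
// confidentiality and integrity: a tampered record fails to decrypt
// instead of silently returning garbage.
fun seal(key: SecretKey, plaintext: ByteArray): ByteArray {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    return iv + cipher.doFinal(plaintext)
}

fun open(key: SecretKey, sealed: ByteArray): ByteArray {
    val iv = sealed.copyOfRange(0, 12)
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    return cipher.doFinal(sealed.copyOfRange(12, sealed.size))
}
```

Generating a fresh IV per record is what makes GCM safe to reuse one key across many writes, which matters for an append-style store with write-ahead logging.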

Quick Start & Requirements

  • Installation: Download the APK from the Google Play Store or GitHub Releases.
  • Prerequisites: Android 12+ (API 31) minimum, 6GB RAM, 4GB storage. Recommended: Android 13+, 8GB RAM (12GB preferred), 10GB storage, Snapdragon 8 Gen 1 or equivalent, and hardware-backed encryption support.
  • Resource Footprint: Varies by model; a single 7B model requires ~4GB, with 10GB+ recommended for full features.
  • Links: Google Play Store, GitHub Releases, Discord.
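The ~4GB figure for a single 7B model is consistent with a back-of-the-envelope estimate. The bits-per-weight and overhead constants below are illustrative assumptions (roughly what a 4-bit GGUF quantization with per-block scales costs), not measured values:

```kotlin
// Rough GGUF memory estimate: a Q4-style quantization stores about
// 4.5 bits per weight (4-bit values plus per-block scale metadata);
// the KV cache and runtime buffers add overhead on top of the weights.
fun approxModelGiB(params: Double, bitsPerWeight: Double, overheadGiB: Double): Double =
    params * bitsPerWeight / 8.0 / (1 shl 30) + overheadGiB

fun main() {
    // 7e9 parameters at ~4.5 bits/weight plus ~0.5 GiB overhead ≈ 4.2 GiB,
    // in line with the ~4GB requirement stated above.
    println(approxModelGiB(7e9, 4.5, 0.5))
}
```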

Highlighted Details

  • Comprehensive AI Models: Supports any GGUF text generation model (Llama, Mistral, Gemma, etc.) and Stable Diffusion 1.5 for image generation, including inpainting.
  • Advanced RAG System: Enables semantic search and querying across documents (PDF, Word, Excel, EPUB) using hybrid retrieval (BM25, vector, RRF, MMR).
  • Privacy & Security: Features zero data collection, hardware-backed AES-256-GCM encryption for storage and RAG, and offline-first operation.
  • TavernAI v2 Compatibility: Full support for AI character cards, including import/export, persona reinforcement, and template variables.
  • On-Device TTS: Offers 10 voices across 5 languages with adjustable speed and quality, processed locally.
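As a sketch of the RRF stage named in the hybrid-retrieval bullet above: Reciprocal Rank Fusion merges the ranked lists from different retrievers (e.g. BM25 and vector search) without needing their scores to be comparable. The function below follows the standard RRF formulation with the conventional k = 60; how ToolNeuron actually weights its retrievers is an assumption.

```kotlin
// Each retriever contributes 1 / (k + rank) per document, so a document
// ranked well by several retrievers outscores one ranked well by only one.
fun rrf(rankings: List<List<String>>, k: Int = 60): List<String> {
    val scores = mutableMapOf<String, Double>()
    for (ranking in rankings) {
        ranking.forEachIndexed { index, doc ->
            // ranks are 1-based, so rank = index + 1
            scores.merge(doc, 1.0 / (k + index + 1), Double::plus)
        }
    }
    return scores.entries.sortedByDescending { it.value }.map { it.key }
}
```

For example, fusing a BM25 ranking `[a, b, c]` with a vector ranking `[b, c, a]` puts `b` first, since it places highly in both lists. An MMR pass would then typically re-rank this fused list for diversity.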

Maintenance & Community

The project is actively developed by Siddhesh Sonar. Community support is available via a Discord server and GitHub Issues for bug reports and feature requests.

Licensing & Compatibility

Licensed under the Apache License 2.0, permitting commercial use, modification, and distribution with standard disclaimers. It is compatible with closed-source applications.

Limitations & Caveats

Running advanced AI models, especially image generation, is computationally intensive, demanding significant RAM and processing power and potentially draining the battery quickly during active use. Very large models (e.g., 70B+) are impractical on current mobile hardware due to RAM limitations, and some features, such as speech-to-text and multi-modal support, are still under development.

Health Check

  • Last Commit: 4 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 3
  • Issues (30d): 16
  • Star History: 64 stars in the last 30 days


Starred by Eric Zhu (coauthor of AutoGen; Research Scientist at Microsoft Research), Elvis Saravia (founder of DAIR.AI), and 15 more.
