nashsu / LLM Wiki: A desktop app for building self-updating LLM-powered wikis from documents
LLM Wiki is a cross-platform desktop application designed to transform user documents into an organized, interlinked, and persistent knowledge base using Large Language Models (LLMs). It diverges from traditional Retrieval-Augmented Generation (RAG) by incrementally building and maintaining a wiki, ensuring knowledge is compiled once and kept current rather than re-derived on each query. This offers users a powerful tool for managing and exploring complex information, enhancing research, reading, and personal growth workflows.
How It Works
The project implements a methodology inspired by Andrej Karpathy's LLM Wiki pattern, featuring a three-layer architecture (Raw Sources, Wiki, Schema) and core operations (Ingest, Query, Lint). A key innovation is the Two-Step Chain-of-Thought Ingest: an LLM first analyzes source documents to identify entities, concepts, and connections, then generates wiki pages based on this analysis. This sequential approach improves output quality over single-step methods. The system builds a Knowledge Graph using a 4-signal relevance model (direct link, source overlap, Adamic-Adar, type affinity) and employs Louvain Community Detection to automatically cluster related knowledge, providing insights into surprising connections and potential knowledge gaps.
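The 4-signal relevance model described above can be sketched in code. The graph representation, the signal formulas, and the combining weights below are illustrative assumptions for clarity; they are not taken from the project's source.

```typescript
// Sketch of a 4-signal relevance score between two wiki pages.
// Representation, formulas, and weights are hypothetical.

type Graph = Map<string, Set<string>>; // page id -> ids of linked pages

// Adamic-Adar: shared neighbors count for more when they are rare
// (i.e., low-degree) nodes, and for less when they are generic hubs.
function adamicAdar(g: Graph, a: string, b: string): number {
  const na = g.get(a) ?? new Set<string>();
  const nb = g.get(b) ?? new Set<string>();
  let score = 0;
  for (const z of na) {
    if (!nb.has(z)) continue;
    const deg = (g.get(z) ?? new Set()).size;
    if (deg > 1) score += 1 / Math.log(deg);
  }
  return score;
}

interface Page {
  id: string;
  sources: Set<string>; // raw source documents the page was derived from
  type: string;         // e.g. "person", "concept", "event"
}

// Combine the four signals (direct link, source overlap, Adamic-Adar,
// type affinity) into one score. The weights are placeholders.
function relevance(g: Graph, p: Page, q: Page): number {
  const directLink = g.get(p.id)?.has(q.id) ? 1 : 0;
  const shared = [...p.sources].filter(s => q.sources.has(s)).length;
  const sourceOverlap =
    shared / Math.max(1, Math.min(p.sources.size, q.sources.size));
  const typeAffinity = p.type === q.type ? 1 : 0;
  const aa = adamicAdar(g, p.id, q.id);
  return 0.4 * directLink + 0.3 * sourceOverlap + 0.2 * aa + 0.1 * typeAffinity;
}
```

Edges found this way feed the Louvain step: pages with high pairwise relevance end up densely connected, so community detection naturally groups them into topic clusters.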
Quick Start & Requirements
Run npm install, then npm run tauri dev for development or npm run tauri build for a production build.
Maintenance & Community
The README does not specify notable contributors, sponsorships, or community channels (e.g., Discord, Slack). The project is noted as being based on Andrej Karpathy's foundational methodology.
Licensing & Compatibility
This project is licensed under the GNU General Public License v3.0 (GPL-3.0). As a strong copyleft license, GPL-3.0 requires derivative works to be distributed under the same license, potentially restricting commercial use or integration into closed-source proprietary software.
Limitations & Caveats
The application's functionality is dependent on external LLM APIs, which may incur costs and introduce latency. The effectiveness of knowledge extraction and organization relies on the quality of the chosen LLM. The Deep Research feature requires integration with the Tavily API, subject to its own usage policies and potential costs. Building from source requires specific development environment configurations (Node.js, Rust).