Michaelliv/napkin
Agent knowledge system with progressive disclosure
Top 83.4% on SourcePulse
Summary
Michaelliv/napkin is a local-first, file-based knowledge system engineered for AI agents. It tackles the challenge of managing and progressively disclosing information efficiently, offering a structured method to build agent memory without overwhelming context windows. This system benefits developers by providing a robust, searchable, and incrementally accessible knowledge base for sophisticated AI applications.
How It Works
Napkin employs a local-first, file-based architecture, storing knowledge within a project's directory. Its core innovation is "progressive disclosure," revealing information to agents in tiered levels: a context note (Level 0), an overview with keywords (Level 1), search snippets (Level 2), and full file content (Level 3). Retrieval uses efficient BM25 search on markdown files, bypassing embeddings or graphs for speed and simplicity. It offers both a CLI and a TypeScript SDK (napkin-ai).
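The four disclosure tiers can be sketched as follows. This is an illustrative model only: the Note shape and the disclose function are assumptions made for this example, not the napkin-ai API.

```typescript
// Hypothetical sketch of tiered "progressive disclosure".
// Names and data shapes are illustrative, not the napkin-ai API.
type DisclosureLevel = 0 | 1 | 2 | 3;

interface Note {
  contextNote: string; // Level 0: one-line context
  overview: string;    // Level 1: overview text
  keywords: string[];  // Level 1: keywords alongside the overview
  snippets: string[];  // Level 2: search snippets
  fullText: string;    // Level 3: full file content
}

// Reveal progressively more detail as the agent requests deeper levels,
// so early turns spend as few context-window tokens as possible.
function disclose(note: Note, level: DisclosureLevel): string {
  switch (level) {
    case 0: return note.contextNote;
    case 1: return `${note.overview}\nkeywords: ${note.keywords.join(", ")}`;
    case 2: return note.snippets.join("\n---\n");
    case 3: return note.fullText;
  }
}

const note: Note = {
  contextNote: "Vault covers deployment runbooks.",
  overview: "Runbooks for deploying services to staging and production.",
  keywords: ["deploy", "runbook", "staging"],
  snippets: ["...run the staging deploy before promoting to production..."],
  fullText: "# Deployment runbook\n(full markdown body)",
};

console.log(disclose(note, 0)); // cheapest view first
```

An agent would start at Level 0 and only pay for Level 3's full file content when a search hit justifies it.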
Quick Start & Requirements
Install the CLI via npm install -g napkin-ai. Import the SDK as import { Napkin } from "napkin-ai";. Initialize vaults with napkin init --template <template_name>. Development requires bun or npm. No specific runtime hardware/OS prerequisites are detailed. Benchmark details are in bench/README.md.
Highlighted Details
napkin-context injects vault overviews into agent prompts; napkin-distill performs background knowledge distillation.
Maintenance & Community
The README does not detail specific contributors, sponsorships, or community channels like Discord or Slack.
Licensing & Compatibility
Licensed under the permissive MIT license, allowing broad commercial use and integration into closed-source projects.
Limitations & Caveats
The system is heavily optimized for agentic workflows and progressive disclosure, a specialized use case. The README lacks details on community support channels or an explicit maintenance roadmap beyond core functionality. Benchmark results are framed around an upcoming conference (ICLR 2025).