esaradev/icarus-daedalus: Universal agent memory protocol for seamless AI collaboration
Top 94.7% on SourcePulse
Summary
Icarus Memory Protocol provides a universal, database-less shared memory system for AI agents, enabling seamless collaboration and knowledge sharing across any framework or platform. It addresses the challenge of inter-agent communication and persistent memory by storing agent interactions as markdown files in a central directory (~/fabric/). This allows agents to write, read, and search shared context, facilitating complex workflows and enabling self-training capabilities for improved performance and cost efficiency.
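To make the storage model concrete, here is a minimal sketch of a `fabric_write`-style helper. The function name, field names, and file naming scheme are assumptions for illustration; the real fabric-adapter.sh API may differ.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: write one shared-memory entry as a markdown
# file with YAML frontmatter into the central fabric directory.
set -euo pipefail

FABRIC_DIR="${FABRIC_DIR:-$HOME/fabric}"
mkdir -p "$FABRIC_DIR"

fabric_write() {
  # Args: agent name, entry type, body text. Prints the entry path.
  local agent="$1" type="$2" body="$3"
  local ts entry
  ts="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
  entry="$FABRIC_DIR/$(date -u +%Y%m%d%H%M%S)-$agent.md"
  cat > "$entry" <<EOF
---
agent: $agent
timestamp: $ts
type: $type
refs: []
---
$body
EOF
  echo "$entry"
}

entry="$(fabric_write slack-bot decision "Chose markdown files over a database.")"
echo "wrote $entry"
```

Because entries are plain markdown, any agent on the machine can read or grep them without a database client, which is the accessibility trade-off the protocol makes.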
How It Works
The core mechanism relies on simple bash scripts (fabric-adapter.sh) to manage memory stored in markdown files within ~/fabric/. Each entry includes YAML frontmatter detailing agent, timestamp, type, and references. Memory is tiered by age: 'hot' (<24h), 'warm' (1-7 days), and 'cold' (>7 days). A curator.py daemon manages re-tiering, compaction (using Claude), and indexing. This approach offers a lightweight, highly accessible, and framework-agnostic solution for persistent agent memory.
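The age-based tiering rule described above can be sketched as a small shell function (the function name and hour-based interface are illustrative assumptions; the actual re-tiering lives in curator.py and may work differently):

```shell
# Illustrative tiering rule: hot for entries under 24 hours old,
# warm for 1-7 days (up to 168 hours), cold beyond that.
tier_for_age_hours() {
  local hours="$1"
  if   [ "$hours" -lt 24 ];  then echo hot
  elif [ "$hours" -le 168 ]; then echo warm
  else                            echo cold
  fi
}

tier_for_age_hours 3     # hot
tier_for_age_hours 72    # warm
tier_for_age_hours 200   # cold
```

Tiering by age lets the curator compact or archive cold entries aggressively while keeping hot context cheap to scan on every agent turn.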
Quick Start & Requirements
Run bash setup.sh. For Hermes agents, copy the plugins/icarus/ directory and associated scripts to the agent's plugin folder.
Repository: https://github.com/esaradev/icarus-daedalus
See PROTOCOL.md, SCHEMA.md, and examples/hermes-demo/ for details.
Highlighted Details
- Three core tools (fabric_write, fabric_recall, fabric_search) and four automatic hooks for context injection, memory retrieval, decision capture, and session summarization.
- examples/hermes-demo/ showcases two agents (Slack/Telegram) collaborating and recalling each other's work across platforms.
- Hooks (on-stop, on-start) provide automatic memory persistence and context loading within Claude Code environments.
- fabric-sync.sh enables cross-machine synchronization of the ~/fabric/ directory using Git.
Maintenance & Community
No specific details regarding maintainers, community channels (e.g., Discord, Slack), or project roadmap were found in the provided README content.
Licensing & Compatibility
The provided README content does not specify a software license. This lack of explicit licensing information presents a significant barrier to assessing compatibility for commercial use or closed-source integration.
Limitations & Caveats
Self-training requires a Together AI API key and incurs associated costs; model availability for fine-tuning can vary. The README does not specify the project's development stage (e.g., alpha, beta) or provide explicit license details, making adoption decisions challenging without further clarification.