icarus-daedalus by esaradev

Universal agent memory protocol for seamless AI collaboration

Created 1 month ago
272 stars

Top 94.7% on SourcePulse

Summary

Icarus Memory Protocol provides a universal, database-less shared memory system for AI agents, enabling seamless collaboration and knowledge sharing across any framework or platform. It addresses the challenge of inter-agent communication and persistent memory by storing agent interactions as markdown files in a central directory (~/fabric/). This allows agents to write, read, and search shared context, facilitating complex workflows and enabling self-training capabilities for improved performance and cost efficiency.

How It Works

The core mechanism relies on simple bash scripts (fabric-adapter.sh) to manage memory stored in markdown files within ~/fabric/. Each entry includes YAML frontmatter detailing agent, timestamp, type, and references. Memory is tiered by age: 'hot' (<24h), 'warm' (1-7 days), and 'cold' (>7 days). A curator.py daemon manages re-tiering, compaction (using Claude), and indexing. This approach offers a lightweight, highly accessible, and framework-agnostic solution for persistent agent memory.
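The entry format and age-based tiering described above can be sketched in a few lines of bash. This is an illustrative sketch, not the project's fabric-adapter.sh: the filename scheme, helper names, and `FABRIC_DIR` variable are assumptions, while the frontmatter fields (agent, timestamp, type, references) and the tier thresholds come from the description above.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a fabric write plus tier classification.
# FABRIC_DIR and the filename scheme are illustrative assumptions.
FABRIC_DIR="${FABRIC_DIR:-$HOME/fabric}"

fabric_write() {  # fabric_write <agent> <type> <text>
  local agent="$1" type="$2" text="$3"
  local ts; ts="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
  mkdir -p "$FABRIC_DIR"
  # One markdown file per entry, with YAML frontmatter as described.
  local file="$FABRIC_DIR/$(date -u +%s)-$agent.md"
  cat > "$file" <<EOF
---
agent: $agent
timestamp: $ts
type: $type
references: []
---
$text
EOF
  echo "$file"
}

fabric_tier() {  # hot (<24h), warm (1-7 days), cold (>7 days)
  local file="$1" now mtime age_h
  now="$(date +%s)"
  # GNU stat first, BSD stat as a fallback.
  mtime="$(stat -c %Y "$file" 2>/dev/null || stat -f %m "$file")"
  age_h=$(( (now - mtime) / 3600 ))
  if   (( age_h < 24 ));  then echo hot
  elif (( age_h < 168 )); then echo warm
  else                         echo cold
  fi
}
```

In the real system, re-tiering is the curator.py daemon's job; here the tier is just computed on demand from file age.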

Quick Start & Requirements

  • Primary Install: Clone the repository and run bash setup.sh. For Hermes agents, copy the plugins/icarus/ directory and associated scripts to the agent's plugin folder.
  • Prerequisites: Bash, Python, Git. For Hermes integration: a Hermes agent. For self-training: a Together AI API key. Node.js is required for the Claude Code CLI.
  • Links:
    • Repository: https://github.com/esaradev/icarus-daedalus
    • Protocol Spec: PROTOCOL.md
    • Schema: SCHEMA.md
    • Demo: examples/hermes-demo/
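Because the fabric is plain markdown files, reading and searching shared context after setup needs no database client. A minimal, hypothetical sketch of what search and read helpers could look like (the actual tool names and flags live in fabric-adapter.sh):

```shell
#!/usr/bin/env bash
# Hypothetical helpers; not the project's actual CLI.
FABRIC_DIR="${FABRIC_DIR:-$HOME/fabric}"

fabric_search() {  # fabric_search <pattern> -> matching entry paths
  grep -ril -- "$1" "$FABRIC_DIR" 2>/dev/null
}

fabric_read() {  # print an entry's body, skipping the YAML frontmatter
  awk 'f==2; /^---$/ {f++}' "$1"
}
```

The `awk` one-liner prints only lines after the second `---` delimiter, i.e. the markdown body below the frontmatter.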

Highlighted Details

  • Hermes Plugin: Extends Hermes agents with 7 tools (e.g., fabric_write, fabric_recall, fabric_search) and 4 automatic hooks for context injection, memory retrieval, decision capture, and session summarization.
  • Cross-Platform Demo: examples/hermes-demo/ showcases two agents (Slack/Telegram) collaborating and recalling each other's work across platforms.
  • Self-Training Pipeline: Automates the process of exporting agent work history into fine-tuning datasets (OpenAI, HuggingFace formats) and initiating model fine-tuning via Together AI.
  • Claude Code Integration: Includes hooks (on-stop, on-start) for automatic memory persistence and context loading within Claude Code environments.
  • Git-based Sync: fabric-sync.sh enables cross-machine synchronization of the ~/fabric/ directory using Git.
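Git-based sync over a plain directory presumably reduces to commit, rebase, push. A minimal sketch under that assumption; the remote/branch names and commit message are illustrative and may differ from fabric-sync.sh:

```shell
#!/usr/bin/env bash
# Sketch of git-based fabric sync: commit local changes, rebase on the
# remote, then push. Assumes the directory is already a clone with an
# 'origin' remote; details may differ from the real fabric-sync.sh.
fabric_sync() {
  local dir="${1:-$HOME/fabric}"
  git -C "$dir" add -A
  # Commit only if something is actually staged.
  if ! git -C "$dir" diff --cached --quiet; then
    git -C "$dir" commit -q -m "fabric sync: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  fi
  # Tolerate an empty remote on first sync.
  git -C "$dir" pull -q --rebase origin main 2>/dev/null || true
  git -C "$dir" push -q origin main
}
```

Rebasing before pushing keeps per-machine histories linear, which matters when several machines append entries to the same fabric.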

Maintenance & Community

No specific details regarding maintainers, community channels (e.g., Discord, Slack), or project roadmap were found in the provided README content.

Licensing & Compatibility

The provided README content does not specify a software license. Absent an explicit license, the code defaults to all rights reserved, so compatibility for commercial use or closed-source integration cannot be assessed until the author clarifies.

Limitations & Caveats

Self-training requires a Together AI API key and incurs associated costs; model availability for fine-tuning can vary. The README does not specify the project's development stage (e.g., alpha, beta) or provide explicit license details, making adoption decisions challenging without further clarification.

Health Check

  • Last Commit: 1 week ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 2
  • Issues (30d): 0
  • Star History: 147 stars in the last 30 days
