Forgetful, by ScottRBK: open-source memory infrastructure for AI agents
Top 98.3% on SourcePulse
Forgetful is an open-source Model Context Protocol (MCP) server designed to provide persistent storage and retrieval for AI agents. It addresses the critical need for shared knowledge bases, enabling agents to access past information and maintain context across interactions, thereby enhancing their effectiveness, particularly in complex workflows like software development. It targets developers and researchers building or integrating AI agents that require robust memory capabilities.
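Because Forgetful is an MCP server, agents reach it through an MCP client's server configuration rather than a library import. A minimal sketch for a client that reads a JSON mcpServers map, using the uvx launch command documented below; the file location, exact schema, and the server name "forgetful" vary by client and are assumptions here:

```json
{
  "mcpServers": {
    "forgetful": {
      "command": "uvx",
      "args": ["forgetful-ai"]
    }
  }
}
```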
How It Works
Forgetful adopts an opinionated Zettelkasten-inspired approach, enforcing atomic memories (one concept per note) with title, content, context, keywords, and tags. It generates semantic embeddings for natural language retrieval and automatically links semantically similar memories, facilitating the construction of a knowledge graph. The system also manages structured data like entities (people, organizations), projects, documents, code artifacts, skills, and plans for multi-agent coordination, aiming to improve recall accuracy.
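The atomic-memory model and automatic linking described above can be sketched in a few lines of Python. This is an illustrative toy, not Forgetful's implementation: the bag-of-words "embedding", the cosine similarity, and the 0.3 link threshold all stand in for the real semantic-embedding pipeline.

```python
import math
from dataclasses import dataclass, field

def embed(text: str) -> dict[str, float]:
    """Toy bag-of-words vector standing in for a real semantic embedding model."""
    vec: dict[str, float] = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class Memory:
    """One atomic memory: a single concept per note, in the Zettelkasten spirit."""
    title: str
    content: str
    context: str
    keywords: list[str]
    tags: list[str]
    links: list[str] = field(default_factory=list)  # titles of related memories

class MemoryStore:
    def __init__(self, link_threshold: float = 0.3):
        self.memories: list[Memory] = []
        self.link_threshold = link_threshold

    def add(self, memory: Memory) -> None:
        """Store a memory and auto-link it to semantically similar ones,
        growing the knowledge graph as notes are added."""
        new_vec = embed(memory.content)
        for existing in self.memories:
            if cosine(new_vec, embed(existing.content)) >= self.link_threshold:
                memory.links.append(existing.title)
                existing.links.append(memory.title)
        self.memories.append(memory)
```

Two notes about the same concept end up bidirectionally linked as soon as the second one is stored, while an unrelated note stays unlinked; retrieval can then follow links outward from any semantic search hit.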
Quick Start & Requirements
- Install via uv (uvx forgetful-ai or uv tool install forgetful-ai). Source installation and Docker deployments (SQLite or PostgreSQL) are also supported.
- Default transport is stdio; HTTP transport is available (--transport http --port 8020).
- Requires uvx or uv for execution. Docker requires Docker Compose.
- Data is stored locally (~/.local/share/forgetful).

Highlighted Details
- STDIO and HTTP transport mechanisms.

Maintenance & Community
The project welcomes contributions, offering detailed guides for testing, development setup, and CI/CD. A roadmap outlines planned features. Specific community channels (e.g., Discord, Slack) are not explicitly mentioned.
Licensing & Compatibility
Released under the MIT License, permitting commercial use and closed-source linking.
Limitations & Caveats
The mandatory atomic memory structure may require agents to decompose complex information. Token budget management prioritizes results by importance and recency, potentially truncating less critical context to prevent LLM context window overflow.
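The budget-aware truncation behavior can be sketched as a greedy packing pass: rank retrieved memories by a combined importance/recency score, then keep adding them until the token budget runs out. Forgetful's actual scoring formula and tokenizer are not documented here; the 0.7/0.3 weights and the whitespace token estimate below are illustrative assumptions.

```python
def select_within_budget(results: list[tuple[str, float, float]],
                         budget_tokens: int) -> list[str]:
    """Greedy selection under a token budget.

    Each result is (text, importance, recency), both scores in [0, 1].
    Results are ranked by a weighted score (weights are assumptions),
    then packed until the budget is exhausted; less critical results
    are dropped to avoid overflowing the LLM context window.
    """
    ranked = sorted(results, key=lambda r: 0.7 * r[1] + 0.3 * r[2], reverse=True)
    selected: list[str] = []
    used = 0
    for text, _importance, _recency in ranked:
        cost = len(text.split())  # crude stand-in for real token counting
        if used + cost > budget_tokens:
            continue  # skip anything that would overflow the budget
        selected.append(text)
        used += cost
    return selected
```

With a budget of 6 tokens, a high-importance short note and a recent note both fit, while a low-priority longer result is dropped entirely rather than splitting the budget.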
Last updated: 2 weeks ago (activity status: Inactive).