Personal memory agent MCP server
This project provides a Model Context Protocol (MCP) server for driaforall/mem-agent, enabling users to connect their personal memory systems to applications such as Claude Desktop and LM Studio. It targets developers and power users who want to integrate LLMs with structured personal knowledge bases, offering enhanced contextual assistance by leveraging local, private data.
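For context, MCP clients such as Claude Desktop register servers in a JSON configuration file (claude_desktop_config.json). The entry below is a minimal sketch of that general pattern only; the server name, command, and script path are placeholders for illustration and are not taken from this project's documentation.

```json
{
  "mcpServers": {
    "mem-agent": {
      "command": "python",
      "args": ["/path/to/mem-agent-mcp/server.py"]
    }
  }
}
```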
How It Works
The system functions as an MCP server that interfaces with a specialized LLM fine-tuned for memory management, deployed locally via vLLM or MLX for privacy and performance. Data is organized in an Obsidian-style Markdown format: a user.md file and an entities/ directory, with wikilinks expressing relationships between notes. Connectors import data from sources such as ChatGPT, Notion, GitHub, and Google Docs, transforming it into this structured format, which the MCP server then makes accessible to connected applications.
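As a concrete illustration of that layout (the entity file names and wikilink targets here are hypothetical, not from the project's docs), a memory directory might look like:

```
memory/
├── user.md            # top-level profile; links out to entities via wikilinks
└── entities/
    ├── acme-corp.md   # hypothetical entity note (an organization)
    └── jane-doe.md    # hypothetical entity note (a person)
```

where user.md might contain a line such as "Works at [[entities/acme-corp.md]] with [[entities/jane-doe.md]]", letting the agent navigate relationships by following wikilinks.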
Quick Start & Requirements
Setup is driven by make commands. Key steps include make setup to configure the memory directory and make run-agent to start the agent, which allows selection of model precision (e.g., 4-bit for usability). A guided setup is available via make memory-wizard or python memory_wizard.py, and memory connectors can be managed via make connect-memory or python memory_connectors/memory_connect.py.
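Assuming the Makefile targets behave as described above, a typical first session might look like the following sketch:

```sh
make setup            # configure the memory directory
make run-agent        # start the agent; select model precision (e.g., 4-bit)

# optional helpers
make memory-wizard    # guided memory setup (or: python memory_wizard.py)
make connect-memory   # manage connectors (or: python memory_connectors/memory_connect.py)
```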
Highlighted Details
Maintenance & Community
The project encourages community contributions, particularly for new connectors and improvements, but does not list specific contributors, sponsorships, or community channels (like Discord/Slack) in the provided documentation.
Licensing & Compatibility
The license type is not explicitly stated in the provided README. The system is designed for local deployment, emphasizing privacy and compatibility with applications that support the MCP protocol.
Limitations & Caveats
The system primarily targets macOS and Linux environments with GPU support. Setup requires careful configuration of memory directories and application integrations. The README does not state an alpha or beta status, but the focus on local LLM deployment implies nontrivial hardware requirements and a degree of technical proficiency.