MemoMind  by 24kchengYe

AI agent memory system for persistent, private recall

Created 3 weeks ago

441 stars

Top 67.7% on SourcePulse

Summary

MemoMind addresses the critical issue of AI agent amnesia by providing a fully local, GPU-accelerated memory system. It targets developers using AI coding agents, enabling them to build persistent, evolving "brains" for their digital twins. The core benefit is preventing context loss and leveraging accumulated knowledge across sessions for more efficient and personalized AI interactions.

How It Works

MemoMind constructs a persistent, local knowledge graph using PostgreSQL and pgvector. It leverages LLMs for automatic fact extraction from conversations and employs a novel 4-way hybrid retrieval system combining semantic search, BM25, graph traversal, and temporal analysis. A key differentiator is the "reflect" capability, allowing the AI to synthesize insights across its entire memory, moving beyond simple recall. This approach offers significant advantages over basic file-based memory systems by providing structured, dynamic, and context-aware recall.
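The 4-way fusion described above can be sketched as a weighted combination of the four retrieval signals. The weights, the inverse-hop graph score, and the exponential recency decay below are illustrative assumptions, not MemoMind's documented scoring:

```python
import time

# Hypothetical weights -- MemoMind's actual fusion logic is not documented here.
WEIGHTS = {"semantic": 0.4, "bm25": 0.3, "graph": 0.2, "temporal": 0.1}

def temporal_score(created_at: float, now: float, half_life_days: float = 30.0) -> float:
    """Exponential recency decay: 1.0 for a brand-new memory, 0.5 after one half-life."""
    age_days = (now - created_at) / 86400.0
    return 0.5 ** (age_days / half_life_days)

def fuse(candidates: list, now: float) -> list:
    """Combine the four retrieval signals into one ranking score per memory."""
    for c in candidates:
        c["score"] = (
            WEIGHTS["semantic"] * c["semantic"]          # cosine similarity, 0..1
            + WEIGHTS["bm25"] * c["bm25"]                # normalized keyword score, 0..1
            + WEIGHTS["graph"] * 1.0 / (1 + c["hops"])   # fewer graph hops -> higher
            + WEIGHTS["temporal"] * temporal_score(c["created_at"], now)
        )
    return sorted(candidates, key=lambda c: c["score"], reverse=True)

now = time.time()
memories = [
    {"id": "m1", "semantic": 0.9, "bm25": 0.2, "hops": 3, "created_at": now - 90 * 86400},
    {"id": "m2", "semantic": 0.6, "bm25": 0.8, "hops": 0, "created_at": now - 1 * 86400},
]
ranked = fuse(memories, now)
# A recent, well-connected, keyword-matched memory can outrank a purely
# semantic hit -- the point of combining signals instead of using one.
```

Any real implementation would also need to normalize BM25 scores to a common scale before mixing them with cosine similarities; the toy records above assume that has already happened.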

Quick Start & Requirements

Installation involves cloning the repository, running an installer script within WSL2 Ubuntu, configuring an LLM API key in serve.py, and starting the MemoMind service. Prerequisites include Windows 10/11 with WSL2 and Ubuntu, an NVIDIA GPU (recommended for performance), and an LLM API key (e.g., from MindCraft or OpenRouter). Integration with Claude Code is facilitated via MCP. The dashboard is accessible at http://127.0.0.1:9999.

Highlighted Details

  • 100% Local & GPU-Accelerated: Utilizes local PostgreSQL and NVIDIA GPUs for embeddings and reranking, ensuring data privacy and speed.
  • Advanced Retrieval: Implements a 4-way hybrid search (semantic, BM25, graph, temporal) for highly relevant memory recall.
  • Reasoning & Synthesis: Features a "reflect" capability for deep analysis and synthesis across all stored memories.
  • Broad LLM Support: Integrates with numerous LLM providers (OpenAI, Anthropic, Gemini, Groq, Ollama, LM Studio, etc.) via OpenAI-compatible APIs.
  • Data Portability: Offers JSON export for memories, enabling migration to future AI systems without vendor lock-in.
  • Automated Import: Seamlessly imports ChatGPT/Gemini conversation history and DayLife activity logs.
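The JSON export highlighted above can be illustrated with a minimal round trip. The record fields used here (`id`, `text`, `tags`, `created_at`) are a hypothetical schema for illustration, not MemoMind's actual export format:

```python
import json

# Hypothetical memory record -- MemoMind's real export schema may differ;
# the point is that plain JSON keeps memories portable across systems.
memories = [
    {
        "id": "mem-001",
        "text": "User prefers TypeScript over JavaScript for new projects.",
        "tags": ["preference", "coding"],
        "created_at": "2026-03-01T12:00:00Z",
    }
]

def export_memories(records, path):
    """Dump all memory records to a JSON file for migration elsewhere."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2, ensure_ascii=False)

def import_memories(path):
    """Load a previously exported memory file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

export_memories(memories, "memories.json")
restored = import_memories("memories.json")
assert restored == memories  # the round trip is lossless
```

Because the export is plain JSON rather than a proprietary dump, any future system that can parse JSON can ingest the memories, which is what makes the no-lock-in claim credible.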

Maintenance & Community

The project is actively maintained, as evidenced by its recent changelog (v1.5, dated March 27, 2026). The README credits LLM providers such as MindCraft and OpenRouter, but does not mention community channels such as Discord or Slack.

Licensing & Compatibility

MemoMind is released under the permissive MIT License, allowing for broad use, modification, and distribution, including within commercial and closed-source applications.

Limitations & Caveats

The system requires Windows 10/11 with WSL2 and Ubuntu. An NVIDIA GPU is optional, but running without one significantly slows embedding generation. Advanced features such as multi-agent memory sharing and conflict resolution remain on the roadmap, so the project is still evolving rather than feature-complete.

Health Check

  • Last Commit: 7 hours ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0

Star History

615 stars in the last 24 days
