Memory-Palace  by AGI-is-going-to-arrive

AI agent long-term memory operating system

Created 1 month ago
266 stars

Top 96.1% on SourcePulse

Project Summary

Memory Palace provides AI agents with persistent, searchable, and auditable long-term memory, enabling seamless cross-session continuity. It addresses a common pain point: agents forgetting context between conversations and having to relearn what they already knew. The project targets developers building AI agents, researchers, and power users who want a robust memory layer for their agents.

How It Works

Memory Palace stores agent memories in a persistent SQLite database so they survive across sessions. It employs a hybrid retrieval system combining keyword, semantic, and reranker models, augmented by intent-aware search that categorizes queries (factual, exploratory, temporal, causal) to apply specialized strategies. A key feature is the auditable write pipeline, which includes a "Write Guard" for pre-checks, snapshotting for full rollback, and an asynchronous index worker. The Model Context Protocol (MCP) offers a unified interface for integrating various AI clients and IDE hosts.
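The hybrid, intent-aware retrieval described above can be sketched roughly as follows. This is an illustrative assumption, not Memory Palace's actual code: the intent classifier is a toy keyword heuristic, and the "semantic" scorer is a character-bigram stand-in for a real embedding model.

```python
# Hypothetical sketch of hybrid retrieval: keyword and semantic scores
# are fused, with fusion weights chosen by a classified query intent.
# All names, weights, and heuristics here are illustrative assumptions.

INTENT_WEIGHTS = {
    # (keyword_weight, semantic_weight) per intent class
    "factual":     (0.7, 0.3),   # exact terms matter most
    "exploratory": (0.3, 0.7),   # favor semantic similarity
    "temporal":    (0.5, 0.5),
    "causal":      (0.4, 0.6),
}

def classify_intent(query: str) -> str:
    """Toy intent classifier; the real system would use a trained model."""
    q = query.lower()
    if any(w in q for w in ("when", "yesterday", "last week")):
        return "temporal"
    if any(w in q for w in ("why", "because", "cause")):
        return "causal"
    if any(w in q for w in ("what is", "who", "where")):
        return "factual"
    return "exploratory"

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query tokens that appear in the document."""
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def semantic_score(query: str, doc: str) -> float:
    """Stand-in for an embedding model: character-bigram Jaccard overlap."""
    def bigrams(s: str) -> set:
        s = s.lower()
        return {s[i:i + 2] for i in range(len(s) - 1)}
    a, b = bigrams(query), bigrams(doc)
    return len(a & b) / max(len(a | b), 1)

def hybrid_search(query: str, docs: list, top_k: int = 3) -> list:
    """Rank docs by an intent-weighted blend of keyword and semantic scores."""
    kw_w, sem_w = INTENT_WEIGHTS[classify_intent(query)]
    ranked = sorted(
        docs,
        key=lambda d: kw_w * keyword_score(query, d) + sem_w * semantic_score(query, d),
        reverse=True,
    )
    return ranked[:top_k]
```

A reranker stage, which the summary also mentions, would typically re-score only the top results from this fused list with a heavier cross-encoder model.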

Quick Start & Requirements

  • Primary Install/Run: Three options: prebuilt Docker images, manual local setup, or one-click Docker deployment.
  • Prerequisites: Python 3.10+ (3.11+ recommended), Node.js 20.19+ (or >=22.12), npm 9+, and Docker (optional).
  • Links: SKILLS_QUICKSTART_EN.md (CLI), IDE_HOSTS_EN.md (IDE hosts), docs/ for full documentation.

Highlighted Details

  • Auditable Write Pipeline: Ensures every memory write is logged, snapshotted, and auditable, with rollback available via the Review dashboard.
  • Unified Retrieval Engine: Supports keyword, semantic, and hybrid retrieval with graceful fallback mechanisms.
  • Intent-Aware Search: Classifies query intent to optimize retrieval strategies.
  • Memory Governance: Manages memory vitality, decay, cleanup, and consolidation for efficient memory management.
  • Multi-Client MCP Integration: Provides a single protocol for diverse AI clients (Codex, Gemini CLI, Claude Code, OpenCode) and IDEs.
  • Flexible Deployment: Four profiles (A-D) cater to different resource constraints, from minimal local setups to production environments.
  • Observability Dashboard: A React-based dashboard offers views for memory browsing, review/rollback, maintenance, and real-time monitoring.
  • Benchmarking: Detailed retrieval quality metrics are provided for different profiles and datasets, demonstrating performance.
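The auditable write pipeline in the first bullet can be approximated with standard SQLite savepoints: a pre-write guard, a snapshot point, an audit-log entry, and rollback on failure. The schema, function names, and guard policy below are illustrative assumptions, not Memory Palace's actual implementation.

```python
import sqlite3

# Hypothetical sketch of a guarded, auditable write. A savepoint acts
# as the "snapshot": if anything fails mid-write, we roll back to it.

def init_db(conn: sqlite3.Connection) -> None:
    conn.isolation_level = None  # autocommit; savepoints are managed explicitly
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS memories (id INTEGER PRIMARY KEY, content TEXT);
        CREATE TABLE IF NOT EXISTS audit_log (
            id INTEGER PRIMARY KEY, action TEXT, content TEXT, status TEXT);
    """)

def write_guard(content: str) -> bool:
    """Pre-check: reject empty or oversized memories (toy policy)."""
    return bool(content.strip()) and len(content) < 10_000

def guarded_write(conn: sqlite3.Connection, content: str) -> bool:
    """Write a memory through the guard; every outcome lands in the audit log."""
    if not write_guard(content):
        conn.execute(
            "INSERT INTO audit_log (action, content, status) VALUES ('write', ?, 'rejected')",
            (content,))
        return False
    conn.execute("SAVEPOINT before_write")  # snapshot for rollback
    try:
        conn.execute("INSERT INTO memories (content) VALUES (?)", (content,))
        conn.execute(
            "INSERT INTO audit_log (action, content, status) VALUES ('write', ?, 'committed')",
            (content,))
        conn.execute("RELEASE SAVEPOINT before_write")
        return True
    except sqlite3.Error:
        conn.execute("ROLLBACK TO SAVEPOINT before_write")
        conn.execute("RELEASE SAVEPOINT before_write")
        return False
```

The real pipeline adds an asynchronous index worker after the commit; a savepoint-per-write scheme like this keeps the memory table and the audit trail consistent either way.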

Maintenance & Community

The repository is maintained under the "AGI-is-going-to-arrive" organization. No specific community channels (like Discord/Slack) or notable contributors are detailed in the provided README.

Licensing & Compatibility

The project is released under the MIT License, permitting commercial use and integration into closed-source projects.

Limitations & Caveats

Native Windows host runs carry explicit target-environment caveats. Profiles C and D require manual configuration of external model endpoints (embedding, reranker, LLM). Certain Docker configurations, particularly those involving network filesystems, require careful handling of Write-Ahead Logging (WAL) settings to avoid data corruption. Some advanced features, such as the LLM-powered Write Guard, are noted as experimental.

Health Check

  • Last Commit: 2 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 107 stars in the last 30 days
