MemoryBear by SuanmoSuanyangTechnology

AI memory system for cognitive evolution and dynamic knowledge processing

Created 5 months ago
856 stars

Top 41.6% on SourcePulse

Project Summary

MemoryBear equips AI with human-like, dynamic memory capabilities, moving beyond static knowledge storage to enable deep understanding, autonomous evolution, and cognitive collaboration. It targets AI developers and researchers seeking advanced memory management for more intelligent and adaptive AI systems.

How It Works

MemoryBear simulates biological cognitive mechanisms, employing a closed-loop system for knowledge intake, refinement, association, and forgetting. Key components include a Memory Extraction Engine for semantic parsing and structured data generation, Neo4j for graph-based knowledge storage mirroring neuron-synapse models, and a Hybrid Search combining keyword and semantic vector retrieval for precision. A novel Memory Forgetting Engine dynamically decays knowledge based on strength and timeliness, while a Self-Reflection Engine periodically optimizes stored memories for autonomous evolution. This approach treats knowledge as dynamic and evolving, shifting from passive retrieval to proactive cognitive assistance.
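The strength-and-timeliness decay described above can be pictured as exponential decay over each memory's reinforcement strength and last access time. The sketch below is a hypothetical illustration of that idea, not MemoryBear's actual implementation; the field names (strength, last_access) and the half-life parameter are assumptions:

```python
import math

def retention(strength: float, last_access: float,
              now: float, half_life: float = 7 * 86400) -> float:
    """Hypothetical retention score: stronger and more recently
    accessed memories decay more slowly (exponential half-life decay)."""
    age = now - last_access  # seconds since last access
    return strength * math.exp(-math.log(2) * age / half_life)

def prune(memories: list[dict], now: float,
          threshold: float = 0.1) -> list[dict]:
    """Forget (drop) memories whose retention falls below the threshold."""
    return [m for m in memories
            if retention(m["strength"], m["last_access"], now) >= threshold]
```

With a one-week half-life, a memory untouched for two months retains under 1% of its strength and is pruned, while recently reinforced memories survive; a real forgetting engine would also feed reinforcement signals back into strength.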

Quick Start & Requirements

  • Prerequisites: Node.js (20.19+ or 22.12+), Python 3.12, PostgreSQL 13+, Neo4j 4.4+, Redis 6.0+. Docker is recommended for setting up database services.
  • Installation: Clone the repository, then set up the backend: install Python dependencies (uv sync), start the database services (PostgreSQL, Neo4j, Redis via Docker), configure environment variables in .env, run the PostgreSQL migrations (alembic upgrade head), and start the backend API (uv run -m app.main). For the frontend, install Node.js dependencies (npm install), update the proxy configuration in vite.config.ts, and start the dev server (npm run dev).
  • Initialization: Run curl -X POST http://127.0.0.1:8000/api/setup (curl.exe on Windows) to initialize the database and obtain super-administrator credentials.
  • Docs: API documentation is available at http://localhost:8000/docs.

Highlighted Details

  • Achieves state-of-the-art performance across various reasoning tasks, outperforming competing memory systems such as Mem0, Zep, and LangMem.
  • Hybrid search yields 92% accuracy, a 35% improvement over single-mode retrieval.
  • The graph-based architecture pushes overall accuracy to 75.00 ± 0.20% while maintaining efficient latency.
  • The forgetting mechanism keeps redundant knowledge below 8%, reducing waste by over 60%.
  • FastAPI services offer average latency below 50 ms and sustain 1000 QPS concurrency.
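Hybrid search of the kind cited above is commonly implemented by fusing a keyword-match score with a semantic vector-similarity score. This is a minimal sketch of that pattern, not MemoryBear's code; the scoring functions, the fusion weight alpha, and the plain-split tokenization are all assumptions:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, doc: str) -> float:
    """Crude keyword overlap: fraction of query terms present in the doc."""
    terms = set(query.lower().split())
    words = set(doc.lower().split())
    return len(terms & words) / len(terms) if terms else 0.0

def hybrid_score(query: str, doc: str,
                 q_vec: list[float], d_vec: list[float],
                 alpha: float = 0.5) -> float:
    """Weighted fusion of keyword and semantic scores; alpha balances
    exact-match precision against semantic recall."""
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)
```

Production systems typically replace the keyword side with BM25 and the embeddings with a trained model, but the fusion step remains this weighted combination.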

Maintenance & Community

Community engagement is fostered through GitHub Issues, Pull Requests, and Discussions. A WeChat community group is available, and collaboration inquiries can be directed to tianyou_hubm@redbearai.com.

Licensing & Compatibility

Licensed under the Apache License 2.0, permitting commercial use and integration. The architecture is compatible with enterprise microservice ecosystems and supports Docker-based deployment.

Limitations & Caveats

The provided README does not explicitly detail any limitations, alpha status, or known bugs.

Health Check

  • Last Commit: 4 hours ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 205
  • Issues (30d): 0
  • Star History: 743 stars in the last 30 days
