feelingai-team / MemBrain: AI agent memory framework for enhanced reasoning and recall
Top 96.8% on SourcePulse
MemBrain is an open-source framework designed to provide AI agents with native, robust memory capabilities. It addresses the critical need for agents to store, retrieve, and reason over past interactions and knowledge, enabling more sophisticated and context-aware AI behaviors. Targeted at developers building advanced AI agents and researchers in artificial intelligence, MemBrain offers a structured approach to memory management, aiming to enhance agent performance and coherence.
How It Works
MemBrain functions as an agent-native memory system, leveraging a Pydantic AI-powered backend. Its core architecture involves an LLM service for memory extraction and reasoning, a PostgreSQL database for persistent storage, and separate embedding and reranking services (which can be local, e.g., vLLM, or online alternatives). This design allows for efficient processing of conversational data into retrievable memories, facilitating complex reasoning and long-term context retention crucial for advanced agentic applications.
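The flow described above — extract a memory, embed it, persist it, then retrieve by similarity — can be sketched in miniature. This is an illustrative toy, not MemBrain's actual API: the class and function names are assumptions, the embedding is a stand-in for the real embedding service, and persistence is in-memory rather than PostgreSQL.

```python
import math
from dataclasses import dataclass, field


def embed(text: str) -> list[float]:
    # Stand-in for the embedding service (e.g. a vLLM-hosted model):
    # a normalized bag-of-letters vector, just enough to make
    # similarity-based retrieval runnable.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


@dataclass
class Memory:
    text: str
    embedding: list[float]


@dataclass
class MemoryStore:
    memories: list[Memory] = field(default_factory=list)

    def add(self, fact: str) -> None:
        # In the real system, the LLM service would extract this fact
        # from conversation and PostgreSQL would persist it.
        self.memories.append(Memory(fact, embed(fact)))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Rank stored memories by dot-product similarity to the query;
        # a reranking service would refine this ordering in practice.
        q = embed(query)
        scored = sorted(
            self.memories,
            key=lambda m: -sum(a * b for a, b in zip(q, m.embedding)),
        )
        return [m.text for m in scored[:k]]


store = MemoryStore()
store.add("The user prefers responses in French.")
store.add("The user's project deadline is Friday.")
print(store.retrieve("What language does the user prefer?", k=1))
```

The separation of extraction, embedding, and retrieval mirrors the service boundaries the architecture describes, which is what lets local and online embedding/reranking backends be swapped without touching the store.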
Quick Start & Requirements
Configure a `.env` file for the LLM, database, backend, and embedding/reranking services. Optionally, set up local embedding and reranking services using the provided Docker configurations. Start the PostgreSQL database, install dependencies with `uv sync`, and launch the backend server with `uv run backend`. A health check is available at `http://localhost:9574/health`.
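As a setup fragment, the steps above look roughly like the following. This assumes `.env` is already populated and PostgreSQL (plus any local Docker-based embedding/reranking services) is reachable; it is a sketch of the documented commands, not a turnkey script.

```shell
uv sync                            # install dependencies
uv run backend                     # launch the backend server
curl http://localhost:9574/health  # verify the server is up
```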
Maintenance & Community
The repository is actively being polished, with a feature roadmap expected soon. While specific community channels like Discord or Slack are not detailed, the project references inspiration and code from EverMemOS and Graphiti.
Licensing & Compatibility
This project is licensed under the Apache 2.0 license, which is generally permissive for commercial use and integration into closed-source projects.
Limitations & Caveats
The project is still under active development, with some features like the roadmap being "Coming Soon." The setup requires careful configuration of multiple external services (LLM, database, embedding, reranking), and running local embedding/reranking services necessitates managing Docker containers and model weights.
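To make the multi-service configuration concrete, a `.env` for such a setup typically carries one connection block per service. Every variable name and value below is a hypothetical illustration, not the project's actual schema; consult the repository's example configuration for the real keys.

```
# Hypothetical .env sketch — names and ports are illustrative only
LLM_API_BASE=https://api.example.com/v1
LLM_API_KEY=...
DATABASE_URL=postgresql://user:pass@localhost:5432/membrain
EMBEDDING_API_BASE=http://localhost:8001
RERANKER_API_BASE=http://localhost:8002
```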