LLM memory engine for context-aware AI
Top 38.8% on SourcePulse
The GibsonAI/memori repository provides an open-source memory engine designed to enhance Large Language Models (LLMs), AI agents, and multi-agent systems by enabling human-like memory capabilities. It addresses the challenge of LLMs lacking persistent context across conversations, allowing them to "remember" past interactions and information. This benefits developers building more sophisticated and context-aware AI applications by reducing repetitive context input and enabling more intelligent, personalized interactions.
How It Works
Memori operates through a "dual-mode" memory system: "Conscious Mode" for short-term working memory and "Auto Mode" for dynamic database search. Conscious Mode mimics human short-term memory by promoting key conversations to a readily accessible state, injected once at the start of a session. Auto Mode continuously analyzes user queries and searches the entire memory database for relevant context, injecting it with each LLM call. This approach leverages Pydantic for structured, validated memory processing and supports flexible database connections (SQLite, PostgreSQL, MySQL), aiming for a simple, reliable architecture.
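The dual-mode split can be illustrated with a small, self-contained sketch. This is a hypothetical illustration of the concept, not Memori's actual API: the class name, schema, and naive keyword search are all assumptions, and SQLite is used because it is one of the backends the project supports.

```python
# Hypothetical sketch of the dual-mode idea (not Memori's real API):
# "Conscious Mode" injects essential memories once at session start;
# "Auto Mode" searches the whole store on every query.
import sqlite3

class DualModeMemory:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories ("
            "id INTEGER PRIMARY KEY, text TEXT, essential INTEGER DEFAULT 0)"
        )

    def remember(self, text, essential=False):
        self.db.execute(
            "INSERT INTO memories (text, essential) VALUES (?, ?)",
            (text, int(essential)),
        )

    def conscious_context(self):
        # Conscious Mode: promoted memories, injected once per session.
        rows = self.db.execute(
            "SELECT text FROM memories WHERE essential = 1"
        ).fetchall()
        return [r[0] for r in rows]

    def auto_search(self, query):
        # Auto Mode: naive keyword match over the full store, per query.
        # (Memori's actual retrieval and ranking are more sophisticated.)
        terms = query.lower().split()
        rows = self.db.execute("SELECT text FROM memories").fetchall()
        return [r[0] for r in rows if any(t in r[0].lower() for t in terms)]

mem = DualModeMemory()
mem.remember("User prefers PostgreSQL in production", essential=True)
mem.remember("User asked about Pydantic validation last week")
print(mem.conscious_context())   # session-start injection
print(mem.auto_search("pydantic"))  # per-query retrieval
```

The key architectural point is the cost trade-off: Conscious Mode pays a one-time injection cost per session, while Auto Mode pays a retrieval cost on every LLM call in exchange for fresher context.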
Quick Start & Requirements
Install the SDK and set your OpenAI API key:

pip install memorisdk
export OPENAI_API_KEY="sk-your-openai-key-here"
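In practice, the per-call context injection described above amounts to prepending retrieved memories to each chat request. The sketch below shows that message assembly; the `build_messages` helper and the memory list are illustrative assumptions, not part of Memori's API.

```python
# Illustrative only: how retrieved memories could be injected into an
# OpenAI/LiteLLM-style chat request. Helper names are hypothetical.
def build_messages(user_query, retrieved_memories):
    context = "\n".join(f"- {m}" for m in retrieved_memories)
    system = "You are a helpful assistant. Relevant facts from memory:\n" + context
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_query},
    ]

messages = build_messages(
    "Which database should I use?",
    ["User prefers PostgreSQL in production"],
)
# With LiteLLM, this list would then be passed as:
#   from litellm import completion
#   completion(model="gpt-4o", messages=messages)
print(messages[0]["content"])
```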
LiteLLM is recommended for easy integration with various LLM providers (pip install litellm).

Highlighted Details
Maintenance & Community
Contribution guidelines are in CONTRIBUTING.md.

Licensing & Compatibility
Limitations & Caveats
The README does not explicitly detail limitations, performance benchmarks, or potential scaling issues with very large memory databases. The "conscious ingest" feature's automatic promotion of "essential conversations" is described but lacks specific algorithmic details on how "importance" is determined.