SimpleMem (aiming-lab): Efficient lifelong memory for LLM agents
Top 57.5% on SourcePulse
SimpleMem addresses the challenge of efficient long-term memory for LLM agents with a novel three-stage pipeline grounded in semantic lossless compression, maximizing information density and token utilization. It targets researchers and engineers who need robust, scalable, and cost-effective memory management for LLM agents, without relying on passive context accumulation or expensive iterative reasoning.
How It Works
SimpleMem's core innovation is a three-stage pipeline designed for semantic lossless compression. Stage 1, Semantic Structured Compression, transforms unstructured dialogue into self-contained atomic facts with resolved coreferences and absolute timestamps, eliminating downstream reasoning overhead. Stage 2, Structured Indexing, organizes memory across semantic (vector embeddings), lexical (keyword index), and symbolic (metadata) layers for multi-granular retrieval. Stage 3, Adaptive Query-Aware Retrieval, dynamically adjusts retrieval scope based on query complexity, balancing comprehensive context with token efficiency. This pipeline maximizes information density and token utilization, offering a superior balance between performance and efficiency compared to existing methods.
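The three layers described above can be sketched as a toy in-memory index. This is an illustration of the idea, not SimpleMem's actual implementation: the "semantic" layer is approximated with bag-of-words cosine similarity instead of real embeddings, and the adaptive retrieval heuristic (query length) is an assumption.

```python
from collections import Counter
from math import sqrt

def bow(text):
    # Bag-of-words vector; a stand-in for the semantic embedding layer
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

class ToyMemory:
    def __init__(self):
        self.facts = []  # each fact: {"text", "keywords", "meta"}

    def add_fact(self, text, keywords, meta):
        # Stage 1 output: a self-contained atomic fact with an absolute timestamp
        self.facts.append({"text": text, "keywords": set(keywords), "meta": meta})

    def retrieve(self, query, meta_filter=None):
        # Stage 3: widen the retrieval scope for longer (more complex) queries
        k = 1 if len(query.split()) <= 4 else 3
        qv, qk = bow(query), set(query.lower().split())
        scored = []
        for f in self.facts:
            if meta_filter and any(f["meta"].get(m) != v for m, v in meta_filter.items()):
                continue  # symbolic layer: hard metadata filter
            # Fuse the semantic and lexical layers into one score
            score = cosine(qv, bow(f["text"])) + 0.5 * len(qk & f["keywords"])
            scored.append((score, f["text"]))
        return [t for _, t in sorted(scored, reverse=True)[:k]]

mem = ToyMemory()
mem.add_fact("Alice moved to Berlin on 2024-03-01.", ["alice", "berlin"], {"speaker": "alice"})
mem.add_fact("Bob prefers tea over coffee.", ["bob", "tea"], {"speaker": "bob"})
print(mem.retrieve("Where does Alice live"))  # → ['Alice moved to Berlin on 2024-03-01.']
```

The key property mirrored here is that each stored fact is self-contained (no coreference back to the dialogue), so retrieval needs no extra reasoning pass over the original conversation.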
Quick Start & Requirements
1. Clone the repository with git clone https://github.com/aiming-lab/SimpleMem.git, navigate into the directory, and install dependencies using pip install -r requirements.txt.
2. Edit config.py to set your API key, desired LLM model (e.g., gpt-4.1-mini), and embedding model (e.g., Qwen/Qwen3-Embedding-0.6B).
3. Instantiate SimpleMemSystem, add dialogues via add_dialogue(), finalize encoding with finalize(), and query using ask(). Parallel processing options are available for large-scale operations.
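The call pattern above can be illustrated with a toy stand-in class. The method names (add_dialogue, finalize, ask) come from the project's README, but the signatures, return types, and internals here are assumptions; the real system calls an LLM and requires the API key configured in config.py.

```python
class FakeSimpleMemSystem:
    """Toy stand-in mirroring SimpleMem's documented call pattern."""

    def __init__(self):
        self._raw, self._facts, self._ready = [], [], False

    def add_dialogue(self, speaker, text):
        # Accumulate raw dialogue turns before encoding
        self._raw.append((speaker, text))

    def finalize(self):
        # Stand-in for Stages 1-2: compress turns into indexed "atomic facts"
        self._facts = [f"{speaker}: {text}" for speaker, text in self._raw]
        self._ready = True

    def ask(self, question):
        # Stand-in retrieval: return facts sharing a word with the question
        assert self._ready, "call finalize() before ask()"
        q = set(question.lower().split())
        return [f for f in self._facts if q & set(f.lower().split())]

mem = FakeSimpleMemSystem()
mem.add_dialogue("user", "my favorite city is Kyoto")
mem.add_dialogue("assistant", "noted")
mem.finalize()
print(mem.ask("which city does the user like"))  # → ['user: my favorite city is Kyoto']
```

Note the explicit finalize() step: encoding is a distinct phase after ingestion, so batches of dialogues can be compressed and indexed together (where the parallel processing options apply) rather than per turn.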
Maintenance & Community
The project has established a Discord server and WeChat group for collaboration and idea exchange. A paper detailing the methodology has been released on arXiv.
Licensing & Compatibility
The project is licensed under the MIT License, which is permissive for commercial use and integration into closed-source projects.
Limitations & Caveats
The README does not explicitly document limitations, alpha status, or known bugs. Setup requires an OpenAI-compatible API key.