MEM1 by MIT-MI

LLM agents with efficient long-horizon reasoning and memory

Created 7 months ago
251 stars

Top 99.8% on SourcePulse

Project Summary

MEM1 addresses the challenge of long-horizon interactions for LLM agents, which often suffer from unbounded memory growth, high computational costs, and degraded reasoning due to full-context prompting. This repository provides the official code for MEM1, an end-to-end reinforcement learning framework designed to enable agents to operate with constant memory. By synergizing memory consolidation and reasoning within a compact internal state, MEM1 optimizes both efficiency and performance, making it a promising solution for developing scalable, long-horizon interactive agents.

How It Works

MEM1 uses end-to-end reinforcement learning to jointly manage agent memory and reasoning. At each turn, the agent updates a compact internal state that integrates relevant prior memory with the new observation while discarding irrelevant content. Because the agent conditions only on this state, its memory footprint stays constant, avoiding the inefficiencies of full-context prompting. The project also introduces a scalable method for composing existing datasets into multi-turn environments for more complex training scenarios.
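The per-turn update described above can be sketched as a minimal loop. This is an illustrative sketch only: the function names, prompt wording, and token cap are hypothetical, not MEM1's actual implementation, and simple truncation stands in for the length control that MEM1 learns via RL.

```python
MAX_STATE_TOKENS = 512  # illustrative cap, not MEM1's actual budget


def consolidate(state: str, observation: str, llm) -> str:
    """Merge the prior internal state with a new observation, keeping only
    task-relevant facts. The prompt wording here is hypothetical."""
    prompt = (
        "Merge the memory below with the new observation. "
        "Keep only the facts needed for the task.\n"
        f"Memory: {state}\nObservation: {observation}"
    )
    # Truncation stands in for the learned compression an RL policy provides.
    return llm(prompt)[:MAX_STATE_TOKENS]


def run_episode(llm, env, n_turns: int) -> str:
    """Run a fixed number of turns with a constant-size context:
    the agent conditions only on the compact state, never the full history."""
    state = ""
    for _ in range(n_turns):
        obs = env.step(state)                 # act using only the compact state
        state = consolidate(state, obs, llm)  # old context is discarded
    return state
```

The key property is that the context passed to the model each turn is bounded by the state cap, regardless of how many turns the episode runs.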

Quick Start & Requirements

Setup requires conda with Python 3.9. Key dependencies include PyTorch (v2.4.0, cu121), vLLM (v0.6.3), and FlashAttention, installed via pip. An optional retriever environment uses Python 3.10 and specific libraries like faiss-gpu. The process involves data download, preprocessing, launching a retrieval server, training, and evaluation. Links to the arXiv paper, project site, demo video, and a Hugging Face checkpoint are provided.

Highlighted Details

MEM1-7B achieves a 3.5x performance gain and 3.7x memory reduction over Qwen2.5-14B-Instruct on a 16-objective multi-hop QA task. It generalizes beyond training horizons. The project has been accepted for oral presentations at NeurIPS 2025 Workshop MTI-LLM and COLM 2025 Workshop RAM2.

Maintenance & Community

Oral presentations at NeurIPS 2025 Workshop MTI-LLM and COLM 2025 Workshop RAM2 signal academic recognition and community interest. Specific community channels are not detailed; the project site and arXiv paper serve as the primary resources.

Licensing & Compatibility

The provided text does not specify the software license. Consequently, details on commercial use compatibility or other licensing restrictions are unavailable.

Limitations & Caveats

No explicit limitations, bugs, or alpha status are mentioned. The multi-step installation with specific dependencies may present a non-trivial setup challenge.

Health Check

Last Commit: 1 month ago
Responsiveness: Inactive
Pull Requests (30d): 0
Issues (30d): 0
Star History: 28 stars in the last 30 days
