A-mem-sys  by WujiangXu

Dynamic memory organization for LLM agents

Created 7 months ago
272 stars

Top 94.9% on SourcePulse

Project Summary

A-MEM is an agentic memory system designed to enhance Large Language Model (LLM) agents by providing dynamic memory organization and flexible interaction capabilities. It addresses the limitations of traditional memory systems by enabling agents to effectively leverage historical experiences through sophisticated organization and retrieval. The system is targeted at developers building complex LLM agents, offering a significant benefit in managing and utilizing agent memories for improved performance on real-world tasks.

How It Works

The core approach applies Zettelkasten principles to dynamic memory organization. When a new memory is added, an LLM analyzes its content to generate keywords, context, and tags. These attributes, together with the original content, are used to create enhanced vector embeddings, which are stored in ChromaDB for semantic retrieval. Using these embeddings, the system scans historical memories for relevant connections and establishes dynamic links based on content and metadata similarity. This supports continuous memory evolution and refinement, enabling adaptive memory management driven by agent decision-making, and yields better retrieval and relationship analysis than static memory systems.
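The pipeline above can be sketched in miniature. Everything below is illustrative: the class and function names are not A-MEM's actual API, and a toy bag-of-words embedding stands in for the real vector embeddings and the ChromaDB store.

```python
# Illustrative sketch of the memory-note pipeline: an LLM-analyzed note
# (keywords, context, tags) is embedded together with its content, then
# linked to similar historical memories. Names are hypothetical.
from dataclasses import dataclass, field
from collections import Counter
import math

@dataclass
class MemoryNote:
    note_id: str
    content: str
    keywords: list = field(default_factory=list)   # LLM-generated
    context: str = ""                              # LLM-generated
    tags: list = field(default_factory=list)       # LLM-generated
    links: list = field(default_factory=list)      # ids of linked notes

def enhanced_text(note: MemoryNote) -> str:
    # Content plus metadata, embedded as one string so that retrieval
    # reflects both the raw content and the LLM-generated attributes.
    return " ".join([note.content, note.context, *note.keywords, *note.tags])

def embed(text: str) -> Counter:
    # Toy bag-of-words embedding; the real system uses dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_new_note(new: MemoryNote, store: list, threshold: float = 0.3) -> None:
    # Compare the new note's enhanced embedding against historical
    # memories and record bidirectional links above the threshold.
    new_vec = embed(enhanced_text(new))
    for old in store:
        if cosine(new_vec, embed(enhanced_text(old))) >= threshold:
            new.links.append(old.note_id)
            old.links.append(new.note_id)
    store.append(new)
```

In the real system the similarity comparison runs against embeddings stored in ChromaDB, and the threshold/linking decisions are made adaptively rather than with a fixed cutoff.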

Quick Start & Requirements

  • Primary Install: Clone https://github.com/agiresearch/A-mem.git, create and activate a Python virtual environment, and run pip install . from the repository root. For a development (editable) install, use pip install -e . instead.
  • Prerequisites: Python. LLM backend configuration is required:
    • OpenAI: Requires an OpenAI API key.
    • Ollama: Requires Ollama to be installed and running locally.
    • SGLang: Requires pip install "sglang[all]" and launching a SGLang server (python -m sglang.launch_server ...).
    • OpenRouter: Requires an OpenRouter API key (can be supplied via the OPENROUTER_API_KEY environment variable).
  • Dependencies: ChromaDB (for vector storage), specific LLM client libraries based on backend choice.
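The install and backend steps above can be consolidated into one setup sketch. The repository URL, pip commands, SGLang launch command, and OPENROUTER_API_KEY come from the list above; the OPENAI_API_KEY variable name is an assumption based on the standard OpenAI client convention.

```shell
# Setup sketch based on the Quick Start steps above.
git clone https://github.com/agiresearch/A-mem.git
cd A-mem
python -m venv .venv && source .venv/bin/activate
pip install .            # or: pip install -e .   (development install)

# OpenAI backend (env-var name assumed, per the OpenAI client convention):
export OPENAI_API_KEY="sk-..."

# OpenRouter backend:
export OPENROUTER_API_KEY="..."

# SGLang backend (extra install plus a local server):
pip install "sglang[all]"
python -m sglang.launch_server ...   # launch arguments as in the README
```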

Highlighted Details

  • Dynamic memory organization based on Zettelkasten principles.
  • Intelligent indexing and linking of memories via ChromaDB using enhanced embeddings (content + metadata).
  • Comprehensive note generation with structured attributes (keywords, context, tags) via LLM analysis.
  • Support for multiple LLM backends: OpenAI, Ollama, SGLang, OpenRouter.
  • Empirical experiments reported by the authors show superior performance over state-of-the-art baselines.

Maintenance & Community

No explicit mentions of core maintainers, community channels (Discord/Slack), sponsorships, or roadmaps are present in the provided README.

Licensing & Compatibility

  • License: MIT License.
  • Compatibility: The MIT license is permissive, allowing for commercial use and integration within closed-source projects without significant restrictions.

Limitations & Caveats

Setting up specific LLM backends, particularly SGLang and local Ollama, requires installation and server-configuration steps beyond the basic Python package install. Performance claims are substantiated by empirical experiments detailed in a separate paper and its companion repository rather than in this code repository.

Health Check

  • Last Commit: 3 months ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 1
  • Star History: 33 stars in the last 30 days
