adoresever
Agent context compression and persistent memory
Top 68.2% on SourcePulse
Summary
This project addresses the critical issues of context explosion and cross-session amnesia in AI agents, particularly within the OpenClaw framework. It provides a Knowledge Graph Context Engine that compresses conversation history by up to 75% and enables agents to recall and reuse knowledge across different interactions. The primary benefit is creating AI agents that learn from experience, offering a more persistent and intelligent conversational capability for developers and power users.
How It Works
Graph-memory constructs a typed property graph from conversations, representing tasks, skills, and events with relationships like USED_SKILL and SOLVED_BY. It employs Personalized PageRank (PPR) for relevance ranking and community detection to cluster related knowledge. A novel dual-path recall mechanism merges precise entity-level search with generalized community-level semantic matching. Additionally, it incorporates "episodic context" by linking graph nodes to their original conversation snippets, enhancing recall accuracy and providing a richer understanding of past interactions.
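To make the ranking step concrete, here is a minimal sketch of personalized PageRank over a tiny typed property graph. The node and relationship names (tasks, skills, USED_SKILL, SOLVED_BY) follow the description above, but the code is a generic power-iteration PPR written for illustration, not the plugin's actual implementation or schema.

```python
def personalized_pagerank(edges, seeds, alpha=0.85, iters=50):
    """Rank nodes by relevance to a seed set via power iteration.

    edges: list of (src, dst, relationship) triples.
    seeds: nodes the random walker teleports back to (the query context).
    """
    nodes = sorted({n for e in edges for n in e[:2]})
    out = {n: [] for n in nodes}
    for src, dst, _rel in edges:
        out[src].append(dst)
    # Teleport vector: restart only at the seed (query-relevant) nodes.
    p = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(p)
    for _ in range(iters):
        nxt = {n: (1 - alpha) * p[n] for n in nodes}
        for n in nodes:
            if out[n]:
                share = alpha * rank[n] / len(out[n])
                for m in out[n]:
                    nxt[m] += share
            else:
                # Dangling node: spread its mass uniformly.
                for m in nodes:
                    nxt[m] += alpha * rank[n] / len(nodes)
        rank = nxt
    return rank

# Hypothetical graph fragment in the spirit of the text's examples.
edges = [
    ("task:fix-login-bug", "skill:sql-debugging", "USED_SKILL"),
    ("task:fix-login-bug", "event:session-42", "SOLVED_BY"),
    ("task:migrate-db", "skill:sql-debugging", "USED_SKILL"),
]
scores = personalized_pagerank(edges, seeds={"task:fix-login-bug"})
```

With the walk seeded at `task:fix-login-bug`, the shared skill node ends up ranked above nodes reachable only through other tasks, which is the behavior that makes PPR useful for pulling query-relevant knowledge out of a larger graph.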
Quick Start & Requirements
Install either from a prebuilt Windows binary (graph-memory-installer-win-x64.exe from Releases) or via npm with pnpm openclaw plugins install graph-memory. Activate the engine by setting "contextEngine": "graph-memory" in ~/.openclaw/openclaw.json, and configure LLM (config.llm) and embedding (config.embedding) API credentials.
Highlighted Details
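The activation step described in Quick Start might look roughly like the following in ~/.openclaw/openclaw.json. Only the "contextEngine" key and the config.llm / config.embedding sections are taken from the text; the field names inside them (apiKey, endpoint) are placeholder assumptions, not confirmed by the source.

```json
{
  "contextEngine": "graph-memory",
  "config": {
    "llm": { "apiKey": "<your-llm-key>", "endpoint": "<llm-endpoint>" },
    "embedding": { "apiKey": "<your-embedding-key>", "endpoint": "<embedding-endpoint>" }
  }
}
```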
Maintenance & Community
The project is actively developed, with v2.0 introducing significant enhancements. Development follows standard GitHub/npm workflows. No specific community channels (e.g., Discord, Slack) or external sponsorships are mentioned in the README.
Licensing & Compatibility
Limitations & Caveats
This plugin is strictly dependent on the OpenClaw environment. Full functionality, particularly semantic search and community recall, requires proper configuration of LLM and embedding services with valid API keys and endpoints. Without embedding configuration, the system falls back to less sophisticated FTS5 full-text search.
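The FTS5 fallback can be illustrated with a short sketch using Python's built-in sqlite3 module (which bundles FTS5 in standard CPython builds). The table and column names are invented for illustration, not the plugin's actual schema; the point is that keyword matching finds only literal token overlap, which is why the embedding-backed semantic path is preferable.

```python
import sqlite3

# Index a couple of conversation snippets in an FTS5 virtual table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE snippets USING fts5(content)")
conn.executemany(
    "INSERT INTO snippets(content) VALUES (?)",
    [
        ("Fixed the login bug by correcting the SQL join condition.",),
        ("Discussed migrating the database to PostgreSQL.",),
    ],
)

# Keyword match only: a paraphrased query like "sign-in failure"
# would miss the first snippet, whereas an embedding search could
# still surface it by semantic similarity.
rows = conn.execute(
    "SELECT content FROM snippets WHERE snippets MATCH 'login'"
).fetchall()
```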