graph-memory by adoresever

Agent context compression and persistent memory

Created 1 month ago
436 stars

Top 68.2% on SourcePulse

View on GitHub
Project Summary

Summary

This project addresses the critical issues of context explosion and cross-session amnesia in AI agents, particularly within the OpenClaw framework. It provides a Knowledge Graph Context Engine that compresses conversation history by up to 75% and enables agents to recall and reuse knowledge across different interactions. The primary benefit is creating AI agents that learn from experience, offering a more persistent and intelligent conversational capability for developers and power users.

How It Works

Graph-memory constructs a typed property graph from conversations, representing tasks, skills, and events with relationships like USED_SKILL and SOLVED_BY. It employs Personalized PageRank (PPR) for relevance ranking and community detection to cluster related knowledge. A novel dual-path recall mechanism merges precise entity-level search with generalized community-level semantic matching. Additionally, it incorporates "episodic context" by linking graph nodes to their original conversation snippets, enhancing recall accuracy and providing a richer understanding of past interactions.
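The relevance-ranking step described above uses standard Personalized PageRank. The following power-iteration sketch is illustrative only, not the plugin's actual implementation; the node identifiers, parameter defaults, and graph representation are assumptions:

```typescript
// Minimal Personalized PageRank by power iteration (illustrative, not the
// plugin's code). A fraction (1 - damping) of the rank mass teleports back
// to the seed nodes each step, so nodes close to the seeds score highest.
// Dangling nodes simply leak mass, which is acceptable for a ranking sketch.
type Graph = Map<string, string[]>; // node id -> outgoing neighbor ids

function personalizedPageRank(
  graph: Graph,
  seeds: Set<string>,
  damping = 0.85,
  iterations = 50
): Map<string, number> {
  const nodes = [...graph.keys()];
  const restart = 1 / seeds.size;
  let rank = new Map<string, number>(
    nodes.map((n): [string, number] => [n, seeds.has(n) ? restart : 0])
  );

  for (let i = 0; i < iterations; i++) {
    // Teleport component: (1 - damping) of the mass returns to the seeds.
    const next = new Map<string, number>(
      nodes.map((n): [string, number] => [
        n,
        seeds.has(n) ? (1 - damping) * restart : 0,
      ])
    );
    // Propagation component: each node spreads damping * rank over its out-edges.
    for (const [node, neighbors] of graph) {
      if (neighbors.length === 0) continue;
      const share = (damping * (rank.get(node) ?? 0)) / neighbors.length;
      for (const nb of neighbors) {
        next.set(nb, (next.get(nb) ?? 0) + share);
      }
    }
    rank = next;
  }
  return rank;
}
```

Because the restart vector concentrates on the query's seed entities rather than being uniform, the resulting scores measure relevance to those entities, which is what makes PPR a natural fit for entity-level recall.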

Quick Start & Requirements

  • Primary install: use the Windows one-click installer (graph-memory-installer-win-x64.exe from Releases), or install the plugin from the npm registry with pnpm openclaw plugins install graph-memory.
  • Prerequisites: OpenClaw (v2026.3.x+), Node.js 22+.
  • Configuration: Essential steps include setting "contextEngine": "graph-memory" in ~/.openclaw/openclaw.json and configuring LLM (config.llm) and embedding (config.embedding) API credentials.
  • Verification: Check gateway logs for readiness messages and verify database ingestion and knowledge extraction.
  • Links: see the project's Releases page for the installer.
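Putting the configuration bullets together, a minimal ~/.openclaw/openclaw.json might look like the sketch below. Only "contextEngine", config.llm, and config.embedding come from the README; the nested key names, endpoint URLs, and model names are illustrative placeholders:

```json
{
  "contextEngine": "graph-memory",
  "config": {
    "llm": {
      "baseUrl": "https://api.example.com/v1",
      "apiKey": "YOUR_LLM_API_KEY",
      "model": "your-llm-model"
    },
    "embedding": {
      "baseUrl": "https://api.example.com/v1",
      "apiKey": "YOUR_EMBEDDING_API_KEY",
      "model": "your-embedding-model"
    }
  }
}
```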

Highlighted Details

  • Achieves up to 75% context compression in conversations.
  • Features community-aware dual-path recall (precise entity-level and generalized community-level).
  • Integrates episodic context (original conversation snippets) for faithful context reconstruction.
  • Supports universal embedding providers via OpenAI-compatible endpoints using a fetch-based module.
  • Offers a convenient one-click installer for Windows users.
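The "universal embedding providers" bullet above can be illustrated with a short fetch-based sketch. The function name and error handling here are assumptions, but the request shape follows the widely implemented OpenAI embeddings API (POST {base}/v1/embeddings with model and input fields), which is what "OpenAI-compatible" providers expose:

```typescript
// Illustrative fetch-based embedding client for any OpenAI-compatible
// endpoint (not the plugin's actual module). Works against OpenAI itself
// or self-hosted servers that mirror its /v1/embeddings contract.
interface EmbeddingResponse {
  data: { embedding: number[]; index: number }[];
}

async function embedTexts(
  baseUrl: string,
  apiKey: string,
  model: string,
  texts: string[],
  fetchFn: typeof fetch = fetch // injectable for testing
): Promise<number[][]> {
  const res = await fetchFn(`${baseUrl}/v1/embeddings`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ model, input: texts }),
  });
  if (!res.ok) {
    throw new Error(`embedding request failed with status ${res.status}`);
  }
  const json = (await res.json()) as EmbeddingResponse;
  // The API may return items out of order; restore input order via index.
  return json.data
    .sort((a, b) => a.index - b.index)
    .map((d) => d.embedding);
}
```

Because only the base URL and key change between providers, a single client like this covers OpenAI, local inference servers, and hosted compatibles alike.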

Maintenance & Community

The project is actively developed, with v2.0 introducing significant enhancements. Development follows standard GitHub/npm workflows. No specific community channels (e.g., Discord, Slack) or external sponsorships are mentioned in the README.

Licensing & Compatibility

  • License: MIT License.
  • Compatibility: The MIT license is permissive, allowing commercial use and integration into closed-source projects, provided the copyright and license notice are retained.

Limitations & Caveats

This plugin is strictly dependent on the OpenClaw environment. Full functionality, particularly semantic search and community recall, requires proper configuration of LLM and embedding services with valid API keys and endpoints. Without embedding configuration, the system falls back to less sophisticated FTS5 full-text search.

Health Check

  • Last commit: 4 days ago
  • Responsiveness: Inactive
  • Pull requests (30d): 11
  • Issues (30d): 39
  • Star history: 417 stars in the last 30 days

