powermem by oceanbase

AI long-term memory system for intelligent applications

Created 2 months ago
311 stars

Top 86.8% on SourcePulse

View on GitHub
Project Summary

Summary

PowerMem provides an AI-powered long-term memory system for LLMs, addressing challenges in retaining conversational history, user preferences, and context. Targeting AI developers, it offers an accurate, agile, and affordable solution via hybrid storage and intelligent memory management, enhancing LLM performance and reducing costs.

How It Works

PowerMem uses a hybrid storage architecture combining vector retrieval, full-text search, and graph databases. Its novelty lies in applying Ebbinghaus forgetting-curve theory to dynamic memory retention, prioritizing recent information through time-decay weighting. It also features comprehensive multi-agent support covering isolation, collaboration, and fine-grained permissions, aiming to provide robust AI memory infrastructure.

Quick Start & Requirements

Installation is a single pip install powermem, with configuration managed through .env files. The project offers a Python SDK for direct integration, a production-ready HTTP API server deployable via CLI, Docker, or Docker Compose, and an MCP server requiring uvx (launched via uvx powermem-mcp sse). Official guides are available for Getting Started, API Server usage, and MCP Server integration.
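Consolidating the commands mentioned above (both are taken from the project's own quick start; the MCP server is run via uvx rather than installed permanently):

```shell
# Install the PowerMem Python SDK (configuration then goes in a .env file)
pip install powermem

# Launch the MCP server over SSE using uvx (requires the uv tool)
uvx powermem-mcp sse
```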

Highlighted Details

  • Performance Claims: On the LOCOMO benchmark, reports a 48.77% accuracy improvement over full-context (78.70% vs 52.9%), 91.83% faster responses (1.44s vs 17.12s p95 latency), and a 96.53% token reduction (0.9k vs 26k tokens).
  • Intelligent Memory Management: Features LLM-based extraction of key facts, automatic duplicate detection, conflict resolution, and merging of related memories.
  • Ebbinghaus Forgetting Curve: Implements time-decay weighting to naturally "forget" outdated information, prioritizing retention of recent and relevant data.
  • Multi-Agent Support: Provides agent memory isolation, cross-agent collaboration, shared memory capabilities, and flexible permission controls.
  • Hybrid Retrieval: Combines vector, full-text, and graph retrieval, supporting multi-hop graph traversal for complex relationship queries.
  • Multimodal Support: Handles text, image, and audio memory by converting non-textual data into descriptions for storage and retrieval.
  • Sub Stores: Implements data partitioning for improved query performance and resource utilization in ultra-large-scale datasets.
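The time-decay weighting from the Ebbinghaus forgetting-curve bullet can be sketched in a few lines of Python. This is an illustrative model only, not PowerMem's actual implementation; the half_life_days parameter and the rank_memories helper are hypothetical names chosen for the sketch.

```python
def time_decay_weight(age_days: float, half_life_days: float = 7.0) -> float:
    """Exponential forgetting-curve weight: 1.0 for a brand-new memory,
    0.5 after one half-life, approaching 0 as the memory ages."""
    return 0.5 ** (age_days / half_life_days)

def rank_memories(memories: list[dict], now_day: float) -> list[dict]:
    """Re-rank retrieved memories by similarity scaled by time decay,
    so recent information is prioritized over stale matches."""
    return sorted(
        memories,
        key=lambda m: m["similarity"] * time_decay_weight(now_day - m["created_day"]),
        reverse=True,
    )
```

With a 7-day half-life, a week-old memory scores half as high as a fresh one at equal similarity, so an older but substantially more similar memory can still win.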
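One common way to merge ranked results from separate vector, full-text, and graph retrievers, as the hybrid retrieval bullet describes, is reciprocal rank fusion (RRF). The sketch below is a generic illustration of that technique under the assumption of string document IDs, not PowerMem's actual fusion logic.

```python
def reciprocal_rank_fusion(result_lists: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked ID lists from several retrievers into one ranking.
    Each list contributes 1 / (k + rank) per document, so documents that
    rank well in multiple retrievers float to the top."""
    scores: dict[str, float] = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=lambda d: scores[d], reverse=True)
```

The constant k damps the influence of top ranks so that no single retriever dominates; 60 is the value commonly used in the RRF literature.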

Maintenance & Community

The project is actively developed, with release notes indicating incremental feature additions such as the API server, user profiles, and multimodal support. Support and discussion are primarily handled through GitHub Issues and GitHub Discussions.

Licensing & Compatibility

This project is licensed under the Apache License 2.0, which is generally permissive for commercial use and integration into closed-source applications.

Limitations & Caveats

The README focuses on features and benefits and does not explicitly detail limitations, known bugs, or alpha/beta status. The release history suggests ongoing development and feature expansion.

Health Check

  • Last Commit: 2 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 15
  • Issues (30d): 35
  • Star History: 141 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Travis Fischer (founder of Agentic), and 2 more.

  • Memori by MemoriLabs: LLM memory engine for context-aware AI. Top 1.0%, 12k stars; created 5 months ago, updated 3 days ago.