cass_memory_system by Dicklesworthstone

AI coding agents learn from each other via persistent, unified memory

Created 3 months ago
274 stars

Top 94.4% on SourcePulse

Project Summary

This project addresses knowledge loss in AI coding agents: valuable insights and learned patterns stay trapped within individual sessions and remain isolated across different agents. cass-memory provides a persistent, cross-agent memory system that transforms scattered session history into actionable, confidence-tracked rules, so every agent can learn from every other agent's experience. It gives developers, teams, and power users a form of institutional memory that improves agent efficiency over time.

How It Works

cass-memory implements a three-layer cognitive architecture inspired by how human expertise develops. Episodic Memory, powered by the cass search engine, stores raw session logs as ground truth. These logs are processed into Working Memory as structured session summaries called Diary entries. Finally, the Procedural Memory layer distills those summaries into actionable rules, known as Playbook bullets, whose confidence is tracked and decays unless revalidated. This ensures that knowledge is not only stored but also refined, validated, and surfaced to agents before they start tasks.
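The three layers above can be sketched as a simple data model. This is an illustrative sketch only: the type names, fields, and relationships are assumptions, not the project's actual internal types.

```python
from dataclasses import dataclass, field

@dataclass
class SessionLog:
    """Episodic memory: a raw session transcript, indexed by cass."""
    agent: str          # e.g. "claude-code", "cursor" (illustrative values)
    transcript: str

@dataclass
class DiaryEntry:
    """Working memory: a structured summary distilled from one session."""
    source: SessionLog
    summary: str
    lessons: list[str] = field(default_factory=list)

@dataclass
class PlaybookBullet:
    """Procedural memory: an actionable rule with tracked confidence."""
    rule: str
    confidence: float = 1.0   # decays over time unless revalidated
    evidence: list[DiaryEntry] = field(default_factory=list)

# Each layer references the one below it, so a rule can always be
# traced back through its diary entries to the raw session logs.
log = SessionLog(agent="claude-code", transcript="...full session text...")
entry = DiaryEntry(source=log, summary="Fixed a flaky test", lessons=["pin the seed"])
bullet = PlaybookBullet(rule="Pin random seeds in tests", evidence=[entry])
```

The key design point this sketch captures is traceability: because each layer keeps a reference to the layer beneath it, a playbook rule is never detached from the raw evidence that produced it.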

Quick Start & Requirements

Installation is straightforward via a one-liner script (curl ... | bash), Homebrew (brew install dicklesworthstone/tap/cm), or Scoop (scoop install dicklesworthstone/cm). The cass CLI is a prerequisite for the episodic memory layer. LLM API keys (e.g., ANTHROPIC_API_KEY) are optional, but required for AI-powered reflection and rule extraction. Initial setup involves running cm init to create configuration and a playbook, with starter playbooks available for various languages (cm init --starter typescript).
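Putting the steps above together, a typical first-run session might look like the following. Only the commands named in this section are used; the curl one-liner's script URL is omitted here because it is not given in this summary (see the project README for it).

```shell
# Install via Homebrew (or Scoop on Windows; a curl | bash
# one-liner is also available -- see the README for the URL).
brew install dicklesworthstone/tap/cm

# The cass CLI must be installed separately; it powers the
# episodic memory layer.

# Optional: set an LLM API key to enable AI-powered reflection
# and rule extraction.
export ANTHROPIC_API_KEY=...   # your key here

# Create configuration and an empty playbook...
cm init

# ...or seed it with a language-specific starter playbook.
cm init --starter typescript
```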

Highlighted Details

  • Cross-Agent Learning: Unifies knowledge from diverse agents (Claude Code, Cursor, Codex, etc.) into a single, searchable playbook.
  • Confidence Decay System: Rules automatically lose confidence over time if not revalidated, with a 90-day half-life, preventing stale knowledge.
  • Anti-Pattern Learning: Rules marked as harmful multiple times are automatically inverted into warnings, preventing recurring mistakes.
  • Scientific Validation: New rules undergo an evidence gate, requiring validation against cass history before acceptance.
  • Agent-Native Onboarding: Leverages existing AI agents to analyze sessions and extract rules at no additional LLM cost.
  • Trauma Guard: A safety system that learns from past dangerous command incidents and prevents their recurrence via runtime hooks.
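The confidence decay mentioned above can be modeled with a standard exponential half-life. The 90-day half-life is stated in the README; the exact decay curve below is an assumption for illustration, as is the function name.

```python
import math

HALF_LIFE_DAYS = 90.0  # from the README: 90-day half-life

def decayed_confidence(confidence: float, days_since_validation: float) -> float:
    """Exponential decay with a 90-day half-life (illustrative model).

    A rule's confidence halves every 90 days it goes unrevalidated;
    revalidation would reset days_since_validation to zero.
    """
    return confidence * math.pow(0.5, days_since_validation / HALF_LIFE_DAYS)
```

Under this model, a rule at confidence 0.8 that goes 90 days without revalidation drops to 0.4, and to 0.2 after 180 days, which is how stale knowledge gradually falls out of the playbook.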

Maintenance & Community

The project is maintained by a single author who explicitly states they "do not accept outside contributions" due to bandwidth limitations. While issues can be submitted, pull requests will not be merged directly but may inform the author's own implementation. No community channels like Discord or Slack are listed. A roadmap is provided within the README.

Licensing & Compatibility

The project is released under the MIT License, with an additional "OpenAI/Anthropic Rider" clause. This license generally permits commercial use and linking with closed-source software, though the specifics of the "Rider" are not detailed in the README.

Limitations & Caveats

The project is strictly local-first, with no cloud sync or real-time collaboration features. AI-powered reflection and advanced features depend on configured LLM API keys and incur associated costs. The system advises agents but does not execute rules directly. The author maintains sole control over contributions, limiting community involvement in development.

Health Check

  • Last Commit: 6 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 1
  • Issues (30d): 6
  • Star History: 61 stars in the last 30 days
