claude-cognitive by GMaN1911

AI coding assistant working memory and coordination system

Created 1 week ago

395 stars

Top 73.0% on SourcePulse

Project Summary

This project gives Claude Code persistent working memory through a Context Router and a Pool Coordinator, addressing the statelessness of AI coding sessions in large codebases. It reduces token waste and prevents repeated work, boosting developer productivity and enabling long-running, collaborative AI coding sessions.

How It Works

The Context Router injects relevant files into Claude's context based on attention dynamics and cognitive tiers (HOT, WARM, COLD). Files decay when not mentioned but reactivate on keywords or co-activation with related files. HOT files are fully injected, while WARM files are header-only, leading to substantial token usage reduction. The Pool Coordinator shares state across multiple Claude Code instances, facilitating long-running sessions and preventing duplicate efforts through automatic or manual coordination mechanisms.
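
As a rough mental model of the tier logic only (not the project's actual implementation), the sketch below tracks a per-file attention score that decays each turn and is boosted when the file's keywords appear in a message; the thresholds, decay factor, and keyword sets are made-up values for illustration.

    # Toy model of attention-based context tiers (HOT/WARM/COLD).
    # Thresholds, decay factor, and keywords are illustrative assumptions,
    # not values taken from context-router-v2.py.
    from dataclasses import dataclass

    HOT_THRESHOLD = 0.6    # HOT files would be fully injected
    WARM_THRESHOLD = 0.3   # WARM files would be header-only

    @dataclass
    class TrackedFile:
        path: str
        keywords: set[str]
        attention: float = 0.0

        def tier(self) -> str:
            if self.attention >= HOT_THRESHOLD:
                return "HOT"
            if self.attention >= WARM_THRESHOLD:
                return "WARM"
            return "COLD"

    def update_attention(files: list[TrackedFile], message: str, decay: float = 0.8) -> None:
        """Decay every file's attention, then boost files whose keywords appear in the message."""
        words = set(message.lower().split())
        for f in files:
            f.attention *= decay                   # decay when not mentioned
            if f.keywords & words:                 # reactivate on keyword match
                f.attention = min(1.0, f.attention + 0.5)

    # One turn: mentioning the router keeps that file HOT while the other cools to WARM.
    files = [
        TrackedFile("src/router.py", {"router", "context"}, attention=0.7),
        TrackedFile("src/pool.py", {"pool", "coordinator"}, attention=0.7),
    ]
    update_attention(files, "tweak the context router decay")
    print({f.path: f.tier() for f in files})

Co-activation (boosting files related to a reactivated file) would layer on top of the same score update, but is omitted here for brevity.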

Quick Start & Requirements

Installation involves cloning the repository, copying the scripts to ~/.claude/scripts/, and merging the provided hook configurations. Project setup then follows these steps:

  • Initialize each project by creating a local .claude directory from the templates and editing the project-specific markdown files.
  • Set a unique CLAUDE_INSTANCE environment variable per terminal, or globally in .bashrc.
  • Verify the setup with the provided Python scripts, which check context injection and pool activity (a rough sketch of this kind of check follows below).
  • Customize the keywords in context-router-v2.py for the best token savings.

No specific hardware or advanced software prerequisites are listed beyond standard Python and bash environments. Official guides include SETUP.md and CUSTOMIZATION.md.
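
The repository ships its own verification scripts; as a minimal sketch of what such a check involves, the snippet below reads CLAUDE_INSTANCE and looks for a pool-state file. The ~/.claude/pool/state.json path and its JSON layout are assumptions for illustration, not the project's actual locations or formats.

    # Minimal setup-check sketch, assuming a hypothetical pool-state file.
    # The path and JSON layout are illustrative; the repo's own scripts
    # (see SETUP.md) define the real locations and formats.
    import json
    import os
    from pathlib import Path

    instance = os.environ.get("CLAUDE_INSTANCE")
    if not instance:
        raise SystemExit("CLAUDE_INSTANCE is not set; export it per terminal or in .bashrc")

    pool_state = Path.home() / ".claude" / "pool" / "state.json"  # assumed location
    if pool_state.exists():
        state = json.loads(pool_state.read_text())
        print(f"{instance}: pool lists {len(state.get('instances', []))} instance(s)")
    else:
        print(f"{instance}: no pool state found yet (the coordinator may not have run)")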

Highlighted Details

  • Reports average token savings of 64-95%, with context in the project's examples dropping from 120K to 25K characters.
  • Enhances developer experience: immediate productivity, zero hallucinated integrations, and no duplicate work across 8+ concurrent instances.
  • Validated on a 1+ million line codebase, a 4-node distributed architecture, 8 concurrent Claude Code instances, and multi-day sessions.
  • v1.1+ adds attention history tracking, logging each file's state (HOT/WARM/COLD) per turn so development trajectories and attention behavior can be analyzed (a hypothetical log format is sketched below).
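
As an illustration of what per-turn attention history could look like, the sketch below appends one JSON line per turn recording each file's tier; the field names and output file are hypothetical, not the format v1.1 actually writes.

    # Hypothetical per-turn attention-history record; field names and the
    # output path are illustrative, not claude-cognitive's actual format.
    import json
    import time
    from pathlib import Path

    def log_turn(turn: int, tiers: dict[str, str], history_file: Path) -> None:
        """Append one JSON line capturing each file's tier (HOT/WARM/COLD) for this turn."""
        record = {"turn": turn, "timestamp": time.time(), "tiers": tiers}
        with history_file.open("a") as fh:
            fh.write(json.dumps(record) + "\n")

    log_turn(42, {"src/router.py": "HOT", "src/pool.py": "WARM"}, Path("attention-history.jsonl"))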

Maintenance & Community

The project is primarily maintained by Garret Sutherland of MirrorEthic LLC. The roadmap outlines ongoing development, including graph visualization, collision detection, and semantic relevance features. Community contributions via issues and PRs are welcomed. Enterprise services, custom implementations, and training are available by contacting gsutherland@mirrorethic.com.

Licensing & Compatibility

The project is released under the permissive MIT License, which allows use, modification, and distribution, including within closed-source commercial applications, provided the license and copyright notice are retained.

Limitations & Caveats

Current context routing relies on keyword matching; semantic relevance is planned for future releases. The system is specifically designed for Claude Code, limiting its direct applicability to other LLM assistants without modification. Advanced features like collision detection are still under development.

Health Check

  • Last Commit: 2 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 3
  • Issues (30d): 3
  • Star History: 397 stars in the last 12 days
