claude-context-os  by Arkya-AI

Claude OS for persistent, reliable multi-session AI work

Created 2 weeks ago

281 stars

Top 92.9% on SourcePulse

Project Summary

This project addresses context loss and unreliability in multi-session work with Claude, a persistent problem for long-running projects. It provides a research-backed "operating system" for Claude that approximates persistent memory through token-efficient session handoffs. It targets consultants, developers, researchers, sales professionals, and no-code builders who need dependable continuity across days or weeks of work, so that Claude retains crucial details and decision rationale between sessions.

How It Works

Claude Context OS v4 is built on recent LLM research. The system prompt has been cut to 47 lines and 7 core rules, down from v3's 15+ rules, because research indicates that LLM instruction compliance degrades beyond roughly 5-10 simultaneous constraints. Explanatory prose has been removed entirely from the main OS file (CLAUDE.md), since studies show it competes with, and degrades performance on, the target task. In its place are plain imperatives and a "just-in-time" context loading strategy: Claude loads only the documentation needed for the current task type, conserving tokens and attention. Session handoffs are managed through an explicit index that lists exactly which files the next session should load, so prior context never needs to be re-processed in full.
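As a rough sketch, the handoff index described above could look like the following. The filename, entries, and wording here are hypothetical illustrations, not the repo's actual format:

```shell
# Sketch of an explicit handoff index (filename and entries are assumed,
# not taken from the repo). The idea: the next session reads this manifest
# and loads only the listed files instead of replaying full prior context.
mkdir -p docs/context
cat > docs/context/handoff-index.md <<'EOF'
# Next-session handoff
Load only these files:
- docs/context/decisions.md     (decision rationale to date)
- docs/context/current-task.md  (task definition in progress)
Skip everything else; prior transcripts are not needed.
EOF
```

The manifest itself stays small, so the cost of reading it at session start is a few dozen tokens rather than a full history replay.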

Quick Start & Requirements

  • Primary install / run steps: copy CLAUDE.md to your project root; copy the templates/ and docs/context/ directories; copy .claude/commands/ for slash commands. Then edit the Identity section in CLAUDE.md for your workflow, populate the docs/context/ files for your domain, and start a Claude Code session.
  • Non-default prerequisites and dependencies: None explicitly mentioned beyond the Claude Code environment.
  • Estimated setup time or resource footprint: Minimal setup time, focused on configuration and populating domain-specific context files.
  • Links: The repository's README serves as the primary documentation.
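The copy steps above can be sketched as shell commands. This assumes the repo has been cloned to ./claude-context-os (directory name assumed) and that you run it from your project root; the mkdir/touch lines below merely stand in for the cloned repo so the sketch is self-contained:

```shell
# Stand-in for the cloned repo (replace with an actual `git clone` in practice).
SRC=claude-context-os
mkdir -p "$SRC/templates" "$SRC/docs/context" "$SRC/.claude/commands"
touch "$SRC/CLAUDE.md"

# The Quick Start copy steps from the summary:
cp "$SRC/CLAUDE.md" .                         # OS file to project root
cp -r "$SRC/templates" .                      # templates/ directory
mkdir -p docs && cp -r "$SRC/docs/context" docs/   # domain context files
mkdir -p .claude && cp -r "$SRC/.claude/commands" .claude/  # slash commands
# Then edit the Identity section in CLAUDE.md and populate docs/context/.
```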

Highlighted Details

  • Reduced complexity: System prompt is now 47 lines and 7 rules, down from 327 lines and 15+ rules in v3.
  • Research-driven design: Every choice maps to findings on LLM working memory, prompt interference, and attention curves.
  • Just-in-time context loading: Dynamically loads relevant documentation based on work type (e.g., proposals, content strategy, workshops).
  • Structured Task Definition template: Inspired by atomic task structures, defining context, action, verification, and completion criteria.
  • Handoff index layer: Explicit manifest for next session's required files, enhancing continuity.
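To illustrate the structured Task Definition template, here is a hypothetical example using the four field names from the summary (context, action, verification, completion); the file path, task, and exact layout are assumptions, not the repo's actual template:

```shell
# Hypothetical filled-in Task Definition (field names from the summary;
# the path and formatting are assumed for illustration).
mkdir -p docs/context
cat > docs/context/current-task.md <<'EOF'
## Task: Draft workshop outline
Context: client brief in docs/context/client-brief.md
Action: produce a 90-minute workshop agenda with three exercises
Verification: agenda covers all goals listed in the brief
Completion: outline saved to deliverables/workshop-outline.md
EOF
```

Keeping each task atomic, with explicit verification and completion criteria, is what lets a fresh session pick up work without re-deriving intent.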

Maintenance & Community

Contributions via PRs and issues are welcome. The author expresses interest in seeing forks adapted for specific workflows such as academic research, legal work, or creative writing. No community channels (Discord/Slack) or roadmap links are provided in the README.

Licensing & Compatibility

  • License type: MIT License.
  • Compatibility notes: The MIT license is permissive and generally compatible with commercial use and linking within closed-source projects.

Limitations & Caveats

The system mitigates, rather than eliminates, Claude's context loss across sessions. While v4 is presented as a research-driven improvement, its effectiveness depends on the user correctly configuring and populating the domain-specific files in docs/context/. The README does not list unsupported platforms or known bugs, but the underlying challenge of LLM context management remains.

Health Check
Last Commit

1 week ago

Responsiveness

Inactive

Pull Requests (30d)
1
Issues (30d)
1
Star History
282 stars in the last 16 days
