Interpreted-Context-Methodology by RinDig

Filesystem structure as agent architecture

Created 1 month ago
301 stars

Top 88.5% on SourcePulse

Project Summary

This project takes a novel approach to AI agent orchestration: filesystem structure and plain text files replace complex framework-level coordination. It targets users who need simpler, more transparent, human-reviewable sequential workflows, providing a "glass-box" system in which the filesystem itself dictates agent actions and state, reducing complexity and improving auditability.

How It Works

ICM replaces multi-agent frameworks with a structured directory hierarchy where numbered folders represent sequential stages and Markdown files contain prompts and context. A single AI agent reads specific files based on its current stage, guided by layered context loading (System, Global Context, Stage Context, References, Working Artifacts). This approach mirrors Unix pipeline principles, using plain text as a universal interface for maximum inspectability and editability. Each stage defines an explicit contract (Inputs, Process, Outputs) in its CONTEXT.md, ensuring clear data flow and enabling human intervention at any point.
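The layered context loading described above can be sketched in a few lines. The file and folder names here (`GLOBAL.md`, `01_outline/`, a `refs/` subfolder) are illustrative assumptions, not taken from the actual repository:

```python
from pathlib import Path

def load_stage_context(workspace: Path, stage: int) -> str:
    """Concatenate the context layers for one numbered stage.

    Hypothetical layout: a workspace-level GLOBAL.md, numbered stage
    folders such as 01_outline/ each holding a CONTEXT.md, and optional
    reference files under refs/.
    """
    layers = []
    global_md = workspace / "GLOBAL.md"                      # Global Context layer
    if global_md.exists():
        layers.append(global_md.read_text())
    stage_dir = sorted(workspace.glob(f"{stage:02d}_*"))[0]  # e.g. 01_outline/
    layers.append((stage_dir / "CONTEXT.md").read_text())    # Stage Context layer
    for ref in sorted(stage_dir.glob("refs/*.md")):          # References layer
        layers.append(ref.read_text())
    return "\n\n---\n\n".join(layers)                        # final prompt context
```

Because each stage loads only its own layers, the agent sees a small, scoped prompt rather than the whole workspace.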

Quick Start & Requirements

To begin, clone the repository. Navigate to a specific workspace (e.g., workspaces/script-to-animation) and run the setup command within the workspace directory. This initiates an onboarding questionnaire that configures the workspace. Subsequent stages are run sequentially, with outputs from one stage serving as inputs for the next. No specific hardware or advanced software prerequisites beyond standard Python environments are mentioned, implying broad compatibility. Links to available workspaces and a workspace-builder are provided within the repository.
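The sequential stage execution described above can be approximated as a loop over numbered folders. The folder/file convention (`01_*/CONTEXT.md`, an `OUTPUT.md` per stage) and the `run_agent` callable are assumptions for illustration:

```python
from pathlib import Path

def run_pipeline(workspace: Path, run_agent) -> None:
    """Run numbered stages in order, feeding each stage's output forward.

    run_agent is a placeholder for whatever invokes the model with a
    stage prompt plus the previous stage's output.
    """
    stages = sorted(
        d for d in workspace.iterdir() if d.is_dir() and d.name[:2].isdigit()
    )
    previous_output = ""
    for stage in stages:
        prompt = (stage / "CONTEXT.md").read_text()
        result = run_agent(prompt, previous_output)  # prior output is this stage's input
        (stage / "OUTPUT.md").write_text(result)     # editable artifact on disk
        previous_output = result
```

Because every intermediate result lands on disk as Markdown, the loop can be stopped and resumed at any stage boundary.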

Highlighted Details

  • Filesystem as Architecture: Folder structure dictates agent execution order and context scoping, eliminating the need for explicit orchestration code.
  • Plain Text Interface: All prompts, context, and intermediate artifacts are plain Markdown files, ensuring universal accessibility and editability.
  • Layered Context Loading: Agents load only necessary context (2k-8k tokens typical), optimizing model performance compared to monolithic approaches.
  • Human-in-the-Loop: Every intermediate output is an editable file, allowing human review and modification before the next stage proceeds.
  • "Configure the Factory, Not the Product": Initial setup (questionnaire) defines system-level preferences, ensuring consistent output generation across runs.

Maintenance & Community

The README focuses on the technical implementation and contribution model (new workspaces, bug fixes). Specific details regarding active maintainers, community channels (like Discord/Slack), sponsorships, or a public roadmap are not provided. Contributions are primarily through adding new workspaces or improving the core builder.

Licensing & Compatibility

The project is released under the MIT License, which is permissive and generally compatible with commercial use and closed-source linking, allowing broad adoption without significant licensing restrictions.

Limitations & Caveats

ICM is not designed for real-time, dynamic multi-agent collaboration requiring tight communication loops, as file-based handoffs are too slow. It is also ill-suited for high-concurrency systems needing robust queuing and state isolation, being fundamentally local-first. Complex, automated branching logic mid-pipeline is awkward to implement within this structure, though human-driven branching between stages is supported.

Health Check

  • Last Commit: 3 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 2
  • Issues (30d): 1
  • Star History: 200 stars in the last 30 days
