Fat-Cat: LLM-native agent framework with document-centric context management
Top 69.7% on SourcePulse
Fat-Cat is an LLM-native operating system framework designed to overcome two critical challenges in current agent development: context management and control-flow fragility. It targets developers building sophisticated LLM agents by offering a robust, debuggable, and evolving system that moves beyond traditional, state-heavy JSON-based approaches. Fat-Cat aims to make agent "thinking" transparent and persistent, enabling agents to learn and adapt over time.
How It Works
Fat-Cat re-imagines LLM agents by treating the LLM as the CPU, documents as memory (RAM), and tools as peripherals, with the framework acting as the kernel. It replaces fragmented JSON state management with Markdown documents as the global context, where each stage's output becomes a revision; this document-centric approach keeps the agent's reasoning human-readable and debuggable. The core operates through a four-stage metacognitive loop: Stage 1 analyzes intent and constraints; Stage 2 retrieves or dynamically learns strategies (triggering internet searches for new methodologies when needed); Stage 3 decomposes the chosen strategy into precise steps; and Stage 4 executes those steps. A Watcher Agent monitors execution for errors and deviations, providing runtime reflection and intervention capabilities.
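The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not Fat-Cat's actual API: the names Document, watcher, and metacognitive_loop are hypothetical, and the LLM is replaced by a toy callable.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Markdown document acting as the agent's global context (its 'RAM')."""
    revisions: list = field(default_factory=list)

    def revise(self, stage: str, content: str) -> None:
        # Each stage's output is appended as a new Markdown revision,
        # keeping the full "thinking" history readable and debuggable.
        self.revisions.append(f"## {stage}\n{content}")

    def render(self) -> str:
        return "\n\n".join(self.revisions)

def watcher(stage: str, output: str) -> None:
    # Minimal stand-in for the Watcher Agent: flag empty or
    # error-marked output so the run can be inspected or retried.
    if not output or "ERROR" in output:
        raise RuntimeError(f"Watcher: deviation detected in {stage}")

def metacognitive_loop(task: str, llm) -> Document:
    doc = Document()
    stages = [
        ("Stage 1: Intent", f"Analyze intent and constraints of: {task}"),
        ("Stage 2: Strategy", "Retrieve or learn a strategy"),
        ("Stage 3: Decompose", "Break the strategy into precise steps"),
        ("Stage 4: Execute", "Carry out each step with tools"),
    ]
    for name, prompt in stages:
        # The LLM is the "CPU": each call reads the whole document as context.
        output = llm(doc.render(), prompt)
        watcher(name, output)
        doc.revise(name, output)
    return doc

# Toy stand-in LLM for demonstration.
doc = metacognitive_loop("summarize a repo", lambda ctx, p: f"done: {p}")
print(len(doc.revisions))  # → 4
```

The key design point the sketch captures is that state lives in one append-only document rather than in scattered JSON blobs, so every stage's output is inspectable after the fact.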
Quick Start & Requirements
Clone the repository (git clone https://github.com/your-repo/fat-cat.git, a placeholder URL), navigate into the directory, and run python scripts/install_full_pipeline_deps.py to install the dependencies listed in requirements-full.txt. LLM API keys must be configured in config/model_config.py. The framework is optimized for, and recommends, long-context models (32k+ context, e.g., Kimi-K2).
Highlighted Details
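Collected as a command sequence (the repository URL is the placeholder from the README, and the exact contents of config/model_config.py are not documented):

```shell
# Quick start; replace the placeholder URL with the real repository.
git clone https://github.com/your-repo/fat-cat.git
cd fat-cat

# Installs the dependencies listed in requirements-full.txt
python scripts/install_full_pipeline_deps.py

# Before running, set your LLM API keys in config/model_config.py
```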
Maintenance & Community
No specific details regarding contributors, sponsorships, community channels (like Discord/Slack), or roadmaps were provided in the README.
Licensing & Compatibility
No license file or licensing statement was found in the README.
Limitations & Caveats
The framework's effectiveness depends heavily on the capabilities of the underlying LLM, particularly its support for long context windows. The README uses a placeholder URL for the GitHub repository, and licensing information is absent; both are potential adoption blockers.
Updated 1 week ago · Inactive