temm1e by temm1e-labs

Autonomous AI agent runtime for production systems

Created 3 weeks ago

376 stars

Top 75.5% on SourcePulse

Project Summary
Project Summary

An autonomous AI agent runtime built in Rust, SkyClaw addresses the limitations of traditional LLM frameworks by treating LLMs as finite cognitive entities rather than simple text generators. It offers production-grade resilience, procedural memory, and resource-aware context management, enabling agents to be deployed once and then operate indefinitely, learning and self-healing as they run. The project targets developers and power users who want robust, efficient, continuously improving AI agents.

How It Works

SkyClaw's core innovation is the "Finite Brain Model," which treats the LLM's context window as limited working memory. Every resource, including tools and memory entries, has a pre-calculated token cost, ensuring predictable resource consumption. A dynamic "Resource Budget Dashboard" is injected into the system prompt so the LLM can track its remaining cognitive capacity. When a complex task exceeds the budget, SkyClaw degrades gracefully, scaling output down to an outline or a catalog listing rather than crashing.

Procedural memory is managed via "Blueprints": structured, replayable execution graphs that capture exact commands and decision points, enabling agents to learn and accurately repeat complex procedures. Blueprint matching avoids extra LLM calls by integrating with the existing message classifier, using grounded vocabularies and SQL lookups for efficiency.
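The budget-and-degrade idea above can be sketched in a few lines of Rust. All type names, detail levels, and per-item costs here are illustrative assumptions, not SkyClaw's actual API:

```rust
/// A resource (tool or memory entry) with a pre-calculated token cost.
struct Resource {
    name: &'static str,
    token_cost: u32,
}

/// Detail levels used for graceful degradation (hypothetical tiers).
#[derive(Debug, PartialEq)]
enum Detail {
    Full,    // include every resource verbatim
    Outline, // summarize each resource in one line
    Catalog, // list resource names only
}

struct Budget {
    capacity: u32, // context-window tokens reserved for resources
}

impl Budget {
    /// Pick the richest detail level that still fits the budget.
    fn plan(&self, resources: &[Resource]) -> Detail {
        let full: u32 = resources.iter().map(|r| r.token_cost).sum();
        // Assumed flat per-item costs for the degraded forms.
        let outline = resources.len() as u32 * 20;
        let catalog = resources.len() as u32 * 5;
        if full <= self.capacity {
            Detail::Full
        } else if outline <= self.capacity {
            Detail::Outline
        } else if catalog <= self.capacity {
            Detail::Catalog
        } else {
            Detail::Catalog // never crash: fall back to the cheapest form
        }
    }
}

fn main() {
    let tools = [
        Resource { name: "browser", token_cost: 900 },
        Resource { name: "shell", token_cost: 400 },
    ];
    // A roomy budget keeps full detail; a tight one degrades to an outline.
    println!("{:?}", Budget { capacity: 2000 }.plan(&tools)); // Full
    println!("{:?}", Budget { capacity: 100 }.plan(&tools)); // Outline
}
```

The key property is that the planner always returns some usable detail level, mirroring the "scale down rather than crash" behavior described above.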

Quick Start & Requirements

  • Install/run: clone the repository, build with cargo build --release, set TELEGRAM_BOT_TOKEN, run skyclaw auth login (for Codex OAuth), then run skyclaw start.
  • Prerequisites: Rust 1.82+, Chrome/Chromium (for the browser tool), and a Telegram bot token.
  • Setup time: roughly 30 seconds if Rust and a Telegram bot token are already available.
  • Links: Repository: https://github.com/nagisanzenin/skyclaw.git. Design docs are available within the docs/design/ directory.
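The steps above can be run as a shell session. The commands follow the list directly; the token value and the built binary's path are placeholders:

```shell
# Clone and build (requires Rust 1.82+)
git clone https://github.com/nagisanzenin/skyclaw.git
cd skyclaw
cargo build --release

# Telegram bot token (placeholder value)
export TELEGRAM_BOT_TOKEN="123456:replace-with-your-token"

# Authenticate via Codex OAuth, then start the agent
./target/release/skyclaw auth login
./target/release/skyclaw start
```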

Highlighted Details

  • Extreme Resource Efficiency: Achieves 15 MB idle RAM, 31ms cold start, and significantly less memory usage compared to alternatives like OpenClaw.
  • Robust Resilience: Features a 4-layer panic recovery system, zero panic paths in release builds, dead worker detection, and conversation persistence across restarts.
  • Codex OAuth Integration: Allows users to leverage their ChatGPT Plus/Pro subscription as an AI provider via OAuth PKCE, eliminating the need for separate API keys.
  • Self-Extending Tool System: Agents can discover, install, and utilize new tools (MCP servers) at runtime, dynamically expanding their capabilities.
  • Single-Call Classification: Optimizes LLM usage by performing message classification and response generation within a single call, reducing latency and cost.
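The single-call classification bullet above can be sketched as follows: the prompt asks the model to emit both a class tag and the user-facing reply in one completion, and the runtime splits them afterwards. The CLASS: tag format and function name here are assumptions for illustration, not SkyClaw's actual wire format:

```rust
/// Split a single LLM completion into (class, reply).
/// Expects the first line to carry a "CLASS:" tag (assumed format).
fn split_single_call(completion: &str) -> Option<(String, String)> {
    let mut parts = completion.splitn(2, '\n');
    let class = parts.next()?.strip_prefix("CLASS:")?.trim().to_string();
    let reply = parts.next().unwrap_or("").trim().to_string();
    Some((class, reply))
}

fn main() {
    let completion = "CLASS: blueprint_match\nReplaying the deploy procedure now.";
    if let Some((class, reply)) = split_single_call(completion) {
        // One completion yields both the classification and the response,
        // saving the latency and cost of a second LLM round trip.
        println!("class = {class}, reply = {reply}");
    }
}
```

Returning `None` when the tag is missing lets the runtime fall back to treating the whole completion as a plain reply.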

Maintenance & Community

The project is under active development, with frequent releases in March 2026. Community interaction and support are available via a Discord server.

Licensing & Compatibility

The project is licensed under the MIT license, which is permissive for commercial use and integration into closed-source projects.

Limitations & Caveats

The stealth browser tool requires Chrome or Chromium to be installed. Codex OAuth is recommended for full functionality, though alternative API-key providers are supported. The rapid release cycle means behavior and configuration may change frequently between versions.

Health Check

  • Last Commit: 14 hours ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 5
  • Issues (30d): 18
  • Star History: 379 stars in the last 21 days
