Cognitive OS for autonomous AI agents
This project provides a cognitive operating system designed to let AI agents autonomously interact with local system capabilities. It targets builders, researchers, and indie hackers who want their agents to execute real-world tasks, with complex system interactions abstracted behind a single unified protocol.
How It Works
llmbasedos exposes system capabilities (files, mail, APIs) via a Model Context Protocol (MCP), a JSON-RPC layer served over UNIX sockets and WebSockets. The MCP acts as a unified abstraction: agents stay LLM-agnostic across model backends (OpenAI, Gemini, LLaMA.cpp) while reaching local services such as file system access, email handling, and rclone synchronization. Agent workflows are plain Python scripts built around an mcp_call() function, an approach the project favors over its legacy YAML-based workflow engine for flexibility and debuggability.
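To make that concrete, below is a minimal, hypothetical sketch of what an agent-side mcp_call() helper could look like, assuming a JSON-RPC 2.0 request sent over a UNIX socket with newline-delimited framing. The socket path (/run/mcp.sock), the method name (fs.list), and the framing are illustrative assumptions, not the project's documented wire format.

```python
import itertools
import json
import socket

_request_ids = itertools.count(1)

def mcp_call(method: str, params: dict | None = None,
             socket_path: str = "/run/mcp.sock") -> dict:
    """Send one JSON-RPC 2.0 request over a UNIX socket and return the result."""
    request = {
        "jsonrpc": "2.0",
        "id": next(_request_ids),
        "method": method,
        "params": params or {},
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(socket_path)
        # Assumed framing: one JSON object per line.
        sock.sendall(json.dumps(request).encode() + b"\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
            if data.endswith(b"\n"):
                break
    response = json.loads(b"".join(chunks))
    if "error" in response:
        raise RuntimeError(f"MCP error: {response['error']}")
    return response.get("result", {})

if __name__ == "__main__":
    # Hypothetical workflow step: ask the file-system service for a listing.
    print(mcp_call("fs.list", {"path": "~/docs"}))
```

A workflow script is then just ordinary Python calling this helper in sequence, which is what makes it easier to debug than a declarative YAML pipeline.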
Quick Start & Requirements
Place the source in llmbasedos_src/, add .env, lic.key, mail_accounts.yaml, and your user files, then run docker compose build followed by docker compose up.
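A minimal sketch of the setup, assuming the configuration files sit at the top of llmbasedos_src/ (the exact layout and file contents are not spelled out here):

```bash
# Assumed layout:
# llmbasedos_src/
# ├── .env                 # runtime configuration
# ├── lic.key              # license key
# ├── mail_accounts.yaml   # mail account settings
# └── ...                  # user files

cd llmbasedos_src/
docker compose build
docker compose up
```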
Highlighted Details
Maintenance & Community
The project welcomes stars, forks, PRs, and experiments. Community engagement channels are not explicitly listed.
Licensing & Compatibility
The README does not explicitly state a license. The mention of "license tiers" and lic.key
suggests a potential commercial or tiered licensing model, which may have restrictions on commercial use or closed-source linking.
Limitations & Caveats
The project is described as a "cognitive operating system" with a roadmap towards intention-based execution, implying it is still under active development and may not be production-ready for all use cases. The legacy YAML workflow engine is slated for deprecation.
Last update: 4 weeks ago. Activity status: Inactive.