open-jarvis: AI stack for local-first personal agents
Top 49.7% on SourcePulse
Summary
OpenJarvis addresses the growing reliance on cloud APIs for personal AI agents by providing a local-first software stack. It targets engineers, researchers, and power users who want to build or deploy AI agents that prioritize on-device processing, reducing cloud dependency and improving intelligence efficiency. The framework enables agents to run locally by default, calling the cloud only when necessary.
How It Works
This project is an opinionated framework built around three core ideas: shared primitives for on-device agents; evaluations that treat energy, FLOPs, latency, and cost as first-class constraints; and a learning loop that refines models with local trace data. Its local-first approach leverages advances in on-device language models, aiming for practical, efficient, and private personal AI.
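The "local by default, cloud only when necessary" pattern described above can be sketched as a simple confidence-gated router. This is a hypothetical illustration, not OpenJarvis's actual API; the `Completion` type, `route` function, threshold, and toy backends are all assumptions made for the example.

```python
# Hypothetical sketch of local-first routing: answer on-device by default,
# escalate to a cloud model only when the local model is not confident.
# None of these names come from OpenJarvis itself.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Completion:
    text: str
    confidence: float  # local model's self-reported confidence in [0, 1]


def route(query: str,
          local: Callable[[str], Completion],
          cloud: Callable[[str], str],
          threshold: float = 0.7) -> tuple[str, str]:
    """Return (answer, backend). The local model runs first; cloud is fallback."""
    result = local(query)
    if result.confidence >= threshold:
        return result.text, "local"
    return cloud(query), "cloud"


# Toy stand-ins for an on-device model and a cloud API: the fake local model
# is confident on short queries and unsure on long ones.
local_model = lambda q: Completion(f"local:{q}", 0.9 if len(q) < 20 else 0.3)
cloud_model = lambda q: f"cloud:{q}"

print(route("2+2?", local_model, cloud_model))                      # stays local
print(route("summarize this long document carefully",
            local_model, cloud_model))                              # falls back to cloud
```

A real router would also weigh the energy, latency, and cost constraints the evaluations above treat as first-class, rather than confidence alone.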
Quick Start & Requirements
Primary installation involves cloning the repository (git clone https://github.com/open-jarvis/OpenJarvis.git), entering the directory (cd OpenJarvis), and synchronizing dependencies (uv sync). A local inference backend such as Ollama, vLLM, SGLang, or llama.cpp is required; non-default prerequisites include Python 3.10+ and a Rust toolchain for development. The quick start runs uv run jarvis init to auto-detect hardware and configure the environment, then installs and starts an inference server such as Ollama (https://ollama.com), pulls a model, and runs queries. uv run jarvis doctor can diagnose setup issues.
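The steps above, collected as one command sequence. The clone, sync, init, and doctor commands come from the project's quick start; the specific model name passed to ollama pull is not stated in the source and is only an illustrative placeholder.

```shell
# Clone OpenJarvis and synchronize dependencies (requires Python 3.10+;
# a Rust toolchain is additionally needed for development)
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
uv sync

# Auto-detect hardware and configure the environment
uv run jarvis init

# Install and start a local inference backend, e.g. Ollama (https://ollama.com),
# then pull a model -- "llama3.2" is an example name, not one the project specifies
ollama pull llama3.2

# If anything fails, diagnose the setup
uv run jarvis doctor
```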
Highlighted Details
OpenJarvis is part of the "Intelligence Per Watt" research initiative, which found that local language models already handle 88.7% of single-turn chat and reasoning queries. The project aims to make local AI practical, with intelligence efficiency having improved 5.3x from 2023 to 2025. It serves as both a research platform and a production foundation for local AI, akin to PyTorch.
Maintenance & Community
The project is developed at Hazy Research and the Scaling Intelligence Lab at Stanford SAIL. Notable sponsors include Laude Institute, Stanford Marlowe, Google Cloud Platform, Lambda Labs, Ollama, IBM Research, and Stanford HAI. No specific community channels (e.g., Discord, Slack) are mentioned in the provided text.
Licensing & Compatibility
The project is licensed under the Apache 2.0 license. This license is generally permissive and compatible with commercial use and closed-source linking.
Limitations & Caveats
No explicit limitations, alpha status, or known bugs are detailed in the provided README content. The project is presented as a research platform and a production foundation.