idosal/AgentLLM: Browser-native autonomous agent PoC using an open-source LLM
Top 67.5% on SourcePulse
AgentLLM is a proof-of-concept for browser-native autonomous agents, targeting researchers and developers interested in on-device LLM capabilities. It demonstrates that embedded LLMs can handle complex goal-oriented tasks with acceptable performance, offering a privacy-preserving and cost-effective alternative to server-based agents.
How It Works
AgentLLM leverages WebLLM and WebGPU to run LLM inference directly in the browser, using the GPU for significant performance gains over CPU-based methods. It modifies the AgentGPT project, replacing ChatGPT with WizardLM and altering the prompt mechanism. This lets agents pursue arbitrary goals by generating and executing tasks in a loop, without external tools, reducing complexity and providing a user-friendly GUI for rapid prototyping.
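The task loop described above can be sketched as follows. This is a minimal illustration, not AgentLLM's actual code: the `complete` function stands in for WebLLM's in-browser inference, and the prompts and loop bound are invented for the example.

```typescript
// Sketch of a plan-act agent loop with the LLM stubbed out.
// In AgentLLM, `complete` would call WebLLM inference running on the GPU
// via WebGPU; here it is a placeholder so the control flow runs anywhere.

type Completion = (prompt: string) => Promise<string>;

async function runAgent(
  goal: string,
  complete: Completion,
  maxLoops = 3
): Promise<string[]> {
  const done: string[] = [];
  // Ask the model for an initial task list, one task per line.
  const plan = await complete(`Goal: ${goal}\nList the tasks:`);
  const tasks = plan.split("\n").filter(Boolean);
  for (let i = 0; i < maxLoops && tasks.length > 0; i++) {
    const task = tasks.shift()!;
    // "Execute" the task by asking the model for its result.
    const result = await complete(`Goal: ${goal}\nTask: ${task}\nResult:`);
    done.push(`${task} -> ${result}`);
    // A fuller agent would also ask the model for follow-up tasks here
    // and push them onto `tasks` (omitted for brevity).
  }
  return done;
}

// Usage with a trivial stub model:
const stub: Completion = async (p) =>
  p.endsWith("List the tasks:") ? "research\nsummarize" : "ok";

runAgent("demo goal", stub).then((log) => console.log(log));
```

The key property this sketch shares with AgentLLM is that the same completion function drives both planning and execution, so swapping the stub for an in-browser model changes no control flow.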
Quick Start & Requirements
- Hosted demo: launch the browser with --enable-dawn-features=disable_robustness for performance, then navigate to the AgentLLM web interface.
- Docker: run ./setup.sh --docker or ./setup.sh --docker-compose.
- Manual: run npm install, create a .env file (with NEXTAUTH_SECRET, NEXTAUTH_URL, and OPENAI_API_KEY), optionally run ./prisma/useSqlite.sh, then npx prisma db push and npm run dev.
- Local script: run ./setup.sh --local, add your API key, and open the app in a browser.
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
This project is a proof-of-concept utilizing experimental technologies and is not production-ready. Performance may vary significantly based on device capabilities, with lower-tier devices potentially unable to run the demo. An OpenAI API key is required for setup, despite the focus on browser-native LLMs.
The repository was last updated about 2 years ago and is inactive.