In-browser chat app for private AI conversations
Top 42.4% on SourcePulse
WebLLM Chat provides a private, server-free AI chat experience by running large language models (LLMs) directly in the user's browser using WebGPU. It targets users seeking enhanced privacy, offline accessibility, and the ability to interact with AI models without cloud dependencies, offering a user-friendly interface with features like markdown support and vision model integration.
How It Works
The project leverages WebLLM, a framework that enables LLMs to run natively in web browsers via WebGPU acceleration. This approach eliminates the need for server-side infrastructure, ensuring all data processing occurs locally. It also supports custom models hosted via MLC-LLM's REST API, offering flexibility for advanced users.
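As a rough sketch of what in-browser inference looks like with the WebLLM library (assumptions: a WebGPU-capable browser, and an illustrative model ID — check WebLLM's current model list before using it):

```typescript
// Minimal sketch of in-browser inference with WebLLM.
// Assumes a WebGPU-capable browser; the model ID below is illustrative.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function chat(): Promise<void> {
  // Fetches model weights into the browser and prepares them for WebGPU;
  // all subsequent inference runs locally, with no server round-trips.
  const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC", {
    initProgressCallback: (report) => console.log(report.text),
  });

  // WebLLM exposes an OpenAI-style chat completions interface.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(reply.choices[0]?.message.content);
}

chat();
```

Because the engine speaks the OpenAI chat-completions shape, swapping between a local in-browser model and a custom model served over MLC-LLM's REST API is mostly a matter of pointing the client at a different backend.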
Quick Start & Requirements
Requires Node.js with Yarn, and a WebGPU-capable browser to run models.

Install dependencies and start the dev server:
yarn install
yarn dev

Build for production, or export a static site:
yarn build
yarn export

Run with Docker:
docker build -t webllm_chat .
docker run -d -p 3000:3000 webllm_chat
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats