Private AI chat app for running LLMs in the browser
Chatty provides a private, in-browser interface for interacting with large language models (LLMs), aiming to replicate the user experience of popular AI chat platforms. It targets users who prioritize data privacy and offline functionality, enabling them to run models like Gemma, Llama, and Mistral directly on their local hardware without server-side processing.
How It Works
Chatty leverages WebGPU for client-side LLM execution, allowing models to run entirely within the browser. Because all processing happens locally, chat data never leaves the device. It uses Xenova's Transformers.js for model inference and LangChain.js for features like file-based Q&A and custom memory, enabling offline use and local document analysis.
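As an illustration of the client-side inference this describes, here is a minimal Transformers.js sketch; the model name and generation options are placeholders rather than Chatty's actual configuration:

import { pipeline } from '@xenova/transformers';

// The first call downloads the model weights and caches them in the browser;
// subsequent runs can serve entirely from the cache, enabling offline use.
const generator = await pipeline('text-generation', 'Xenova/distilgpt2');

// Generation runs client-side, so the prompt never leaves the device.
const output = await generator('WebGPU lets the browser', { max_new_tokens: 32 });
console.log(output);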
Quick Start & Requirements
Run npm install followed by npm run dev for local development. A WebGPU-capable browser is required to run models (see Limitations & Caveats).
Highlighted Details
Maintenance & Community
Maintained by Addy Osmani and Jakob Hoeg Mørk. Contributions are welcome; see the project's contributing guidelines.
Licensing & Compatibility
The project appears to be MIT licensed, allowing for commercial use and integration with closed-source applications.
Limitations & Caveats
WebGPU support is primarily for Chrome and Edge, with experimental support in Firefox. The Dockerfile is not optimized for production. The roadmap indicates future support for multiple file embeddings and a prompt management system.
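Since WebGPU availability varies by browser, a generic capability check (not code from Chatty itself) can gate model loading:

// requestAdapter() resolves to null when no WebGPU adapter is available,
// e.g. in an older Firefox release or on an unsupported GPU.
const adapter = await navigator.gpu?.requestAdapter();
if (!adapter) {
  console.warn('WebGPU is unavailable; local model execution will not work.');
}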