BrowserAI by sauravpanda

SDK for running local LLMs in the browser

Created 8 months ago
1,220 stars

Top 32.2% on SourcePulse

Project Summary

BrowserAI enables running production-ready Large Language Models (LLMs) directly within the user's web browser, offering a private, fast, and zero-server-cost solution. It targets web developers building AI applications, companies requiring privacy-conscious AI, researchers, and hobbyists. The primary benefit is running powerful AI models locally, without complex infrastructure and without sending user data off-device.

How It Works

BrowserAI utilizes WebGPU for hardware-accelerated inference, achieving near-native performance for LLMs. It supports both the MLC and Transformers.js engines, allowing seamless switching between them and offering pre-optimized popular models. This approach democratizes AI deployment by eliminating server costs and enabling offline capabilities after initial model download.
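Because inference depends on WebGPU, a page can feature-detect support before attempting to load a model. A minimal sketch follows; the `hasWebGPU` helper is hypothetical (not part of BrowserAI) and takes a navigator-like object as a parameter so it can run outside a browser:

```typescript
// Hypothetical helper: report whether a navigator-like object exposes WebGPU.
// In a real page you would call hasWebGPU(navigator); WebGPU support is
// surfaced as the `navigator.gpu` property in Chrome/Edge 113+.
function hasWebGPU(nav: { gpu?: unknown }): boolean {
  return nav.gpu !== undefined && nav.gpu !== null;
}
```

In a supporting browser, `hasWebGPU(navigator)` returns `true`, and a page can fall back to a server-side or CPU path otherwise.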

Quick Start & Requirements

  • Install via npm: npm install @browserai/browserai or yarn: yarn add @browserai/browserai.
  • Requires a modern browser with WebGPU support (Chrome 113+, Edge 113+, or equivalents).
  • Hardware must support 16-bit floating-point operations for models with f16 requirements.
  • Documentation
  • Live Demo
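After installation, usage follows a load-then-generate pattern. The sketch below models that flow against an interface rather than the real class so it can run without WebGPU; the method names (`loadModel`, `generateText`) and the model identifier are assumptions based on the project's documented API and may differ:

```typescript
// Sketch of BrowserAI's load-then-generate flow. In the browser you would
// `import { BrowserAI } from "@browserai/browserai"` and pass an instance in.
interface BrowserAILike {
  loadModel(modelId: string): Promise<void>;
  generateText(prompt: string): Promise<string>;
}

// Load once (the model downloads on first use, then works offline),
// then reuse the same instance for subsequent prompts.
async function ask(ai: BrowserAILike, prompt: string): Promise<string> {
  await ai.loadModel("llama-3.2-1b-instruct"); // assumed model id
  return ai.generateText(prompt);
}
```

Keeping the instance around between calls avoids repeating the (potentially large) initial model download.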

Highlighted Details

  • 100% private processing occurs locally in the browser.
  • WebGPU acceleration provides high inference speeds.
  • Supports seamless switching between the MLC and Transformers.js engines.
  • Includes speech recognition (Whisper) and text-to-speech (Kokoro-TTS) capabilities.
  • Offers structured output generation with JSON schemas.
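Structured output pairs a prompt with a JSON schema so the model's reply can be consumed as data rather than free text. How BrowserAI enforces the schema internally is not shown here, but a caller-side check like the following hypothetical helper (not part of BrowserAI, covering only a minimal "required keys" subset of JSON Schema) illustrates the idea:

```typescript
// Hypothetical helper: parse a model's text output as JSON and verify it
// contains every property a minimal schema marks as required.
type MiniSchema = { required: string[] };

function parseStructured(
  output: string,
  schema: MiniSchema
): Record<string, unknown> {
  const value = JSON.parse(output) as Record<string, unknown>;
  for (const key of schema.required) {
    if (!(key in value)) {
      throw new Error(`missing required property: ${key}`);
    }
  }
  return value;
}
```

A real integration would hand the full schema to the library and keep a check like this as a defensive layer around the parsed result.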

Maintenance & Community

  • Active development with a roadmap outlining future enhancements.
  • Discord Community available for support and discussion.
  • Project is open source and welcomes contributions.

Licensing & Compatibility

  • Licensed under the MIT License.
  • Compatible with commercial use and closed-source linking due to permissive licensing.

Limitations & Caveats

BrowserAI's performance and model availability are dependent on the user's browser and hardware capabilities, particularly WebGPU support. Some models may have specific hardware requirements, such as f16 support.

Health Check

  • Last Commit: 1 week ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 3
  • Issues (30d): 1
  • Star History: 25 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems").

JittorLLMs by Jittor

0.0% · 2k stars
Low-resource LLM inference library
Created 2 years ago · Updated 6 months ago
Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Gabriel Almeida (Cofounder of Langflow), and 2 more.

torchchat by pytorch

0.1% · 4k stars
PyTorch-native SDK for local LLM inference across diverse platforms
Created 1 year ago · Updated 1 week ago
Starred by Sourabh Bajaj (Cofounder of Uplimit), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 3 more.

NextChat by ChatGPTNextWeb

0.1% · 86k stars
AI assistant for web, iOS, macOS, Android, Linux, and Windows
Created 2 years ago · Updated 3 days ago