BrowserAI by sauravpanda

SDK for running local LLMs in the browser

created 6 months ago
1,176 stars

Top 33.8% on sourcepulse

View on GitHub
Project Summary

BrowserAI enables running production-ready Large Language Models (LLMs) directly within the user's web browser, offering a private, fast, and zero-server-cost solution. It targets web developers building AI applications, companies requiring privacy-conscious AI, researchers, and hobbyists. The primary benefit is leveraging powerful AI models locally without complex infrastructure or data privacy concerns.

How It Works

BrowserAI utilizes WebGPU for hardware-accelerated inference, achieving near-native performance for LLMs. It supports both the MLC and Transformers.js engines, allows switching between them, and ships pre-optimized builds of popular models. This approach democratizes AI deployment by eliminating server costs and enabling offline use after the initial model download.
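As a rough illustration of that flow, the sketch below loads a pre-optimized model and runs a completion entirely client-side. The model identifier and method names mirror the project's quick-start style but should be treated as assumptions, not a verified API reference.

```typescript
// Minimal sketch of in-browser inference with BrowserAI.
// Model ID and method names are assumptions based on the project summary.
import { BrowserAI } from '@browserai/browserai';

const browserAI = new BrowserAI();

// The first call downloads the pre-optimized weights; later runs can work
// offline from the browser cache.
await browserAI.loadModel('llama-3.2-1b-instruct');

// Inference runs on the local GPU via WebGPU; no prompt data leaves the device.
const reply = await browserAI.generateText('Explain WebGPU in one sentence.');
console.log(reply);
```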

Quick Start & Requirements

  • Install via npm: npm install @browserai/browserai or yarn: yarn add @browserai/browserai.
  • Requires a modern browser with WebGPU support (Chrome 113+, Edge 113+, or equivalents).
  • Hardware must support 16-bit floating-point operations for models with f16 requirements (see the capability check after this list).
  • Documentation
  • Live Demo
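
Before loading an f16 model, the standard WebGPU API can be probed for adapter availability and the shader-f16 feature. The sketch below uses only browser APIs (navigator.gpu) and assumes WebGPU type definitions (e.g. @webgpu/types) are available to the TypeScript compiler.

```typescript
// Probe WebGPU availability and 16-bit float shader support before choosing
// a model variant. Uses only standard browser APIs.
async function checkGpuCapabilities(): Promise<{ webgpu: boolean; f16: boolean }> {
  if (!('gpu' in navigator)) {
    return { webgpu: false, f16: false }; // Browser has no WebGPU at all.
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    return { webgpu: false, f16: false }; // No usable GPU adapter found.
  }
  // 'shader-f16' is the WebGPU feature required by models with f16 weights.
  return { webgpu: true, f16: adapter.features.has('shader-f16') };
}

const caps = await checkGpuCapabilities();
if (!caps.webgpu) console.warn('WebGPU unavailable: use Chrome/Edge 113+ or equivalent.');
if (!caps.f16) console.warn('No shader-f16 support: prefer a quantized or f32 model variant.');
```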

Highlighted Details

  • 100% private: all processing occurs locally in the browser.
  • WebGPU acceleration provides high inference speeds.
  • Supports seamless switching between the MLC and Transformers.js engines.
  • Includes speech recognition (Whisper) and text-to-speech (Kokoro-TTS) capabilities.
  • Offers structured output generation with JSON schemas (sketched below).
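
A hypothetical sketch of the structured-output feature follows, reusing the browserAI instance from the earlier sketch; the option names (json_schema, response_format) are assumptions and may differ from the SDK's actual generateText signature.

```typescript
// Hypothetical sketch of structured output against a JSON schema.
// Option names are assumptions; consult the BrowserAI documentation for the
// exact generateText signature.
const raw = await browserAI.generateText(
  'Extract the city and country from: "I moved to Kyoto, Japan last year."',
  {
    json_schema: {
      type: 'object',
      properties: {
        city: { type: 'string' },
        country: { type: 'string' },
      },
      required: ['city', 'country'],
    },
    response_format: { type: 'json_object' },
  }
);
console.log(JSON.parse(raw as string)); // e.g. { city: "Kyoto", country: "Japan" }
```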

Maintenance & Community

  • Active development with a roadmap outlining future enhancements.
  • Discord Community available for support and discussion.
  • Project is open source and welcomes contributions.

Licensing & Compatibility

  • Licensed under the MIT License.
  • Compatible with commercial use and closed-source linking due to permissive licensing.

Limitations & Caveats

BrowserAI's performance and model availability depend on the user's browser and hardware, particularly WebGPU support. Some models have additional hardware requirements, such as 16-bit floating-point (f16) support.

Health Check

  • Last commit: 1 week ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 1
  • Issues (30d): 0

Star History

  • 115 stars in the last 90 days
