Browser-based Stable Diffusion demo with no server support
Top 13.5% on sourcepulse
This project enables Stable Diffusion image generation directly within web browsers, eliminating the need for server-side infrastructure. It targets web developers and users seeking privacy-preserving, cost-effective AI image generation capabilities, leveraging client-side hardware for computation.
How It Works
The project utilizes Apache TVM Unity, a machine learning compilation framework, to convert Stable Diffusion models (specifically Runway's v1-5) into a WebAssembly runtime. It employs TorchDynamo and Torch FX for model capture, TVM's TensorIR and MetaSchedule for shader optimization, and Emscripten for WebAssembly compilation. This approach allows models to run natively on client GPUs via WebGPU, with optimized shaders and static memory planning for efficient browser execution.
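The static memory planning mentioned above can be illustrated with a toy greedy offset allocator: given each intermediate tensor's byte size and live interval, it assigns fixed offsets inside one preallocated arena, reusing space once a tensor is dead. This is a hedged, simplified sketch of the general technique, not TVM's actual planner; the function name and tuple layout are hypothetical.

```python
# Toy static memory planner (illustrative only -- not TVM's algorithm).
# Tensors whose live intervals overlap must never share bytes; dead
# tensors free their region for reuse, shrinking the total arena.

def plan_memory(tensors):
    """tensors: list of (name, size_bytes, first_use, last_use).
    Returns (offsets dict, total arena size in bytes)."""
    placed = []   # (offset, size, first_use, last_use) of allocated blocks
    offsets = {}
    # Place larger tensors first to reduce fragmentation (common heuristic).
    for name, size, first, last in sorted(tensors, key=lambda t: -t[1]):
        offset = 0
        # Scan existing blocks in offset order, bumping past conflicts.
        for o, s, f, l in sorted(placed):
            lifetime_overlap = not (last < f or l < first)
            space_overlap = offset < o + s and o < offset + size
            if lifetime_overlap and space_overlap:
                offset = o + s  # skip past the conflicting block
        placed.append((offset, size, first, last))
        offsets[name] = offset
    arena = max((o + s for o, s, _, _ in placed), default=0)
    return offsets, arena
```

For example, three 4-byte tensors where `a` lives over steps 0-1, `b` over 1-2, and `c` over 2-3 need only an 8-byte arena: `c` reuses `a`'s slot because their lifetimes never overlap.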
Quick Start & Requirements
Install with pip3 install mlc-ai-nightly -f https://mlc.ai/wheels, or build from source.
Requirements: wasm-pack, Jekyll, and Chrome Canary.
Build with python3 build.py (for a local GPU) or python3 build.py --target webgpu (for WebGPU).
Run ./scripts/local_deploy_site.sh for web deployment, then open localhost:8888 in Chrome Canary with specific flags.
Highlighted Details
Maintenance & Community
Last commit 1 year ago; the project is inactive.
Licensing & Compatibility
Limitations & Caveats
Requires launching Chrome Canary with a specific flag (--enable-dawn-features=disable_robustness) for optimal performance.