Local Stable Diffusion inference on Apple devices
Maple Diffusion enables local Stable Diffusion inference on iOS and macOS using Swift and Apple's MPSGraph framework, with no Python runtime dependency. It targets developers and users who want on-device AI image generation, offering faster inference than CoreML-based alternatives through operator fusion and FP16 tensors.
How It Works
The project uses MPSGraph to execute the Stable Diffusion model directly on Apple Silicon hardware. It employs FP16 tensors in NHWC layout and relies on MPSGraph's operator fusion to optimize performance and stay within memory constraints, which matter most on iOS devices with limited RAM. Model weights are converted ahead of time into raw binary blobs that the Swift application loads directly.
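As a rough illustration of that pattern, the sketch below wraps a pre-converted FP16 blob as an MPSGraph constant and composes a fuseable op on the graph. This is not the project's actual loader; the file path, tensor shapes, and scale value are assumptions for illustration.

```swift
import Foundation
import MetalPerformanceShadersGraph

// Hypothetical sketch of the binary-blob loading pattern; the path and
// shapes below are illustrative, not the project's real file layout.
let graph = MPSGraph()

// Read a pre-converted FP16 weight blob straight into memory.
let blob = try! Data(contentsOf: URL(fileURLWithPath: "bins/unet_conv_in_weight.bin"))

// Wrap the raw bytes as a graph constant (FP16, NHWC-style shape).
let weight = graph.constant(blob, shape: [3, 3, 4, 320], dataType: .float16)

// Placeholder for a latent image tensor, also FP16/NHWC.
let latent = graph.placeholder(shape: [1, 64, 64, 4], dataType: .float16, name: "latent")

// Ops are composed on the graph and compiled once, which lets MPSGraph
// fuse adjacent operators; the scale factor here is illustrative.
let scaled = graph.multiplication(latent,
                                  graph.constant(0.18215, dataType: .float16),
                                  name: nil)
```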
Quick Start & Requirements
To build and run Maple Diffusion:

1. Create and activate a fresh Python 3.10 environment:
   conda create -n maple-diffusion python=3.10
   conda activate maple-diffusion
2. Install the Python dependencies:
   pip install torch typing_extensions numpy Pillow requests pytorch_lightning
3. Convert a PyTorch Stable Diffusion checkpoint to binary weight blobs:
   ./maple-convert.py <path_to_model>
4. Open the Xcode project, select a target device, add the "Increased Memory Limit" capability (see the entitlement snippet below), and build and run.
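The "Increased Memory Limit" capability corresponds to an entitlement in the app target's .entitlements file. If Xcode's capability editor is unavailable, adding the key by hand (the key name below is Apple's documented entitlement identifier) should have the same effect:

```xml
<!-- In the app target's .entitlements file -->
<key>com.apple.developer.kernel.increased-memory-limit</key>
<true/>
```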
Maintenance & Community
The project is maintained by Ollin. Related projects like Native Diffusion offer further improvements and Swift Package integration.
Licensing & Compatibility
The repository appears to be under the MIT License, allowing for commercial use and integration with closed-source applications.
Limitations & Caveats
Requires Xcode 14 and iOS 16. On iPhone, the "Increased Memory Limit" capability may need to be configured manually for the app to run without being killed. Older iOS versions, and devices with less than 6GB of RAM, may not be supported.
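A defensive runtime check on total device RAM can surface this limitation to users before generation starts. A minimal sketch follows; the 6GB threshold mirrors the caveat above and is an assumption, not a value from the project:

```swift
import Foundation

// Minimal sketch: warn when total device RAM is below the ~6GB that the
// caveats above suggest is needed; the threshold is an assumption here.
let physicalMemoryGB = Double(ProcessInfo.processInfo.physicalMemory) / 1_073_741_824
if physicalMemoryGB < 6.0 {
    print(String(format: "Only %.1f GB RAM detected; generation may be killed by the OS memory limit.",
                 physicalMemoryGB))
}
```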