SDK for on-device image generation on Apple Silicon
DiffusionKit enables on-device image generation for Apple Silicon, targeting developers and researchers who want to leverage diffusion models locally. It provides tools to convert PyTorch models to Core ML format and perform inference using MLX, offering a streamlined path for integrating advanced image generation capabilities into macOS and iOS applications.
How It Works
DiffusionKit converts PyTorch diffusion models (such as Stable Diffusion 3 and FLUX) into Apple's Core ML format, optimizing them for efficient on-device execution. It then uses MLX, Apple's machine-learning framework for Apple silicon, to run the image generation inference. This keeps computation on the device's own accelerators (the GPU via MLX, the Neural Engine via Core ML), delivering accelerated performance and privacy-preserving local processing.
Quick Start & Requirements
- Install with `pip install -e .` within a Python 3.11 conda environment.
- Model conversion from PyTorch produces Core ML's `.mlpackage` format.
- Generate an image from the command line: `diffusionkit-cli --prompt "a photo of a cat" --output-path <output_path>`
- Python API: instantiate `DiffusionPipeline` or `FluxPipeline` from `diffusionkit.mlx` (see the sketch after this list).
- Swift inference currently relies on Apple's `ml-stable-diffusion` for the Core ML backend, with ongoing development toward a holistic Swift package.
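A minimal Python sketch of the MLX path is shown below; the `model_version` identifier, constructor flags, and `generate_image` arguments follow the upstream usage pattern but are assumptions that may differ between releases.

```python
from diffusionkit.mlx import DiffusionPipeline

# Load SD3 Medium weights with half-precision weights/activations.
# The model_version string and flags are assumed; check the upstream
# README for the values supported by your installed release.
pipeline = DiffusionPipeline(
    model_version="argmaxinc/mlx-stable-diffusion-3-medium",
    shift=3.0,
    use_t5=False,
    low_memory_mode=True,
    a16=True,
    w16=True,
)

# Generate a 512x512 image; latent size is 1/8 of the output resolution.
HEIGHT, WIDTH = 512, 512
image, _ = pipeline.generate_image(
    "a photo of a cat",
    cfg_weight=5.0,
    num_steps=50,
    latent_size=(HEIGHT // 8, WIDTH // 8),
)
image.save("cat.png")  # PIL-style image assumed
```

Swapping `DiffusionPipeline` for `FluxPipeline` targets FLUX checkpoints, which typically use far fewer steps and a lower (or zero) CFG weight.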
Highlighted Details
- Converted models are exported in Core ML's format (`.mlpackage`).
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats