CLI tool for Stable Diffusion inference acceleration
Agile Diffusers Inference (ADI) is a C++ library and CLI tool designed to accelerate Stable Diffusion inference by leveraging ONNXRuntime. It targets engineers and researchers who need efficient, cross-platform deployment of diffusion models, offering a smaller package size and higher performance than Python-based solutions.
How It Works
ADI uses ONNXRuntime as its core inference engine, benefiting from its open-source nature, scalability, high performance, and broad cross-platform support (CPU, GPU, TPU). Models are converted to the .onnx format, enabling efficient execution across varied hardware. The library provides a C++ framework for direct integration and a CLI for straightforward usage.
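To make the C++ integration path concrete, here is a minimal sketch of loading a converted .onnx model with the ONNXRuntime C++ API. This is not ADI's actual code: it requires the ONNXRuntime SDK to build, and the model path `unet.onnx` and thread count are placeholder assumptions.

```cpp
// Sketch: creating an ONNXRuntime session for a converted .onnx model.
// Requires the ONNXRuntime SDK headers and library; "unet.onnx" is a
// placeholder path, not a file shipped with ADI.
#include <onnxruntime_cxx_api.h>

int main() {
    // Environment holds global state such as the logger.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "adi-example");

    Ort::SessionOptions options;
    options.SetIntraOpNumThreads(4);  // placeholder thread count
    // A GPU execution provider (e.g. CUDA) could be appended to
    // `options` here when built with GPU support.

    // Load the converted model; session.Run(...) would then execute
    // one forward pass (e.g. a denoising step) per call.
    Ort::Session session(env, "unet.onnx", options);
    return 0;
}
```

The same session object can be reused across denoising steps, which is what makes a persistent C++ process attractive compared to repeatedly paying Python interpreter and framework startup costs.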
Quick Start & Requirements
- macOS (Homebrew): `brew tap windsander/adi-stable-diffusion && brew install adi`
- Windows (Chocolatey): download the `.nupkg` package from the releases page and install via `choco install adi.1.0.1.nupkg -y`
- Build from source: run `../auto_build.sh` with platform-specific parameters. Advanced options allow enabling CUDA, TensorRT, and custom compiler configurations.
Highlighted Details
Maintenance & Community
The project is maintained by Windsander. Community channels are not explicitly mentioned in the README.
Licensing & Compatibility
The project is licensed under the MIT License, permitting commercial use and integration with closed-source applications.
Limitations & Caveats
Support for SD v2.x, v3.x, SDXL, and SVD is listed as under development or experimental. Only some scheduler implementations are marked as tested, so others may not be fully validated. The README indicates that ONNX models and converters must be prepared manually.
Last updated 11 months ago; the project is marked inactive.