Framework for dynamic Stable Diffusion Mixture of Experts, no training needed
SegMoE provides a framework for dynamically combining Stable Diffusion models into a Mixture of Experts (MoE) without retraining. This allows users to create larger models with enhanced knowledge, better prompt adherence, and improved image quality, targeting users who want to leverage multiple fine-tuned models efficiently.
How It Works
SegMoE dynamically merges Stable Diffusion models by mixing specific layers (feedforward, attention, or all) based on prompt-derived gate weights. This approach allows for the creation of larger, more capable models on-the-fly by leveraging the distinct strengths of individual fine-tuned models, inspired by similar techniques in large language models.
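To make the gating idea concrete, here is a minimal, purely illustrative Python sketch (not the SegMoE implementation; the function names and shapes are hypothetical): each expert is associated with a gate prompt, the user prompt is scored against those gate prompts, and the resulting weights mix the outputs of the corresponding layers.

```python
# Illustrative sketch only -- not SegMoE's actual code. Shows how prompt-derived
# gate weights could select and mix expert layers (e.g. feedforward blocks).
import torch
import torch.nn.functional as F

def gate_weights(prompt_emb: torch.Tensor,
                 expert_prompt_embs: torch.Tensor,
                 top_k: int = 2) -> torch.Tensor:
    """Score each expert by similarity between the user prompt embedding and the
    expert's gate-prompt embedding, keep the top-k experts, and renormalize."""
    scores = F.cosine_similarity(prompt_emb.unsqueeze(0), expert_prompt_embs, dim=-1)
    topk = torch.topk(scores, k=top_k)
    weights = torch.zeros_like(scores)
    weights[topk.indices] = F.softmax(topk.values, dim=-1)
    return weights

def mixed_feedforward(hidden: torch.Tensor,
                      experts: list[torch.nn.Module],
                      weights: torch.Tensor) -> torch.Tensor:
    """Weighted combination of each expert's layer output."""
    out = torch.zeros_like(hidden)
    for w, expert in zip(weights, experts):
        if w > 0:  # skip experts the gate zeroed out
            out = out + w * expert(hidden)
    return out
```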
Quick Start & Requirements
```
pip install segmoe
```
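A minimal quick-start sketch, following the usage pattern shown in the SegMoE README: load a pretrained SegMoE checkpoint from the Hugging Face Hub and generate an image. The model id (`segmind/SegMoE-4x2-v0`) and the exact call signature are assumptions to verify against the current release; SegMoE builds on diffusers and PyTorch, so a CUDA-capable GPU is assumed.

```python
# Quick-start sketch; verify the model id and arguments against the SegMoE repo.
from segmoe import SegMoEPipeline

pipeline = SegMoEPipeline("segmind/SegMoE-4x2-v0", device="cuda")

image = pipeline(
    prompt="cosmic canvas, orange city background, painting of a chubby cat",
    negative_prompt="nsfw, bad quality, worse quality",
    height=1024,
    width=1024,
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
image.save("segmoe_sample.png")
```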
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The framework is not yet optimized for speed or memory usage. While merging improves prompt adherence and overall fidelity relative to naive model switching, a merged model does not surpass the quality of a single well-tuned expert without further training.