MDistiller: a PyTorch library for knowledge distillation research
This library provides a PyTorch framework for implementing knowledge distillation techniques, targeting researchers and practitioners in computer vision. It contains the official implementations of "Decoupled Knowledge Distillation" (CVPR 2022) and "DOT: A Distillation-Oriented Trainer" (ICCV 2023), which report consistent accuracy gains over baseline distillation on standard benchmarks such as CIFAR-100 and ImageNet.
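For context, Decoupled Knowledge Distillation splits the classical KD objective into a target-class term (TCKD) and a non-target-class term (NCKD) that are weighted independently. The following is a minimal PyTorch sketch of that loss; the function name and the hyperparameter defaults (alpha, beta, T) are illustrative assumptions, not the library's exact API:

```python
import torch
import torch.nn.functional as F

def dkd_loss(logits_student, logits_teacher, target, alpha=1.0, beta=8.0, T=4.0):
    """Sketch of the decoupled KD loss (TCKD + NCKD); defaults are illustrative."""
    # One-hot mask over each sample's ground-truth class.
    gt_mask = F.one_hot(target, num_classes=logits_student.size(1)).bool()

    p_s = F.softmax(logits_student / T, dim=1)
    p_t = F.softmax(logits_teacher / T, dim=1)

    # TCKD: KL divergence between the binary (target vs. all-non-target)
    # probability distributions of student and teacher.
    b_s = torch.stack([(p_s * gt_mask).sum(1), (p_s * ~gt_mask).sum(1)], dim=1)
    b_t = torch.stack([(p_t * gt_mask).sum(1), (p_t * ~gt_mask).sum(1)], dim=1)
    tckd = F.kl_div(b_s.log(), b_t, reduction="batchmean") * (T ** 2)

    # NCKD: KL divergence among non-target classes only; the target logit is
    # suppressed with a large negative offset before the softmax.
    log_p_s = F.log_softmax(logits_student / T - 1000.0 * gt_mask, dim=1)
    p_t_nc = F.softmax(logits_teacher / T - 1000.0 * gt_mask, dim=1)
    nckd = F.kl_div(log_p_s, p_t_nc, reduction="batchmean") * (T ** 2)

    return alpha * tckd + beta * nckd
```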
How It Works
MDistiller supports a wide array of distillation methods, including KD, FitNet, AT, NST, PKT, KDSVD, OFD, RKD, VID, SP, CRD, ReviewKD, and DKD. The framework is designed to be extensible, allowing users to easily add custom distillation algorithms by defining new distiller classes and registering them within the library. It leverages PyTorch for model implementation and training, with optional integration of Weights & Biases for logging.
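To illustrate the extension pattern, here is a minimal sketch of a vanilla Hinton-style KD method wrapped as a distiller class. The `forward_train` signature and the named-losses dictionary mirror the pattern described above, but the exact base class and registration mechanism are assumptions; consult the library's distiller modules for the real interface.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VanillaKD(nn.Module):
    """Hinton-style KD as a distiller class (illustrative, not the exact API)."""

    def __init__(self, student, teacher, temperature=4.0, kd_weight=1.0):
        super().__init__()
        self.student = student
        self.teacher = teacher.eval()  # teacher stays frozen during distillation
        self.T = temperature
        self.kd_weight = kd_weight

    def forward_train(self, image, target):
        logits_s = self.student(image)
        with torch.no_grad():
            logits_t = self.teacher(image)
        loss_ce = F.cross_entropy(logits_s, target)
        # Temperature-scaled KL divergence between softened distributions.
        loss_kd = self.kd_weight * (self.T ** 2) * F.kl_div(
            F.log_softmax(logits_s / self.T, dim=1),
            F.softmax(logits_t / self.T, dim=1),
            reduction="batchmean",
        )
        # Returning student logits plus a dict of named losses lets a trainer
        # sum the terms for backprop and log each one separately.
        return logits_s, {"loss_ce": loss_ce, "loss_kd": loss_kd}
```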
Quick Start & Requirements
Install the dependencies, then set up the package in development mode:

sudo pip3 install -r requirements.txt
sudo python3 setup.py develop
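Training is then typically launched through a config-driven script; for example (the config path below is illustrative of the repository layout, not guaranteed):

python3 tools/train.py --cfg configs/cifar100/dkd/res32x4_res8x4.yaml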
Maintenance & Community
The project is associated with Megvii Research. It builds upon existing codebases from CRD and ReviewKD.
Licensing & Compatibility
Released under the MIT license, permitting commercial use and integration with closed-source projects.
Limitations & Caveats
The specified environment requires older versions of PyTorch (1.9.0) and torchvision (0.10.0), which may pose compatibility challenges with newer projects or hardware.