distiller by IntelLabs

Neural network compression research toolkit

Created 7 years ago
4,401 stars

Top 11.1% on SourcePulse

Summary

IntelLabs/distiller is a Python package designed for neural network compression research, offering tools for sparsity, quantization, and knowledge distillation. It targets researchers and engineers seeking to reduce model size, improve inference speed, and lower energy consumption in deep learning models. The package provides a PyTorch environment for prototyping and analyzing various compression algorithms.

How It Works

Distiller facilitates network compression through a flexible PyTorch framework. It supports diverse techniques including element-wise and structured weight pruning (e.g., kernel-wise, filter-wise, channel-wise), pruning-sensitivity analysis, automated model compression (AMC), and various quantization methods (post-training and quantization-aware) with customizable bit-widths. The library also integrates knowledge distillation and allows flexible scheduling of compression tasks, with schedules defined in YAML files.
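A minimal sketch of driving a YAML-defined schedule from a training loop is shown below. It assumes the distiller.file_config helper and the CompressionScheduler callback names used in Distiller's sample image-classifier script (on_epoch_begin, on_minibatch_begin, before_backward_pass, on_minibatch_end, on_epoch_end); the schedule file name is hypothetical, and exact signatures should be checked against the documentation.

    # Hedged sketch: running a YAML-defined compression schedule inside a
    # standard PyTorch training loop. 'schedule.yaml' is a placeholder path.
    import torch
    import torch.nn as nn
    import torchvision
    import distiller

    model = torchvision.models.resnet18()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    # Tiny random dataset so the sketch is self-contained.
    train_loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(torch.randn(8, 3, 224, 224),
                                       torch.randint(0, 1000, (8,))),
        batch_size=4)

    # Parse the YAML schedule (pruners, quantizers, policies) into a scheduler.
    scheduler = distiller.file_config(model, optimizer, 'schedule.yaml')

    for epoch in range(1):
        scheduler.on_epoch_begin(epoch)
        for batch_id, (inputs, targets) in enumerate(train_loader):
            scheduler.on_minibatch_begin(epoch, batch_id, len(train_loader))
            loss = criterion(model(inputs), targets)
            # The scheduler may add regularization or distillation terms here.
            loss = scheduler.before_backward_pass(epoch, batch_id,
                                                  len(train_loader), loss)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.on_minibatch_end(epoch, batch_id, len(train_loader))
        scheduler.on_epoch_end(epoch)

The callbacks are the hooks through which pruning masks are applied and policy losses are injected, which is why the scheduler must wrap every stage of the loop rather than being called once up front.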

Quick Start & Requirements

  • Installation: Clone the repository (git clone https://github.com/IntelLabs/distiller.git), create and activate a Python virtual environment (python3 -m venv env, source env/bin/activate), then install in development mode (cd distiller, pip3 install -e .). A quick post-install sanity check is sketched after this list.
  • Prerequisites: Python 3.5, PyTorch 1.3.1, TorchVision 0.4.2; tested on Ubuntu 16.04 LTS. If you are not running CUDA 10.1, GPU use may require small code adjustments.
  • Documentation: https://intellabs.github.io/distiller
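
A hedged post-install sanity check is below; distiller.__version__ is an assumption (pip3 show distiller works regardless).

    # Run inside the activated virtual environment after 'pip3 install -e .'.
    import torch
    import distiller

    print("torch:", torch.__version__)   # ~1.3.1 in the tested setup
    print("distiller:", getattr(distiller, "__version__", "unknown"))
    print("CUDA available:", torch.cuda.is_available())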

Highlighted Details

  • Supports automated model compression (AMC) and flexible compression scheduling via YAML.
  • Implements a wide range of pruning techniques, including structured pruning for convolutions and fully-connected layers.
  • Offers both post-training quantization and quantization-aware training capabilities (a hedged usage sketch follows this list).
  • Includes sample Jupyter notebooks for experiment planning and results analysis, demonstrating techniques like sensitivity analysis and performance graphing.
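
As an illustration of the quantization workflow mentioned above, here is a hedged sketch of post-training linear quantization. It assumes the PostTrainLinearQuantizer class and its prepare_model method from Distiller's quantization docs; constructor parameter names vary between releases, so treat it as illustrative rather than definitive.

    # Hedged sketch: 8-bit post-training quantization of a torchvision model.
    import torch
    import torchvision
    from distiller.quantization import PostTrainLinearQuantizer

    model = torchvision.models.resnet18(pretrained=True).eval()

    # Assumed constructor arguments; check the docs for the exact names.
    quantizer = PostTrainLinearQuantizer(model,
                                         bits_activations=8,
                                         bits_parameters=8)

    # Replaces supported modules with quantized wrappers; the dummy input
    # lets Distiller trace the model graph.
    quantizer.prepare_model(torch.randn(1, 3, 224, 224))

    with torch.no_grad():
        print(model(torch.randn(1, 3, 224, 224)).shape)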

Maintenance & Community

This project is no longer maintained by Intel and has been identified as having known security escapes. Intel has ceased all development, maintenance, bug fixes, and contributions; the project is effectively discontinued.

Licensing & Compatibility

  • License: Apache License 2.0.
  • Compatibility: While the license is permissive for commercial use, the project's discontinuation and security vulnerabilities render it unsuitable for production environments or any sensitive applications. The tested environment (Ubuntu 16.04, Python 3.5, PyTorch 1.3.1) is significantly outdated.

Limitations & Caveats

The primary limitation is the project's discontinuation by Intel due to identified security escapes, rendering it unsupported and potentially unsafe. Furthermore, the tested environment is outdated, and users may face compatibility issues with modern hardware and software stacks.

Health Check

  • Last Commit: 2 years ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 5 stars in the last 30 days

Explore Similar Projects

  • BIG-bench by google: collaborative benchmark for probing and extrapolating LLM capabilities. Top 0.1% on SourcePulse, ~3k stars. Created 4 years ago; updated 1 year ago.
  • text-to-text-transfer-transformer by google-research: unified text-to-text transformer for NLP research. Top 0.1% on SourcePulse, ~6k stars. Created 6 years ago; updated 5 months ago.