Resource for neural network pruning papers and code
This repository is a curated collection of academic papers and open-source code on neural network pruning, intended as a comprehensive resource for researchers and practitioners in model compression. It provides an up-to-date overview of the field, categorizing pruning techniques by when they are applied, how they are learned, and their application domains, with particular attention to recent advances in pruning large language models and vision transformers.
How It Works
The project organizes pruning literature and code based on a detailed taxonomy, classifying methods by their application timing (before, during, or after training), learning paradigms (e.g., continual, contrastive, federated learning), and specific model architectures (CNNs, ViTs, BERTs, LLMs, Diffusion Models). It meticulously lists papers with their venues, types, algorithm names, and code availability, facilitating a structured understanding of the research landscape.
Quick Start & Requirements
This repository is a curated list of papers and code; it does not have a direct installation or execution command. Users will need to follow individual paper links to access and utilize the associated codebases, which typically require Python and deep learning frameworks like PyTorch or TensorFlow.
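The linked codebases differ in framework and scope, but many build on the same core operation: removing low-magnitude weights. As an illustration only (not code from any linked project), here is a minimal NumPy sketch of unstructured magnitude pruning; the function name and threshold scheme are assumptions for this example:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so roughly `sparsity`
    fraction of the weights become zero (illustrative sketch only)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold;
    # ties at the threshold may prune slightly more than k weights.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.5, -0.1],
              [0.05, -0.8]])
pruned = magnitude_prune(w, 0.5)  # keeps the two largest-magnitude weights
```

PyTorch users will find similar functionality built in (e.g. `torch.nn.utils.prune`), but the exact API each linked paper's code exposes varies and should be checked per project.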
Highlighted Details
Maintenance & Community
The repository is maintained by hrcheng1066 and appears to be actively updated, as indicated by the inclusion of 2024 publications. It cites other relevant "awesome" lists in the field, suggesting community engagement.
Licensing & Compatibility
The repository is a collection of links and summaries rather than hosted code, so licensing is governed by the licenses of the individual projects it links to. Suitability for commercial use therefore depends on each linked project's license.
Limitations & Caveats
As a curated list, the repository does not provide a unified framework or tool for pruning. Users must individually assess and integrate the code from linked papers, which may vary in quality, documentation, and framework compatibility.