PyTorch library for model interpretability research
Captum is a PyTorch library for model interpretability, offering a suite of algorithms to understand feature importance, neuron contributions, and model behavior. It targets ML researchers and developers seeking to debug, improve, and explain their models, providing insights into what drives predictions.
How It Works
Captum implements a range of attribution algorithms, including Integrated Gradients, DeepLift, and GradientShap, which attribute a model's predictions to its input features or internal components. It leverages PyTorch's autograd machinery to compute these attributions efficiently, often comparing against baseline inputs or injecting noise to improve robustness. The library also provides layer- and neuron-level variants of these algorithms, enabling deeper inspection of model internals.
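To illustrate the core idea behind Integrated Gradients, here is a minimal, library-free sketch in pure Python. It is not Captum's implementation (which operates on PyTorch models via autograd); the function names here are hypothetical, and the gradient is supplied analytically for a toy linear model.

```python
def integrated_gradients(grad_f, x, baseline, steps=100):
    """Approximate the path integral of gradients from baseline to x
    (Riemann sum), then scale by the input-baseline difference."""
    n = len(x)
    avg_grad = [0.0] * n
    for k in range(1, steps + 1):
        alpha = k / steps
        # point on the straight-line path between baseline and x
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(n):
            avg_grad[i] += g[i] / steps
    return [(xi - b) * a for xi, b, a in zip(x, baseline, avg_grad)]

# Toy model: f(x) = 2*x0 + 3*x1, whose gradient is constant [2, 3].
f = lambda x: 2 * x[0] + 3 * x[1]
grad_f = lambda x: [2.0, 3.0]

attrs = integrated_gradients(grad_f, x=[1.0, 1.0], baseline=[0.0, 0.0])
# Completeness property: the attributions sum to f(x) - f(baseline).
```

In Captum itself the equivalent workflow is to wrap a PyTorch model with `captum.attr.IntegratedGradients` and call its `attribute` method on input tensors, which computes the gradients via autograd rather than analytically.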
Quick Start & Requirements
Captum requires a working PyTorch installation. Install the latest release with pip:
pip install captum
Or with conda, from either channel:
conda install captum -c pytorch
conda install captum -c conda-forge
For a development install, run from a clone of the repository:
pip install -e .[dev]
To also pull in the tutorial dependencies:
pip install -e .[tutorials]