PyTorch SDK for explainable AI in computer vision
This library provides advanced AI explainability methods for PyTorch computer vision models, targeting researchers and developers who need to understand model predictions. It offers a comprehensive suite of state-of-the-art techniques for diagnosing model behavior, benchmarking new methods, and visualizing feature attributions.
How It Works
The package implements a variety of pixel attribution methods, including GradCAM, HiResCAM, ScoreCAM, and AblationCAM, by weighting model activations based on gradients or output perturbations. It supports advanced use cases like object detection and semantic segmentation by allowing custom reshape_transform and model_target functions to adapt to diverse architectures and tasks beyond standard classification.
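The gradient-weighting idea behind GradCAM can be sketched in a few lines of NumPy. The array names and shapes here are illustrative, not the library's API:

```python
import numpy as np

# Illustrative shapes: 8 activation maps of 7x7 from a conv layer,
# paired with gradients of the target score w.r.t. those activations.
activations = np.random.rand(8, 7, 7)   # A_k: feature maps
gradients = np.random.rand(8, 7, 7)     # dy/dA_k

# GradCAM: weight each map by its global-average-pooled gradient,
# sum across channels, then keep only positive evidence with ReLU.
weights = gradients.mean(axis=(1, 2))                                  # alpha_k
cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)

# Normalize to [0, 1] for visualization as a heatmap.
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

Perturbation-based methods like ScoreCAM and AblationCAM replace the gradient term with the change in model output when activations are masked or ablated, which is why they need many more forward passes.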
Quick Start & Requirements
pip install grad-cam
Maintenance & Community
The project is actively maintained by Jacob Gildenblat and contributors. Community support channels are not explicitly mentioned in the README.
Licensing & Compatibility
The project is licensed under the MIT License, permitting commercial use and integration with closed-source projects.
Limitations & Caveats
Some methods like ScoreCAM and AblationCAM can be computationally intensive due to multiple forward passes, with batch size configurable for performance tuning. Adapting to highly custom architectures may require understanding the reshape_transform and model_target concepts.