Explainability toolbox for neural networks
Xplique is a Python toolkit for neural network explainability, offering a comprehensive suite of tools for understanding complex models. It targets researchers and practitioners in AI, providing methods to analyze model behavior, identify influential features, and extract human-understandable concepts.
How It Works
Xplique is structured into four core modules: Attribution Methods, Feature Visualization, Concepts, and Metrics. It implements state-of-the-art techniques for generating feature attributions (e.g., Grad-CAM, Integrated Gradients), visualizing internal representations, extracting human-interpretable concepts (e.g., CAV, CRAFT), and evaluating explanation quality with a range of metrics. The library supports both TensorFlow and PyTorch models, with a dedicated wrapper for PyTorch integration.
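As a minimal sketch of how the Attribution and Metrics modules fit together, the snippet below assumes the `Saliency` explainer and `Deletion` metric exposed by the library (an explainer's `explain` takes a batch of inputs plus one-hot targets, and a metric's `evaluate` scores the resulting attributions); the model and data are random placeholders, and exact behavior may vary across versions:

```python
import tensorflow as tf
from xplique.attributions import Saliency
from xplique.metrics import Deletion

# Stand-in model and data: any trained tf.keras classifier follows the same pattern.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])
images = tf.random.uniform((4, 32, 32, 3))    # batch of inputs
labels = tf.one_hot([0, 1, 2, 3], depth=10)   # one-hot targets

# Attribution Methods module: wrap the model in an explainer, then explain a batch.
# Other explainers (e.g., GradCAM, IntegratedGradients) share this interface.
explainer = Saliency(model)
explanations = explainer.explain(images, labels)

# Metrics module: score the explanations, here with the Deletion fidelity metric.
metric = Deletion(model, images, labels)
print(metric.evaluate(explanations))
```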
Quick Start & Requirements
pip install xplique
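After installation, TensorFlow models can be passed to explainers directly; for PyTorch models, the library provides a dedicated wrapper. The sketch below assumes Xplique's `TorchWrapper` with channel-last NumPy inputs and one-hot targets, mirroring the TensorFlow workflow; argument names and conventions may differ between versions:

```python
import numpy as np
import torch
from xplique.wrappers import TorchWrapper
from xplique.attributions import Saliency

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder PyTorch classifier; any nn.Module mapping a batch to logits works.
torch_model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(32 * 32 * 3, 10),
).eval().to(device)

# The wrapper makes the PyTorch model callable by Xplique's explainers.
wrapped_model = TorchWrapper(torch_model, device)

images = np.random.rand(4, 32, 32, 3).astype(np.float32)  # channel-last inputs
labels = np.eye(10)[[0, 1, 2, 3]].astype(np.float32)      # one-hot targets

explainer = Saliency(wrapped_model)
explanations = explainer.explain(images, labels)
```

Once wrapped, the same explainers and metrics can be applied to the PyTorch model unchanged.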
Highlighted Details
Maintenance & Community
The project is actively developed by the DEEL project team, with contributions welcomed. Further information and community links are available in the README.
Licensing & Compatibility
Released under the MIT license, allowing for commercial use and integration with closed-source projects.
Limitations & Caveats
Some methods may not function as expected with Keras 3.X (TensorFlow 2.16+); TensorFlow 2.15 or earlier is recommended for optimal compatibility. The "Example-based" module is in its early stages.