AI explainability toolkit for data and ML models
AI Explainability 360 (AIX360) is an open-source Python toolkit designed to provide a comprehensive suite of algorithms for interpreting and explaining machine learning models and datasets across tabular, text, image, and time-series data. It caters to researchers and data scientists by offering a structured approach to understanding various explainability techniques and their applicability.
How It Works
AIX360 implements a wide array of explainability algorithms, categorized by their approach (data vs. model, direct vs. post-hoc, local vs. global) and data type. It includes methods like ProtoDash, LIME, SHAP, CEM, and various time-series specific adaptations, alongside proxy explainability metrics such as faithfulness and monotonicity. The toolkit is built with extensibility in mind, allowing users to contribute new algorithms and use cases.
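The faithfulness metric mentioned above can be illustrated with a small, library-free sketch: score an attribution vector by correlating each feature's importance with the drop in model output when that feature is replaced by a baseline value. This is a simplified illustration (zero baseline, Pearson correlation), not the toolkit's exact implementation; `faithfulness_score` and `predict` are hypothetical names.

```python
import numpy as np

def faithfulness_score(predict_fn, x, attributions, baseline=None):
    """Faithfulness proxy: Pearson correlation between each feature's
    attribution and the drop in model output when that feature is
    replaced by a baseline value. Higher means more faithful."""
    x = np.asarray(x, dtype=float)
    if baseline is None:
        baseline = np.zeros_like(x)          # assume zero baseline
    base_pred = predict_fn(x[None, :])[0]
    drops = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]              # ablate feature i
        drops.append(base_pred - predict_fn(x_pert[None, :])[0])
    return np.corrcoef(attributions, drops)[0, 1]

# Toy linear model: exact per-feature contributions (w * x) should be
# perfectly faithful, i.e. correlation 1.0.
w = np.array([3.0, -2.0, 0.5])
predict = lambda X: X @ w
x = np.array([1.0, 2.0, -1.0])
attrs = w * x
print(round(faithfulness_score(predict, x, attrs), 4))  # → 1.0
```

For a linear model, ablating feature i changes the output by exactly `w[i] * x[i]`, so the drops equal the attributions and the correlation is 1; a deliberately wrong attribution (e.g. `-attrs`) would score -1.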
Quick Start & Requirements
AIX360 recommends `conda` for environment management. After cloning the repository, install the dependencies for specific algorithms with `pip install -e .[<algo1>,<algo2>,...]`, or install directly from GitHub with `pip install -e git+https://github.com/Trusted-AI/AIX360.git#egg=aix360[<algo1>,<algo2>,...]`. `cmake` may be required for some installations, and Docker support is available.
Maintenance & Community
The project encourages community contributions via Slack. Key contributors and authors are listed in the paper citation.
Licensing & Compatibility
The project's licensing details are available in the `LICENSE` file and the supplementary `license` folder. Terms for commercial use or closed-source linking are not explicitly summarized, so consult those files directly before adopting the toolkit in such settings.
Limitations & Caveats
The library is noted as still being in development. Some algorithms require specific Python versions (e.g., 3.6 for contrastive and SHAP, 3.10 for most others), potentially necessitating multiple environments or careful dependency management.
The repository was last updated roughly 5 months ago and is currently marked inactive.