AIX360 by Trusted-AI

AI explainability toolkit for data and ML models

created 6 years ago
1,713 stars

Top 25.4% on sourcepulse

View on GitHub
1 Expert Loves This Project
Project Summary

AI Explainability 360 (AIX360) is an open-source Python toolkit designed to provide a comprehensive suite of algorithms for interpreting and explaining machine learning models and datasets across tabular, text, image, and time-series data. It caters to researchers and data scientists by offering a structured approach to understanding various explainability techniques and their applicability.

How It Works

AIX360 implements a wide array of explainability algorithms, categorized by their approach (data vs. model, direct vs. post-hoc, local vs. global) and data type. It includes methods like ProtoDash, LIME, SHAP, CEM, and various time-series specific adaptations, alongside proxy explainability metrics such as faithfulness and monotonicity. The toolkit is built with extensibility in mind, allowing users to contribute new algorithms and use cases.
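As a concrete illustration of the proxy metrics mentioned above, here is a minimal pure-NumPy sketch of a faithfulness-style score (the toolkit's own implementation in its metrics module differs; the function name here is hypothetical): it correlates each feature's attribution with the drop in model output when that feature is ablated to a base value.

```python
import numpy as np

def faithfulness(predict, x, attributions, base=0.0):
    """Correlate each feature's attribution with the drop in the model's
    output when that feature is replaced by a base value. A score near 1
    means the attributions track the model's actual behavior."""
    drops = []
    for i in range(len(x)):
        x_ablated = x.copy()
        x_ablated[i] = base          # ablate feature i
        drops.append(predict(x) - predict(x_ablated))
    return np.corrcoef(attributions, drops)[0, 1]

# Toy linear model: for f(x) = w . x, the exact attributions are w * x,
# so a faithful explanation should score ~1.0.
w = np.array([2.0, -1.0, 0.5])
predict = lambda x: float(w @ x)
x = np.array([1.0, 3.0, 2.0])
attrs = w * x
print(faithfulness(predict, x, attrs))  # → 1.0
```

The same ablate-and-correlate pattern underlies several of the toolkit's post-hoc evaluation metrics.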

Quick Start & Requirements

  • Installation: conda is recommended for environment management. After cloning the repository, install the dependencies for specific algorithms with pip install -e .[<algo1>,<algo2>,...], or install directly from GitHub with pip install -e git+https://github.com/Trusted-AI/AIX360.git#egg=aix360[<algo1>,<algo2>,...].
  • Prerequisites: Python 3.6-3.10, depending on the algorithm configuration. cmake may be required for some installations. Docker support is available.
  • Resources: Requires downloading datasets separately.
  • Documentation: Tutorials and example notebooks are available within the repository.
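The installation steps above can be sketched as follows; the <algo1>,<algo2> extras are placeholders for whichever algorithm dependencies you need, as in the repository's instructions.

```shell
# Create and activate an isolated environment (conda recommended).
conda create -n aix360 python=3.10
conda activate aix360

# Editable install from a local clone, with per-algorithm extras.
git clone https://github.com/Trusted-AI/AIX360.git
cd AIX360
pip install -e .[<algo1>,<algo2>]
```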

Highlighted Details

  • Supports a broad spectrum of explainability algorithms, including novel methods like CoFrNets and Ecertify.
  • Offers guidance and a taxonomy tree to help users select appropriate explanation techniques.
  • Includes interactive experiences and tutorials for different user personas.
  • Designed for extensibility, encouraging community contributions.

Maintenance & Community

The project encourages community contributions via Slack. Key contributors and authors are listed in the paper citation.

Licensing & Compatibility

The project's licensing details are available in the LICENSE file and the supplementary license folder. Compatibility for commercial use or closed-source linking is not explicitly stated in this summary; check the license terms before redistributing.

Limitations & Caveats

The library is still under active development. Some algorithms require specific Python versions (e.g., 3.6 for contrastive and SHAP, 3.10 for most others), which may necessitate maintaining multiple environments or careful dependency management.
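One way to manage the conflicting Python requirements noted above is to keep one environment per version; a minimal sketch, assuming conda and placeholder extras:

```shell
# Python 3.6 environment for the older algorithms (e.g. contrastive, SHAP).
conda create -n aix360-py36 python=3.6

# Python 3.10 environment for most other algorithms.
conda create -n aix360-py310 python=3.10

# Activate whichever matches the explainer you need, then install its extras.
conda activate aix360-py310
pip install -e .[<algo1>]
```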

Health Check
Last commit

5 months ago

Responsiveness

Inactive

Pull Requests (30d)
0
Issues (30d)
1
Star History
33 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering and Designing Machine Learning Systems), Elie Bursztein (Cybersecurity Lead at Google DeepMind), and 1 more.

alibi by SeldonIO

Top 0.1% · 3k stars
Python library for ML model inspection and interpretation
created 6 years ago · updated 1 month ago