AIX360 by Trusted-AI

AI explainability toolkit for data and ML models

Created 6 years ago
1,751 stars

Top 24.3% on SourcePulse

Project Summary

AI Explainability 360 (AIX360) is an open-source Python toolkit designed to provide a comprehensive suite of algorithms for interpreting and explaining machine learning models and datasets across tabular, text, image, and time-series data. It caters to researchers and data scientists by offering a structured approach to understanding various explainability techniques and their applicability.

How It Works

AIX360 implements a wide array of explainability algorithms, categorized by approach (data vs. model, direct vs. post-hoc, local vs. global) and by data type. It includes methods such as ProtoDash, LIME, SHAP, and CEM, various time-series-specific adaptations, and proxy explainability metrics such as faithfulness and monotonicity. The toolkit is built with extensibility in mind, allowing users to contribute new algorithms and use cases.
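As a concrete illustration of one such workflow, the minimal sketch below selects prototypical rows with ProtoDash. It assumes the ProtodashExplainer interface shown in the repository's tutorials, an explain(X, Y, m) call returning prototype weights and indices; the synthetic NumPy data and the choice of m are purely illustrative.

    # Minimal ProtoDash sketch (illustrative). Assumes the ProtodashExplainer
    # interface from the AIX360 tutorials; verify the exact signature against
    # the repository before relying on it.
    import numpy as np
    from aix360.algorithms.protodash import ProtodashExplainer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))   # dataset to be summarized/explained
    Y = rng.normal(size=(500, 10))   # pool from which prototypes are drawn

    explainer = ProtodashExplainer()
    # Select m=5 prototypes from Y that best represent X, with importance weights.
    weights, proto_idx, _ = explainer.explain(X, Y, m=5)

    print("prototype row indices in Y:", proto_idx)
    print("prototype importance weights:", weights)

The returned indices point into the source pool Y, so the selected prototypes can be inspected directly as rows of that dataset.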

Quick Start & Requirements

  • Installation: conda is recommended for environment management. After cloning the repository, install per-algorithm dependencies with pip install -e .[<algo1>,<algo2>,...], or install directly with pip install -e git+https://github.com/Trusted-AI/AIX360.git#egg=aix360[<algo1>,<algo2>,...]. A post-install sanity check is sketched after this list.
  • Prerequisites: Python 3.6-3.10 (depending on the specific algorithm configuration). cmake may be required for some installations. Docker support is available.
  • Resources: Requires downloading datasets separately.
  • Documentation: Tutorials and example notebooks are available within the repository.
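After installation, a quick sanity check from Python can confirm that the package resolved correctly. The sketch below uses only the standard library plus the installed package; the aix360.metrics import is an assumption based on the documented module layout and exercises the core package without heavy per-algorithm dependencies.

    # Post-install sanity check (illustrative; requires Python 3.8+ for
    # importlib.metadata). Assumes the distribution installs as "aix360".
    from importlib.metadata import version

    print("aix360 version:", version("aix360"))

    # The proxy-metric helpers are assumed to be exposed at aix360.metrics;
    # importing them verifies the core package without extra dependencies.
    from aix360.metrics import faithfulness_metric, monotonicity_metric  # noqa: F401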

Highlighted Details

  • Supports a broad spectrum of explainability algorithms, including novel methods like CoFrNets and Ecertify.
  • Offers guidance and a taxonomy tree to help users select appropriate explanation techniques.
  • Includes interactive experiences and tutorials for different user personas.
  • Designed for extensibility, encouraging community contributions.
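Because the toolkit is designed to be extended with new algorithms, a contribution typically subclasses one of the abstract explainer base classes. The skeleton below assumes the local black-box base class lives at aix360.algorithms.lbbe.LocalBBExplainer with set_params and explain_instance hooks; treat the module path, method names, and the toy explainer itself as assumptions to verify against the repository.

    # Hypothetical skeleton for contributing a new local, black-box explainer.
    # The base-class location and method names are assumptions drawn from the
    # repository's code organization and should be checked before use.
    import numpy as np
    from aix360.algorithms.lbbe import LocalBBExplainer


    class RandomBaselineExplainer(LocalBBExplainer):
        """Toy explainer that assigns random feature scores.

        Exists only to show the shape of a contribution, not a real method.
        """

        def __init__(self, model, seed=0):
            super().__init__()
            self._model = model
            self._rng = np.random.default_rng(seed)

        def set_params(self, *args, **kwargs):
            # Store any algorithm-specific hyperparameters here.
            self._params = kwargs
            return self

        def explain_instance(self, x, **kwargs):
            # Return one "importance" score per feature of the instance x.
            return self._rng.random(np.asarray(x).shape[-1])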

Maintenance & Community

The project encourages community contributions via Slack. Key contributors and authors are listed in the paper citation.

Licensing & Compatibility

The project's license is provided in the LICENSE file, with terms for bundled components in the supplementary license folder. Suitability for commercial use or closed-source linking is not explicitly detailed, so review those license files before redistribution.

Limitations & Caveats

The library is described as still being under development. Some algorithms require specific Python versions (e.g., 3.6 for the contrastive and SHAP configurations, 3.10 for most others), which may necessitate multiple environments or careful dependency management.

Health Check

  • Last Commit: 10 months ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 4 stars in the last 30 days

Explore Similar Projects

  • OmniXAI by salesforce (960 stars): Python library for explainable AI (XAI). Created 3 years ago, updated 1 year ago. Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Chaoyu Yang (founder of Bento), and 1 more.
  • alibi by SeldonIO (3k stars): Python library for ML model inspection and interpretation. Created 6 years ago, updated 2 months ago. Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Travis Addair (cofounder of Predibase), and 4 more.
  • lit by PAIR-code (4k stars): Interactive ML model analysis tool for understanding model behavior. Created 5 years ago, updated 1 month ago. Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Gabriel Almeida (cofounder of Langflow), and 5 more.