xai by EthicalML

Explainability toolbox for ML models

created 6 years ago
1,194 stars

Top 33.5% on sourcepulse

Project Summary

This library provides a comprehensive toolkit for machine learning explainability, targeting ML engineers and domain experts. It aims to empower users to analyze and evaluate data and models, identify biases, and ensure responsible AI development by implementing core principles of explainable ML.

How It Works

The XAI library follows a three-step process for explainable machine learning: data analysis, model evaluation, and production monitoring. It offers tools for identifying data imbalances, balancing classes via upsampling/downsampling, visualizing correlations, and performing balanced train-test splits. For model evaluation, it supports feature importance analysis, metric imbalance visualization across various data slices, confusion matrix plotting, and ROC curve analysis, including group-wise comparisons.
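For illustration, here is a minimal sketch of the data-analysis step. It assumes a pandas DataFrame loaded from a hypothetical census_income.csv with placeholder columns "gender", "ethnicity", and "loan"; the calls (imbalance_plot, balance, correlations, balanced_train_test_split) follow the examples in the project README, but their exact signatures should be verified against the documentation.

    import pandas as pd
    import xai

    # Hypothetical dataset; "gender", "ethnicity" and "loan" are placeholder columns.
    df = pd.read_csv("census_income.csv")

    # Visualise class imbalances across one or more (protected) columns.
    ims = xai.imbalance_plot(df, "gender", "loan")

    # Rebalance by upsampling under-represented group/target combinations.
    bal_df = xai.balance(df, "gender", "loan", upsample=0.8)

    # Plot cross-feature correlations, including categorical columns.
    _ = xai.correlations(df, include_categorical=True)

    # Train/test split that keeps the protected attribute's groups balanced.
    x, y = df.drop("loan", axis=1), df["loan"]
    x_train, y_train, x_test, y_test, train_idx, test_idx = \
        xai.balanced_train_test_split(
            x, y, "gender",
            min_per_group=300, max_per_group=300,
            categorical_cols=["gender", "ethnicity"])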

Quick Start & Requirements

  • Install via pip: pip install xai (a quick import check is sketched after this list).
  • Example Jupyter Notebook available in the Examples folder.
  • Requires Python; the README does not state a minimum supported version.
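
As a quick post-install smoke test (an illustration, not taken from the README), importing the package and listing its public helpers confirms the install worked:

    # Minimal sanity check after `pip install xai`.
    import xai
    print([name for name in dir(xai) if not name.startswith("_")])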

Highlighted Details

  • Tools for identifying and mitigating data imbalances.
  • Visualizations for correlations, feature importance, and metric performance.
  • Group-wise analysis of model performance against protected attributes (see the evaluation sketch after this list).
  • Functions for balancing datasets and performing stratified splits.
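
A companion sketch of the model-evaluation side, continuing from the split in the earlier example. It assumes probabilities holds predicted probabilities from any trained classifier on x_test; the calls (confusion_matrix_plot, roc_plot, metrics_plot) mirror the README examples and, as above, their exact signatures should be checked against the documentation.

    import xai

    # "probabilities" is assumed to come from an upstream classifier's
    # predict_proba on x_test; hard predictions via a 0.5 threshold.
    pred = (probabilities > 0.5).astype(int)

    # Confusion matrix for the overall test set.
    xai.confusion_matrix_plot(y_test, pred)

    # ROC curves: overall, then sliced by a protected attribute.
    _ = xai.roc_plot(y_test, probabilities)
    _ = xai.roc_plot(y_test, probabilities, df=x_test, cross_cols=["gender"])

    # Evaluation metrics compared across intersections of protected attributes.
    _ = xai.metrics_plot(y_test, probabilities,
                         df=x_test, cross_cols=["gender", "ethnicity"])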

Maintenance & Community

  • Maintained by The Institute for Ethical AI & ML.
  • Built around the Institute's 8 principles for Responsible Machine Learning.
  • Documentation available at https://ethicalml.github.io/xai/index.html.
  • The README links to the related community list "Awesome Machine Learning Production & Operations".

Licensing & Compatibility

  • License type not explicitly stated in the README.

Limitations & Caveats

The README does not state a minimum Python version or an explicit license, so both may need further investigation before commercial use or integration into closed-source projects. The last commit was roughly three years ago, so active maintenance should not be assumed.

Health Check

  • Last commit: 3 years ago
  • Responsiveness: 1+ week
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 30 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering and Designing Machine Learning Systems), Elie Bursztein (Cybersecurity Lead at Google DeepMind), and 1 more.

alibi by SeldonIO
Python library for ML model inspection and interpretation
Top 0.1% on sourcepulse · 3k stars · created 6 years ago · updated 1 month ago