DALEX by ModelOriented

XAI/Interpretable ML SDK for model exploration and explanation

created 7 years ago
1,433 stars

Top 29.1% on sourcepulse

View on GitHub
1 Expert Loves This Project
Project Summary

DALEX provides a model-agnostic framework for exploring and explaining the behavior of complex predictive models, targeting data scientists and ML engineers who need to understand and validate their black-box models. It aims to increase trust and adoption of ML by offering tools for local and global model explanations.

How It Works

DALEX wraps any predictive model in an "explainer" object that standardizes access to its prediction function and data. The explainer then drives local explanations of individual predictions (break-down plots, SHAP-style attributions, ceteris-paribus profiles) and global explanations of overall behavior (permutation variable importance, partial-dependence profiles), so the same explanation methods can be applied consistently across diverse modeling libraries. A minimal Python sketch of the workflow appears below.
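
A minimal sketch of that workflow, assuming the dalex Python API (Explainer, model_parts, model_profile, predict_parts); the scikit-learn model and dataset are stand-ins for any supported framework:

```python
# A minimal sketch, assuming the dalex Python API; the scikit-learn model
# and dataset below are stand-ins for any supported modeling library.
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Wrap the fitted model in an explainer object
explainer = dx.Explainer(model, X, y, label="random_forest")

# Global explanations: permutation importance and aggregated profiles
importance = explainer.model_parts()
profiles = explainer.model_profile()

# Local explanation for a single prediction
breakdown = explainer.predict_parts(X.iloc[[0]], type="break_down")

# Each result object exposes a plot() method (Plotly figures)
importance.plot()
```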

Quick Start & Requirements
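
The README's quick-start details are not reproduced in this summary, but given the distribution channels listed under Licensing & Compatibility below, a typical setup is `pip install dalex` (or the conda-forge package) for Python and `install.packages("DALEX")` for R; check the repository for the currently supported Python and R versions.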

Highlighted Details

  • Model-agnostic: Works with various ML frameworks including scikit-learn, Keras, H2O, tidymodels, XGBoost, mlr, and mlr3 via the DALEXtra extension.
  • Comprehensive explanation methods: Supports techniques like SHAP, LIME, Partial Dependence Plots, and more.
  • Focus on Responsible AI: Includes a fairness-analysis module and supports the three needs the README lists for predictions: justification, speculation ("what if" analysis), and validation.
  • Interactive dashboard: Offers an "Arena" for interactive model exploration. A brief sketch of the fairness and Arena features follows this list.
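
The sketch below illustrates the fairness module and Arena dashboard; the call names (model_fairness, fairness_check, Arena.push_model / push_observations / run_server) follow the dalex documentation as best understood here, and the toy credit data with a protected "gender" column is purely hypothetical.

```python
# Rough sketch only: the fairness/Arena calls are assumptions based on the
# dalex documentation, and the toy data is invented for illustration.
import dalex as dx
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "age":      [25, 40, 35, 50, 23, 61, 33, 47],
    "income":   [30, 70, 55, 90, 28, 100, 45, 80],
    "gender":   ["f", "m", "f", "m", "f", "m", "f", "m"],
    "approved": [0, 1, 1, 1, 0, 1, 0, 1],
})
X = pd.get_dummies(df.drop(columns="approved"))
y = df["approved"]

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = dx.Explainer(model, X, y, label="credit_model")

# Fairness analysis against the protected attribute
fobject = explainer.model_fairness(protected=df["gender"], privileged="m")
fobject.fairness_check()  # reports whether parity metrics stay within bounds

# Interactive Arena dashboard: push the explainer and observations, then serve
arena = dx.Arena()
arena.push_model(explainer)
arena.push_observations(X)
arena.run_server()
```

If these calls match the installed version, run_server exposes a local Arena dashboard in which several explainers and observations can be compared in the browser.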

Maintenance & Community

The project is actively maintained, is part of the DrWhy.AI family of packages, and has academic backing from its authors. The README does not link to community channels such as Discord or Slack.

Licensing & Compatibility

The R package is available on CRAN, and the Python package on PyPI and conda-forge. The project is documented in two JMLR papers. Licensing details are not stated in this summary; check the repository's LICENSE file before assuming terms for commercial use.

Limitations & Caveats

While the framework is model-agnostic, the computational cost and fidelity of explanations vary with the underlying model's complexity and the chosen explainer. The README stresses that such explanations are approximations of model behavior rather than exact descriptions.

Health Check

  • Last commit: 6 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0

Star History

  • 17 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Elie Bursztein (Cybersecurity Lead at Google DeepMind), and 1 more.

alibi by SeldonIO: Python library for ML model inspection and interpretation

  • 3k stars, top 0.1% on sourcepulse
  • created 6 years ago, updated 1 month ago