DALEX by ModelOriented

XAI/Interpretable ML SDK for model exploration and explanation

Created 7 years ago
1,440 stars

Top 28.4% on SourcePulse

View on GitHub
Project Summary

DALEX provides a model-agnostic framework for exploring and explaining the behavior of complex predictive models, targeting data scientists and ML engineers who need to understand and validate their black-box models. It aims to increase trust and adoption of ML by offering tools for local and global model explanations.

How It Works

DALEX wraps any predictive model in an "explainer" object. This object can then be fed to various local and global explanation methods, such as SHAP or LIME, to analyze variable importance, partial dependence, and individual predictions. This approach lets users apply consistent explanation methods across diverse modeling libraries.
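For instance, the Python workflow looks roughly like this. This is a minimal sketch following the method names in the dalex documentation (Explainer, model_parts, model_profile, predict_parts); the dataset and the numeric-column selection are choices made here just to keep the example short, so check the docs for your installed version:

```python
import dalex as dx
from sklearn.ensemble import RandomForestClassifier

# example dataset bundled with dalex
titanic = dx.datasets.load_titanic()
X = titanic[["age", "fare", "sibsp", "parch"]]  # numeric columns only, for brevity
y = titanic.survived

model = RandomForestClassifier(n_estimators=100).fit(X, y)

# wrap the fitted model; the explainer is the single entry point for all methods
exp = dx.Explainer(model, X, y, label="titanic_rf")

# global explanations
exp.model_parts().plot()     # permutation-based variable importance
exp.model_profile().plot()   # partial-dependence profiles

# local explanation for a single prediction
exp.predict_parts(X.iloc[[0]], type="shap").plot()
```

The same explainer object drives every method, which is what makes swapping the underlying model (scikit-learn, XGBoost, Keras, ...) transparent to the explanation code.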

Quick Start & Requirements
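Both language ecosystems are covered: the R package installs from CRAN with `install.packages("DALEX")`, and the Python package from PyPI with `pip install dalex` or from conda-forge with `conda install -c conda-forge dalex` (package names as published on those registries). See the repository for supported R and Python versions.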

Highlighted Details

  • Model-agnostic: Works with various ML frameworks including scikit-learn, Keras, H2O, tidymodels, XGBoost, mlr, and mlr3 via the DALEXtra extension.
  • Comprehensive explanation methods: Supports techniques like SHAP, LIME, Partial Dependence Plots, and more.
  • Focus on Responsible AI: Includes modules for fairness analysis and supports the "three requirements" for predictive models (justification, speculation, validation).
  • Interactive dashboard: Offers an "Arena" for interactive model exploration (see the sketch after this list).
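
A hedged sketch of the fairness and Arena entry points, assuming the method names from the dalex Python documentation (model_fairness, dx.Arena) and reusing titanic and exp from the workflow example above; verify against the docs for your version:

```python
import dalex as dx

# fairness audit: compare parity metrics across a protected attribute
# (assumes `exp` is a fitted dx.Explainer and `titanic` holds the raw features)
mf = exp.model_fairness(protected=titanic.gender, privileged="male")
mf.fairness_check()   # reports which fairness criteria pass or fail
mf.plot()

# interactive Arena dashboard for side-by-side model exploration
arena = dx.Arena()
arena.push_model(exp)
arena.push_observations(titanic)
arena.run_server()    # serves a local Arena session in the browser
```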

Maintenance & Community

The project is actively maintained, with significant contributions and academic backing, and is part of the DrWhy.AI universe. The README does not link to community channels such as Discord or Slack.

Licensing & Compatibility

The R package is available on CRAN, and the Python package on PyPI and conda-forge. The project cites two JMLR papers, indicating academic rigor. The license is not stated in this summary; check the repository's LICENSE file before assuming commercial-use terms, since CRAN or PyPI distribution does not by itself imply a permissive license.

Limitations & Caveats

While model-agnostic, the effectiveness and computational cost of explanations vary significantly with the underlying model's complexity and the chosen explainer. The README itself stresses that some explainers are approximations, so their output should be read as indicative rather than exact.

Health Check

Last Commit: 1 month ago
Responsiveness: 1 day
Pull Requests (30d): 0
Issues (30d): 0
Star History: 6 stars in the last 30 days

Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Chaoyu Yang (Founder of Bento), and 1 more.

Explore Similar Projects

OmniXAI by salesforce

0% · 949 stars
Python library for explainable AI (XAI)
Created 3 years ago · Updated 1 year ago
Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Travis Addair (Cofounder of Predibase), and 4 more.

alibi by SeldonIO

0.1% · 3k stars
Python library for ML model inspection and interpretation
Created 6 years ago · Updated 15 hours ago
Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Gabriel Almeida (Cofounder of Langflow), and 5 more.

lit by PAIR-code

0.1% · 4k stars
Interactive ML model analysis tool for understanding model behavior
Created 5 years ago · Updated 3 weeks ago