DiCE by interpretml

Counterfactual explanation SDK for ML models

created 6 years ago
1,430 stars

Top 29.1% on sourcepulse

Project Summary

DiCE generates diverse counterfactual explanations for any machine learning model, letting users explore "what-if" scenarios and identify actionable recourse. It is designed for ML model developers, decision subjects, and decision-makers who need interpretable explanations that are truthful to the model.

How It Works

DiCE frames counterfactual explanation generation as an optimization problem, similar to finding adversarial examples. It aims to find feature-perturbed versions of an input that alter the model's prediction while ensuring diversity and feasibility of the changes. The library supports tunable parameters for diversity and proximity, as well as constraints on feature modifications to ensure practical relevance.
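
The objective from the underlying DiCE paper balances three terms: a prediction loss pushing each counterfactual toward the desired outcome, a proximity penalty keeping edits small, and a determinantal point process (DPP) term rewarding diversity. The sketch below is illustrative only; the function and weight names are hypothetical stand-ins, not DiCE's internals.

    import numpy as np

    def dice_style_loss(cfs, x, predict_fn, target=1.0,
                        proximity_weight=0.5, diversity_weight=1.0):
        """Illustrative combined loss over k candidate counterfactuals.

        cfs: (k, d) array of candidates; x: (d,) original input;
        predict_fn: model scoring function returning P(desired class).
        """
        # Prediction loss: push each counterfactual's score toward the target class.
        yloss = np.mean([(predict_fn(c) - target) ** 2 for c in cfs])
        # Proximity: mean L1 distance from the original input (prefer small edits).
        proximity = np.mean([np.abs(c - x).sum() for c in cfs])
        # Diversity: determinant of a DPP-style kernel, larger when candidates differ.
        kernel = np.array([[1.0 / (1.0 + np.abs(ci - cj).sum()) for cj in cfs]
                           for ci in cfs])
        diversity = np.linalg.det(kernel)
        return yloss + proximity_weight * proximity - diversity_weight * diversity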

Quick Start & Requirements

  • Install via pip: pip install dice-ml or conda install -c conda-forge dice-ml.
  • For the latest development version, clone the repository and run pip install -e . from its root.
  • Requires Python 3+.
  • Supports TensorFlow 2, scikit-learn, and PyTorch models.
  • Documentation and example notebooks are available on the project site; a minimal end-to-end sketch follows this list.
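
A minimal sketch based on the getting-started example in the docs, using the demo adult-income dataset bundled with the library; treat the exact column names and parameter values as illustrative.

    import dice_ml
    from dice_ml.utils import helpers  # ships a demo "adult income" dataset
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    dataset = helpers.load_adult_income_dataset()
    train, test = train_test_split(dataset, test_size=0.2, random_state=0,
                                   stratify=dataset["income"])
    x_train, x_test = train.drop("income", axis=1), test.drop("income", axis=1)

    # Tell DiCE which features are continuous and which column is the outcome.
    d = dice_ml.Data(dataframe=train,
                     continuous_features=["age", "hours_per_week"],
                     outcome_name="income")

    # Any sklearn estimator works; one-hot encode the categorical columns.
    categorical = x_train.columns.difference(["age", "hours_per_week"])
    clf = Pipeline([
        ("encode", ColumnTransformer(
            [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
            remainder="passthrough")),
        ("model", RandomForestClassifier()),
    ]).fit(x_train, train["income"])

    # Wrap the model and generate counterfactuals with the model-agnostic
    # "random" sampling method (no gradients or training-data access required).
    m = dice_ml.Model(model=clf, backend="sklearn")
    exp = dice_ml.Dice(d, m, method="random")
    cfs = exp.generate_counterfactuals(x_test[0:1], total_CFs=3,
                                       desired_class="opposite")
    cfs.visualize_as_dataframe(show_only_changes=True)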

Highlighted Details

  • Generates diverse sets of counterfactual explanations.
  • Supports feature constraints (e.g., features_to_vary, permitted_range).
  • Allows tuning of proximity and diversity weights (illustrated after this list).
  • Can work with pre-trained models without requiring the original training data.
  • Offers model-agnostic and gradient-based explanation methods.
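
Continuing the quick-start sketch above, the constraints and weights plug directly into generate_counterfactuals. The genetic method exposes the proximity/diversity weights; the values below are illustrative, not recommendations.

    # Switch to the genetic method, which accepts proximity/diversity weights.
    exp_genetic = dice_ml.Dice(d, m, method="genetic")
    cfs = exp_genetic.generate_counterfactuals(
        x_test[0:1],
        total_CFs=4,
        desired_class="opposite",
        features_to_vary=["age", "hours_per_week"],  # hold every other feature fixed
        permitted_range={"age": [20, 60]},           # keep suggested changes plausible
        proximity_weight=0.2,   # higher => counterfactuals stay closer to the input
        diversity_weight=5.0,   # higher => counterfactuals differ more from each other
    )
    cfs.visualize_as_dataframe(show_only_changes=True)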

Maintenance & Community

  • Active development with contributions from Microsoft Research.
  • Community engagement via GitHub issues and pull requests.
  • Follows the Microsoft Open Source Code of Conduct.

Licensing & Compatibility

  • MIT License.
  • Permissive for commercial use and integration with closed-source applications.

Limitations & Caveats

  • Currently lacks support for generating explanations in natural language.
  • Planned features include evaluating feature attribution methods and better feasibility constraints.

Health Check

  • Last commit: 2 weeks ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 3
  • Issues (30d): 0

Star History

  • 38 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering, Designing Machine Learning Systems), Elie Bursztein (Cybersecurity Lead at Google DeepMind), and 1 more.

alibi by SeldonIO

Top 0.1% on sourcepulse, 3k stars
Python library for ML model inspection and interpretation
created 6 years ago, updated 1 month ago

Starred by Chip Huyen (author of AI Engineering, Designing Machine Learning Systems), Jeff Hammerbacher (Cofounder of Cloudera), and 1 more.

lit by PAIR-code

Top 0.0% on sourcepulse, 4k stars
Interactive ML model analysis tool for understanding model behavior
created 5 years ago, updated 5 days ago