ML interpretability Python package for glassbox models and blackbox explanations
Top 7.8% on sourcepulse
InterpretML is an open-source Python package designed to provide a unified framework for machine learning interpretability. It enables users to train inherently interpretable "glassbox" models and explain complex "blackbox" models, addressing needs in model debugging, feature engineering, fairness assessment, and regulatory compliance. The primary audience includes data scientists and researchers working with high-risk applications where understanding model behavior is critical.
How It Works
The core of InterpretML is the Explainable Boosting Machine (EBM), a novel approach that combines modern machine learning techniques like bagging and gradient boosting with traditional Generalized Additive Models (GAMs). This hybrid methodology allows EBMs to achieve accuracy comparable to state-of-the-art blackbox models (e.g., Random Forests, XGBoost) while providing exact, human-editable explanations. InterpretML also supports other interpretable models and blackbox explanation techniques like SHAP and LIME.
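The additive structure of a GAM is what makes EBM explanations exact: the prediction is an intercept plus a sum of independent per-feature shape functions, so each feature's contribution can be read off (or edited) directly. A minimal structural sketch, using hypothetical hand-written shape functions rather than anything EBM actually learns:

```python
# Sketch of a Generalized Additive Model's structure:
# prediction = intercept + sum of independent per-feature shape functions.
# The shape functions below are hypothetical; an EBM learns its own via
# bagging and gradient boosting over one feature at a time.

def f_age(age):
    # Hypothetical contribution of an "age" feature.
    return 0.02 * (age - 40)

def f_income(income):
    # Hypothetical contribution of an "income" feature.
    return 0.5 if income > 50_000 else -0.1

INTERCEPT = 0.3

def predict(age, income):
    contributions = {"age": f_age(age), "income": f_income(income)}
    score = INTERCEPT + sum(contributions.values())
    # The explanation IS the decomposition: no post-hoc approximation needed.
    return score, contributions

score, contribs = predict(age=50, income=60_000)
# score = 0.3 + 0.2 + 0.5 = 1.0
```

Because each term depends on a single feature, editing one shape function (e.g., capping `f_income`) changes the model's behavior in a fully predictable way, which is what "human-editable" means here.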
Quick Start & Requirements
pip install interpret
or conda install -c conda-forge interpret
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats