XAI framework for debugging and explaining ML models
ExplainX is an open-source Python framework that provides explainable AI (XAI) capabilities for data scientists and business users. It simplifies understanding, debugging, and validating machine learning models by offering a unified interface to a range of interpretability techniques, helping teams build trust in their models and deploy AI solutions with more confidence.
How It Works
ExplainX integrates multiple XAI techniques, including SHAP (Kernel and Tree Explainer), Partial Dependence Plots, and What-If Scenario Analysis, into a single framework. Users can apply these techniques with a single line of code after model training. The framework automatically detects model and problem types (classification/regression), streamlining the analysis process and generating interactive dashboards for easy sharing and comprehension of model behavior.
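As a rough sketch of that workflow, the snippet below trains an ordinary scikit-learn classifier and then hands it to ExplainX in one call. The explainx.ai entry point and the model_name argument follow the project's documented quick-start pattern, but the exact signature and the accepted model identifiers are assumptions that should be checked against the installed release.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from explainx import *  # exposes the explainx entry point per the project's quick start

# Train any ordinary scikit-learn model first.
data = load_breast_cancer(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# One call after training: ExplainX is described as detecting the problem type
# (classification here), computing SHAP-based attributions, partial dependence,
# and what-if views, and serving them in an interactive dashboard.
# The "randomforest" identifier is illustrative; check the library's docs for
# the strings it accepts.
explainx.ai(X_test, y_test, model, model_name="randomforest")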
Quick Start & Requirements
pip install explainx
Maintenance & Community
The project is actively seeking co-authors to continue development; the maintainer can be reached at ms8909@nyu.edu. Contribution guidelines are provided and encourage forking the repository and submitting pull requests, with major changes discussed with the maintainers first.
Licensing & Compatibility
Licensed under the MIT License, permitting commercial use and integration with closed-source projects.
Limitations & Caveats
Support for TensorFlow and PyTorch models is listed as "Coming Soon," and Surrogate Decision Trees and Anchors are also planned future additions. The open call for contributors may indicate a small core development team.