Curated list of resources for interpretable ML
This repository is an "awesome list" curating resources on interpretable machine learning, covering techniques for model introspection, simplification, visualization, and explanation. It targets researchers, engineers, and practitioners seeking to understand and explain complex ML models, offering a comprehensive overview of foundational concepts, algorithms, and tools.
How It Works
The list categorizes resources into key areas: Interpretable Models (e.g., decision trees, linear models), Feature Importance (e.g., Random Forest, SHAP), Feature Selection, Model Explanations (both model-agnostic and model-specific, particularly for neural networks), Extracting Interpretable Models, and Model Visualization. It primarily links to academic papers, code repositories, and relevant tutorials, providing a structured knowledge base.
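To make these categories concrete, the following is a minimal sketch (not taken from the list itself) showing two of them side by side: global feature importance for a random forest and local explanations via SHAP values. It assumes scikit-learn and the shap package are installed; the dataset and parameter choices are illustrative placeholders.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model: a random forest regressor on the diabetes dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global feature importance: impurity-based and permutation-based scores.
impurity_importance = model.feature_importances_
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(dict(zip(X.columns, perm.importances_mean.round(3))))

# Local explanations: SHAP values attribute each individual prediction to the features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)   # shape: (n_samples, n_features)
shap.summary_plot(shap_values, X_test)

The curated papers and repositories in the list cover these techniques (and many others, such as LIME and model-specific methods for neural networks) in far more depth; this sketch only illustrates where each category fits.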
Maintenance & Community
The list is maintained by lopusz and references prominent researchers in the field, such as Christoph Molnar and Cynthia Rudin, as well as widely used packages like SHAP and LIME. Links to related "awesome" lists and community resources are also provided.
Licensing & Compatibility
The repository itself is typically distributed under permissive terms (e.g., MIT, as is common for "awesome" lists), but each linked resource (paper or software package) carries its own license. Suitability for commercial use therefore depends on the license of the individual linked project, not on the list itself.
Limitations & Caveats
This is a curated list of resources, not a software library itself. Users must consult the individual linked papers and software projects for specific installation, usage, and licensing details. The content reflects the state of research and tools at the time of its last update.