XAI paper collection for understanding/interpreting/visualizing ML models
This repository serves as a curated bibliography of academic papers on Explainable Artificial Intelligence (XAI), targeting researchers and practitioners in machine learning and AI ethics. It aims to consolidate key efforts in understanding, interpreting, and visualizing pre-trained ML models, providing a structured overview of the field.
How It Works
The repository categorizes XAI research into several key areas: explaining model inner workings (e.g., feature visualization, network inversion), explaining model decisions (e.g., attribution maps, attention mechanisms, perturbation-based methods), learning to explain (e.g., training models to be interpretable), counterfactual explanations, and real-world applications of XAI. For each technique it links to papers, code, and sometimes demos.
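To make the "perturbation-based methods" category concrete, here is a minimal, self-contained sketch of occlusion-style attribution: occlude each region of the input, re-run the model, and treat the drop in score as that region's importance. The function and the toy model below are illustrative assumptions, not code from any linked paper.

```python
import numpy as np

def occlusion_map(model, x, patch=2, baseline=0.0):
    """Perturbation-based attribution: zero out each patch of the input
    and record the drop in the model's score. Larger drops indicate
    regions that mattered more to the decision."""
    base_score = model(x)
    heat = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = x.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i:i + patch, j:j + patch] = base_score - model(occluded)
    return heat

# Toy "model" (hypothetical): its score is just the sum of the
# top-left quadrant, so attribution should concentrate there.
def toy_model(x):
    return float(x[:2, :2].sum())

x = np.ones((4, 4))
heat = occlusion_map(toy_model, x, patch=2)
# Occluding the top-left patch removes the entire score (drop of 4.0);
# occluding any other patch leaves the score unchanged (drop of 0.0).
```

Real perturbation-based methods covered in the linked papers (e.g., occlusion sensitivity, RISE, meaningful perturbations) refine this idea with learned or randomized masks and work on trained networks rather than toy scoring functions.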
Quick Start & Requirements
This is a bibliography, not a software library, so no installation or execution is required. The only prerequisite is access to the linked papers and, where provided, the associated code repositories.
Maintenance & Community
The repository appears to be a personal collection; the most recent papers included date to 2023, which suggests when it was last updated. The README lists no community channels or contributors.
Licensing & Compatibility
The repository itself is a collection of links to academic papers and their associated code. The licensing and compatibility of the underlying research and code depend entirely on the original sources.
Limitations & Caveats
This is a bibliography and does not provide any executable code or tools. The organization is based on the structure of the XAI field, which is rapidly evolving. Some links may become outdated, and the coverage, while extensive, may not be exhaustive.