XAI resource list for explainable AI/ML research
This repository is a curated collection of research materials on Explainable Artificial Intelligence (XAI), targeting researchers and practitioners in the field. It aims to organize and provide access to frontier publications, surveys, benchmarks, and tools related to making AI models more interpretable and trustworthy.
How It Works
The project categorizes XAI publications along common taxonomies: transparent model design; post-hoc explanations, covering model-level explanation, model inspection, outcome explanation, neuron importance, and example-based methods such as counterfactuals, influential instances, and prototypes; and evaluation methods. It also lists numerous Python libraries and related repositories for XAI implementation and research.
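To give a concrete sense of the post-hoc, outcome-level explanations the taxonomy refers to, the sketch below uses SHAP, one example of the kind of Python XAI library this list collects. The choice of SHAP, the scikit-learn dataset, and the model are illustrative assumptions on the editor's part, not tools specifically endorsed by the repository.

```python
# Minimal sketch: post-hoc outcome explanation with SHAP (illustrative choice,
# not a recommendation from the repository itself).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train any black-box model on a standard dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each individual prediction (the "outcome") to the
# input features, which is the local, post-hoc style of explanation described
# in the taxonomy above.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # explain the first five predictions
print(shap_values)
```

Other branches of the taxonomy (counterfactuals, prototypes, neuron importance, and so on) are served by different libraries linked in the list and follow different APIs.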
Quick Start & Requirements
This is a curated list of resources, not a runnable software package, so no installation or execution commands are provided. Users should follow the links to individual papers or explore the linked GitHub repositories for specific tools.
Maintenance & Community
The repository is maintained by wangyongjie-ntu. The README encourages community contributions to reorganize the collection, refine the taxonomy, and add new XAI works, and provides the maintainer's contact information for discussion and contributions.
Licensing & Compatibility
The repository itself is a collection of links and information; it does not appear to have a specific license. The licensing of individual linked papers and software libraries would need to be checked separately.
Limitations & Caveats
Because this is a curated list, its comprehensiveness and currency depend on community contributions. The organization of papers follows existing surveys, and the README explicitly requests help in refining the taxonomy.