This repository curates trustworthy deep learning research papers, focusing on areas such as OOD generalization, adversarial attacks and defenses, privacy, fairness, and interpretability. It serves researchers and practitioners seeking to build reliable and secure AI systems, offering a daily-updated list of arXiv publications and a collection of related resources.
How It Works
The list is organized by topic, with each entry providing the paper title, authors, publication venue, keywords, and a concise digest of its findings. It covers emerging research areas in trustworthy AI and is updated daily with new arXiv submissions. This structure supports both quick scanning and deeper reading within a specific research direction.
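As an illustration, an entry carrying the fields above (title, authors, venue, keywords, digest) might look like the following. This exact layout is an assumption, not the repository's authoritative template; consult the formatting guidelines in the repository before submitting.

```markdown
- <Paper Title>. [[paper]](https://arxiv.org/abs/XXXX.XXXXX)
  - Author One, Author Two. *Venue'24*
  - Key Word: ood-generalization; robustness.
  - <details><summary>Digest</summary>
    One- or two-sentence summary of the paper's key findings and contributions.
    </details>
```

The collapsible `<details>` block keeps digests out of the way during quick scanning while remaining available for in-depth reading.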
Quick Start & Requirements
- Access the complete collection via the "Full List" link in the repository README.
- No software installation is required; the repository is a curated list of research papers.
Highlighted Details
- Covers a broad spectrum of trustworthy AI topics including OOD generalization, adversarial attacks/defenses, privacy, fairness, and interpretability.
- Includes links to related "Awesome" lists, toolboxes, seminars, workshops, tutorials, talks, blogs, and other resources for deeper engagement.
- Features digests that summarize key findings and contributions of each paper.
- Daily updates from arXiv ensure the list remains current with the latest research.
Maintenance & Community
- Maintained by MinghuiChen43.
- Open to contributions via issue submission or direct contact.
- Formatting guidelines are provided for new submissions.
Licensing & Compatibility
- The repository's own license is not stated in this summary; curated lists of this kind commonly use permissive licenses (e.g., MIT, CC0), but check the repository for specifics. Licenses of the individual papers are not detailed.
- No compatibility constraints: papers are linked via standard arXiv URLs and can be accessed from any browser.
Limitations & Caveats
- The preview README includes only papers from the last year; consult the full list for comprehensive coverage.
- Digests are summaries and may not capture all nuances of the original papers.