Paper list for fairness in NLP and multimodal models
This repository is a curated list of academic papers on fairness, accountability, transparency, and ethics in Natural Language Processing (NLP) and multimodal models. It serves as a reference for researchers and practitioners interested in understanding, detecting, and mitigating biases in AI systems.
How It Works
The project organizes papers into thematic categories such as Surveys, Social Impact of Biases, Data/Models/Metrics, Bias Amplification, Detection, Mitigation, and specific NLP tasks like Generation and Machine Translation. It also includes sections on multimodal settings, tutorials, and relevant conferences, providing a structured overview of the research landscape.
Quick Start & Requirements
This is a curated list of papers, not a software package. No installation or execution is required.
Maintenance & Community
The repository is maintained by Christina Chance, Yixin Wan, Jieyu Zhao, Emily Sheng, Sunipa Dev, Yu (Hope) Hou, Nanyun (Violet) Peng, and Kai-Wei Chang. Contributions are welcome via pull request or email.
Licensing & Compatibility
The list itself is provided for informational and research purposes; each listed paper remains subject to the license of its original publication.
Limitations & Caveats
The authors acknowledge that the list may not be exhaustive and encourage community contributions to improve its completeness.