This repository serves as a curated list of essential academic papers on Pre-trained Language Models (PLMs), targeting NLP researchers and practitioners. It aims to provide a structured overview of the field's foundational and recent advancements, including a visual representation of paper relationships and links to code and models.
How It Works
The repository organizes papers by topic, such as core PLM research, model compression, model analysis, and fine-tuning techniques. It also includes a survey paper and a diagram illustrating the connections between major PLM works, offering a high-level view of the field's evolution.
Quick Start & Requirements
No installation or execution is required. The repository is a static collection of links to papers, code, and models.
Highlighted Details
Each entry links to the paper and, where available, to open-source code and released models; the relationship diagram doubles as a map of how the major PLM works build on one another.
Maintenance & Community
Maintained by Xiaozhi Wang and Zhengyan Zhang. Suggestions and corrections are welcome.
Licensing & Compatibility
The list itself is freely distributable and usable; licenses for any linked code and models depend on their respective upstream repositories.
Limitations & Caveats
The repository is a curated list: it links to papers rather than hosting them directly. Coverage extends only to roughly late 2020, so more recent advances may be missing.