Paper list for prompt-based tuning of pre-trained language models
This repository serves as a curated list of essential academic papers on prompt-based tuning for pre-trained language models. It aims to provide researchers and practitioners with a structured overview of the field, covering foundational concepts, advancements, and applications.
How It Works
The project categorizes papers into logical sections such as "Pilot Work," "Basics," "Analysis," "Improvements," and "Specializations." This organization allows users to navigate the evolution and diverse aspects of prompt learning, from initial explorations to sophisticated techniques and task-specific adaptations.
Quick Start & Requirements
This repository is a static list of papers and does not require installation or execution. It serves as a reference guide.
Maintenance & Community
The paper list is primarily maintained by Ning Ding and Shengding Hu. The project encourages community contributions via pull requests to update paper information.
Licensing & Compatibility
The repository itself does not specify a license. The academic papers it links to each carry their own licensing and distribution terms, so suitability for commercial use depends on the license of each individual paper.
Limitations & Caveats
This is a curated list and does not provide code or implementations for the papers discussed. The scope is limited to prompt-based tuning, excluding other parameter-efficient fine-tuning methods unless directly related.
The repository was last updated roughly two years ago and is currently inactive.