Curated list of LMaaS research papers
This repository curates academic papers on Language-Model-as-a-Service (LMaaS), focusing on adapting large language models (LLMs) to downstream tasks without direct access to model parameters or gradients. It serves NLP researchers and practitioners interested in efficient LLM utilization for diverse applications, offering a structured overview of techniques like prompting, in-context learning, black-box optimization, feature extraction, and data generation.
How It Works
The project categorizes papers by how they adapt frozen LLMs: Text Prompting (task-specific prompts without labeled samples), In-Context Learning (few-shot examples placed directly in the prompt), Black-Box Optimization (tuning small task-specific parameters using only output probabilities), Feature-based Learning (using LLMs as feature extractors), and Data Generation (using LLMs to create training data for smaller models). These approaches are deployment-efficient and tuning-efficient because they avoid fine-tuning or maintaining a separate LLM copy for each task.
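To make the in-context learning category concrete, below is a minimal sketch of few-shot prompt construction against a black-box text-in/text-out endpoint. The query_llm function and the sentiment task are hypothetical placeholders for illustration, not an API or method from the listed papers.

```python
# Minimal sketch of in-context learning with a frozen, black-box LLM.
# No parameters or gradients are touched; adaptation happens entirely
# through the demonstrations placed in the prompt.

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a remote LMaaS call; wire up a real client here."""
    raise NotImplementedError("Replace with your provider's text-completion API.")

def build_few_shot_prompt(demonstrations, query):
    """Concatenate labeled demonstrations and the unlabeled query into one prompt."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in demonstrations]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

demos = [
    ("A moving and beautifully shot film.", "positive"),
    ("Two hours I will never get back.", "negative"),
]
prompt = build_few_shot_prompt(demos, "The plot was thin but the acting saved it.")
# prediction = query_llm(prompt)  # the frozen LLM completes the label in-context
```

The same text-only interface underlies the other categories; for example, black-box optimization methods would tune a small prompt or projection on top of the returned output probabilities rather than editing the prompt by hand.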
Quick Start & Requirements
This is a curated list of papers; there are no direct installation or execution requirements. The primary interaction is through browsing the README and potentially contributing via pull requests.
Maintenance & Community
The repository is primarily maintained by Tianxiang Sun. Contributions are encouraged via pull requests, with thanks extended to specific contributors and paper recommenders.
Licensing & Compatibility
The repository itself does not specify a license. The linked papers are subject to their respective licenses and publication terms.
Limitations & Caveats
The list is a curated collection and does not provide direct access to any models or tools. The scope is limited to papers fitting the LMaaS definition, excluding works that require full model parameter access for adaptation.