Curated list of LLM prompt optimization papers
This repository is a curated list of advanced papers on Large Language Model (LLM) prompt optimization and tuning methods published after 2022. It serves as a valuable resource for researchers and practitioners seeking to understand and implement state-of-the-art techniques for improving LLM performance through prompt engineering.
How It Works
The repository categorizes papers into various prompt optimization approaches, including fine-tuning, reinforcement learning, gradient-free methods, in-context learning, and Bayesian optimization. It provides links to papers and, where available, associated GitHub repositories, facilitating direct access to the research and its implementation details.
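To make the gradient-free category concrete, here is a minimal, purely illustrative sketch (not taken from any listed paper) of hill climbing over prompt variants. All names, including score_prompt and mutate, and the stand-in scoring heuristic are assumptions; a real implementation would score each candidate by running an LLM on a small dev set.

```python
import random

def score_prompt(prompt: str) -> float:
    """Placeholder scorer. In practice this would prepend `prompt` to each
    dev-set input, query an LLM, and return accuracy on the dev set."""
    # Stand-in heuristic so the sketch runs without an API key:
    # reward step-by-step phrasing, lightly penalize length.
    score = 1.0 if "step by step" in prompt.lower() else 0.0
    return score - 0.01 * len(prompt)

def mutate(prompt: str) -> str:
    """Propose a nearby candidate by appending a short instruction."""
    edits = [
        " Think step by step.",
        " Answer with only the final result.",
        " Double-check your reasoning before answering.",
    ]
    return prompt + random.choice(edits)

def random_search(seed_prompt: str, iterations: int = 50) -> str:
    """Gradient-free hill climbing: keep a candidate only if it scores higher."""
    best, best_score = seed_prompt, score_prompt(seed_prompt)
    for _ in range(iterations):
        candidate = mutate(best)
        candidate_score = score_prompt(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best

if __name__ == "__main__":
    print(random_search("You are a careful math assistant."))
```

The papers in the gradient-free and reinforcement-learning categories replace this toy scorer and mutation step with LLM-based evaluation and more principled search strategies.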
Quick Start & Requirements
This is a curated list of research papers; there is nothing to install or run. Any requirements are specific to the individual papers and codebases linked within the repository.
Maintenance & Community
The repository encourages community contributions via pull requests to add or update paper entries, which is the primary mechanism for maintaining and expanding the list.
Licensing & Compatibility
The repository itself contains no executable code, so licensing concerns apply to the linked works: individual papers and their associated codebases carry their own licenses, which must be reviewed separately.
Limitations & Caveats
This repository is a static collection of links and does not provide executable code or a unified framework for prompt optimization. Users must refer to individual papers for implementation details and potential compatibility issues.
The repository was last updated about a year ago and currently shows no recent activity.