Awesome-LLM-Prompt-Optimization by jxzhangjhu

Curated list of LLM prompt optimization papers

created 1 year ago
363 stars

Top 78.7% on sourcepulse

Project Summary

This repository is a curated list of advanced papers on Large Language Model (LLM) prompt optimization and tuning methods published after 2022. It serves as a valuable resource for researchers and practitioners seeking to understand and implement state-of-the-art techniques for improving LLM performance through prompt engineering.

How It Works

The repository categorizes papers into various prompt optimization approaches, including fine-tuning, reinforcement learning, gradient-free methods, in-context learning, and Bayesian optimization. It provides links to papers and, where available, associated GitHub repositories, facilitating direct access to the research and its implementation details.

Quick Start & Requirements

This is a curated list of research papers and does not involve direct code execution or installation. All requirements are specific to the individual papers linked within the repository.

Highlighted Details

  • Covers a wide spectrum of prompt optimization techniques, from evolutionary algorithms and black-box optimization to reinforcement learning and in-context learning strategies.
  • Features prominent papers like "Large Language Models as Optimizers (OPRO)," "APE: Large Language Models Are Human-Level Prompt Engineers," and "DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines" (an OPRO-style loop is sketched after this list).
  • Includes methods that leverage gradient-free approaches, evolutionary algorithms, and even Bayesian optimization for prompt tuning.
  • Highlights research on human preference elicitation and ensemble methods for enhanced prompt engineering.
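For readers unfamiliar with how these optimizers work in practice, the sketch below illustrates the feedback loop behind OPRO ("Large Language Models as Optimizers"): previously tried prompts and their scores are placed into a meta-prompt, and an LLM is asked to propose a better instruction. This is a minimal illustration only; the `generate` and `score_prompt` callables are hypothetical placeholders, and the actual method, meta-prompt wording, and hyperparameters are described in the linked paper and its repository.

```python
# Minimal OPRO-style prompt-optimization loop (sketch, not the official code).
# `generate(meta_prompt)` is a hypothetical LLM call returning one candidate prompt;
# `score_prompt(prompt)` is a hypothetical evaluation on a small dev set (e.g. accuracy).

def optimize_prompt(generate, score_prompt, seed_prompts, num_steps=10, top_k=20):
    # Keep a scored history of prompts; OPRO feeds this trajectory back to the LLM.
    history = [(p, score_prompt(p)) for p in seed_prompts]

    for _ in range(num_steps):
        # Show the optimizer LLM the best prompts so far, in ascending score order.
        top = sorted(history, key=lambda x: x[1])[-top_k:]
        trajectory = "\n".join(f"text: {p}\nscore: {s:.2f}" for p, s in top)
        meta_prompt = (
            "Below are previous task instructions with their accuracy scores.\n"
            f"{trajectory}\n"
            "Write a new instruction that is different from all of the above and "
            "achieves a higher score. Return only the instruction."
        )
        candidate = generate(meta_prompt).strip()
        history.append((candidate, score_prompt(candidate)))

    # Return the best-scoring prompt found during the search.
    return max(history, key=lambda x: x[1])[0]
```

Other entries in the list (e.g. evolutionary or Bayesian methods) replace the propose-and-score step above with their own search strategy, but the evaluate-then-update loop is broadly similar.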

Maintenance & Community

The repository encourages community contributions via pull requests to add or update paper information, though recent activity has been limited (see the Health Check section below).

Licensing & Compatibility

The repository itself is a document collection rather than executable software. Individual papers and their associated codebases carry their own licenses, which must be reviewed separately.

Limitations & Caveats

This repository is a static collection of links and does not provide executable code or a unified framework for prompt optimization. Users must refer to individual papers for implementation details and potential compatibility issues.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 38 stars in the last 90 days
