Automatic prompt engineer for LLM instruction generation
This repository provides the Automatic Prompt Engineer (APE) framework, designed to automate the creation and selection of effective prompts for Large Language Models (LLMs). It targets researchers and practitioners seeking to improve LLM performance across various NLP tasks by replacing manual prompt engineering with an LLM-driven, search-based approach. APE aims to achieve human-level or superior prompt quality with reduced human effort.
How It Works
APE treats prompt generation as a program synthesis problem. It uses an LLM to generate candidate prompts based on a specified template and a set of demonstrations. These candidate prompts are then evaluated by another LLM on a given dataset, using a defined evaluation template. The framework employs search strategies, including Upper Confidence Bound (UCB) for efficiency, to identify the best-performing prompts that maximize a scoring function, thereby optimizing LLM task performance.
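The sketch below illustrates this generate-score-select loop in a minimal form. It is not the repository's implementation: the model names, prompt wording, and exact-match scorer are illustrative assumptions, and only a plain argmax over candidates is shown.

```python
# Illustrative sketch of an APE-style generate-score-select loop (not the
# repository's actual code). Assumes the modern `openai` Python client;
# model names and the exact-match scorer are placeholder choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_candidates(demos, n_candidates=5, model="gpt-4o-mini"):
    """Ask an LLM to propose instructions that explain the input->output demos."""
    demo_text = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    prompt = (
        "I gave a friend an instruction. Based on the instruction they produced "
        f"the following input-output pairs:\n\n{demo_text}\n\n"
        "The instruction was:"
    )
    candidates = []
    for _ in range(n_candidates):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.9,  # sample diverse candidate instructions
        )
        candidates.append(resp.choices[0].message.content.strip())
    return candidates


def score_candidate(instruction, eval_set, model="gpt-4o-mini"):
    """Score an instruction by exact-match accuracy on held-out pairs."""
    correct = 0
    for x, y in eval_set:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": f"{instruction}\n\nInput: {x}\nOutput:"}],
            temperature=0.0,  # deterministic evaluation
        )
        correct += int(resp.choices[0].message.content.strip().lower() == y.lower())
    return correct / len(eval_set)


demos = [("sane", "insane"), ("direct", "indirect"), ("formal", "informal")]
eval_set = [("polite", "impolite"), ("legal", "illegal")]

candidates = generate_candidates(demos)
best = max(candidates, key=lambda c: score_candidate(c, eval_set))
print("Best instruction:", best)
```

The actual framework goes further than this exhaustive scoring: as noted above, it can use UCB-style bandit evaluation to allocate the scoring budget toward promising candidates rather than evaluating every candidate on the full dataset.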
Quick Start & Requirements
```sh
pip install -e .
export OPENAI_API_KEY=YOUR_KEY
```
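A minimal usage sketch follows, based on the antonym example in the project README. The `ape.simple_ape` entry point, its keyword arguments, and the `[PROMPT]`/`[INPUT]`/`[OUTPUT]` placeholders are recalled from that README and should be verified against the current repository before use.

```python
from automatic_prompt_engineer import ape

# Small input/output demonstration set (antonyms, as in the project's example).
words = ["sane", "direct", "informally", "unpopular", "subtractive"]
antonyms = ["insane", "indirect", "formally", "popular", "additive"]

# Evaluation template with bracketed placeholders that APE fills in.
eval_template = """Instruction: [PROMPT]
Input: [INPUT]
Output: [OUTPUT]"""

result, demo_fn = ape.simple_ape(
    dataset=(words, antonyms),
    eval_template=eval_template,
)

print(result)  # ranked candidate prompts with their scores
```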
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
APE can be computationally expensive to run, although the framework includes cost-estimation tools to gauge API usage in advance. It depends on OpenAI's API, and the specific model versions used in the original experiments (e.g., text-davinci-002) may influence how well results reproduce.