Research paper and code for enhancing language models with task-agnostic scaffolding
Meta-Prompting introduces a task-agnostic scaffolding technique to enhance Large Language Model (LLM) performance by orchestrating multiple instances of the same LLM as specialized "experts." This approach aims to improve accuracy and robustness across diverse tasks, simplifying user interaction by eliminating the need for task-specific instructions.
How It Works
Meta-prompting transforms a single LLM into a conductor that deconstructs complex tasks into subtasks. Each subtask is then assigned to a distinct "expert" instance of the LLM, guided by tailored instructions. The conductor LLM manages communication, integrates expert outputs, and performs verification, effectively acting as both an orchestrator and a panel of experts. This method is designed to be zero-shot and task-agnostic, simplifying usage.
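The sketch below illustrates the conductor/expert loop in Python. It is a minimal, illustrative example assuming the OpenAI chat completions client; the prompt wording, the "EXPERT:"/"FINAL:" message protocol, and the helper names (call_llm, meta_prompt) are assumptions for illustration, not the repository's actual implementation.

```python
# Minimal sketch of a meta-prompting conductor loop (illustrative, not the repo's code).
# Assumes OPENAI_API_KEY is set and the openai>=1.0 Python client is installed.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"


def call_llm(system_prompt: str, user_prompt: str) -> str:
    """One call to a single LLM instance with its own tailored instruction prompt."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content


def meta_prompt(task: str, max_rounds: int = 3) -> str:
    """Conductor loop: decompose the task, consult fresh experts, then synthesize an answer."""
    conductor_system = (
        "You are the conductor. Break the task into subtasks. Each round, either reply "
        "'EXPERT: <instructions>' to consult a fresh expert, or 'FINAL: <answer>' once "
        "you have verified the solution."
    )
    transcript = f"Task: {task}"
    for _ in range(max_rounds):
        decision = call_llm(conductor_system, transcript)
        if decision.startswith("FINAL:"):
            return decision[len("FINAL:"):].strip()
        # Spin up a fresh "expert" instance of the same model with the conductor's instructions.
        expert_instructions = decision.replace("EXPERT:", "", 1).strip()
        expert_output = call_llm(
            "You are a domain expert. Follow the instructions precisely.",
            expert_instructions,
        )
        # Feed the expert's output back to the conductor for integration and verification.
        transcript += (
            f"\n\nConductor asked: {expert_instructions}\nExpert replied: {expert_output}"
        )
    return call_llm(conductor_system, transcript + "\n\nGive your FINAL answer now.")


if __name__ == "__main__":
    print(meta_prompt("Use the numbers 2, 4, 6, 8 once each to make 24 (Game of 24)."))
```

Because both the conductor and the experts are instances of the same model, the only task-specific input is the task description itself, which is what makes the scaffolding zero-shot and task-agnostic.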
Quick Start & Requirements
pip install -r requirements.txt
export OPENAI_API_KEY="YOUR_API_KEY"
python run_experiments.py --task_name "GameOf24" --meta_config_path "prompts/meta-v0-2023-08-14-baseline.json" --model_name "gpt-3.5-turbo"
python evaluate_outputs.py --directory "outputs/*/" --task "GameOf24"
Task data and prompt templates are provided in the /data and /prompts directories, respectively; the meta-prompting scaffolding logic lives in utils/meta_scaffolding.py.
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats