Framework for prompt tuning using intent-based calibration
Top 17.9% on sourcepulse
AutoPrompt is a framework for optimizing prompts for large language models (LLMs), targeting prompt engineers and researchers seeking to improve LLM performance and reduce prompt sensitivity. It automates the prompt engineering process by iteratively refining prompts using synthetic data and user feedback, leading to more robust and accurate LLM outputs with minimal manual effort.
How It Works
The framework employs an "Intent-based Prompt Calibration" method. It starts with an initial prompt and task description, then iteratively generates diverse, challenging samples. These samples are annotated (either by humans via tools like Argilla or by an LLM), and prompt performance is evaluated. An LLM then suggests prompt improvements. This joint synthetic data generation and prompt optimization approach aims to outperform traditional methods by creating boundary cases and refining prompts efficiently.
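As an illustration, here is a minimal, self-contained Python sketch of that calibration loop. The helper functions are stand-ins for the LLM-backed and annotation steps, not the AutoPrompt API.

```python
# A minimal sketch of intent-based prompt calibration as described above.
# Each helper is a placeholder (assumption): in the real framework these steps
# are performed by an LLM or by human annotators via a tool such as Argilla.

def generate_challenging_samples(prompt, task_description, n=5):
    # Stand-in: the framework asks an LLM for diverse, boundary-case samples.
    return [f"sample {i} probing '{task_description}'" for i in range(n)]

def annotate(samples):
    # Stand-in for human or LLM annotation of the generated samples.
    return [(s, "label") for s in samples]

def evaluate(prompt, annotated):
    # Stand-in score in [0, 1]: compare the prompt's predictions to annotations.
    return 0.5

def suggest_improved_prompt(prompt, annotated, score):
    # Stand-in: an LLM analyses the errors and proposes a refined prompt.
    return prompt + " (refined)"

def calibrate(initial_prompt, task_description, iterations=3):
    prompt, best_prompt, best_score = initial_prompt, initial_prompt, 0.0
    for _ in range(iterations):
        samples = generate_challenging_samples(prompt, task_description)
        annotated = annotate(samples)
        score = evaluate(prompt, annotated)
        if score > best_score:
            best_prompt, best_score = prompt, score
        prompt = suggest_improved_prompt(prompt, annotated, score)
    return best_prompt, best_score

print(calibrate("Classify the review as positive or negative.", "sentiment classification"))
```

The loop keeps the best-scoring prompt seen so far, since accuracy can fluctuate from one iteration to the next.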
Quick Start & Requirements
Install dependencies with pip install -r requirements.txt, or create a Conda environment from environment_dev.yml. Configure your LLM credentials in config/llm_env.yml and set the optimization budget in config/config_default.yml. Then run python run_pipeline.py for classification tasks or python run_generation_pipeline.py for generation tasks.
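For reference, a minimal Python sketch of wrapping that quick start programmatically is shown below; the budget field name is an assumption, so check config/config_default.yml in your checkout for the actual keys.

```python
# Hypothetical wrapper around the quick-start commands (not the AutoPrompt API).
import subprocess
import yaml  # requires PyYAML

with open("config/config_default.yml") as f:
    cfg = yaml.safe_load(f)

print("Loaded config keys:", list(cfg))        # inspect the available settings
print("Assumed budget field:", cfg.get("budget"))  # hypothetical key name

# Launch the classification pipeline exactly as described above.
subprocess.run(["python", "run_pipeline.py"], check=True)
```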
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The framework does not guarantee absolute correctness or unbiased results. Users are responsible for monitoring and managing LLM API token usage and associated costs. Prompt accuracy may fluctuate during optimization.
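To help with cost monitoring, here is a minimal client-side sketch, assuming an OpenAI-style model and the tiktoken tokenizer; the price constant is a placeholder, not current pricing.

```python
# A minimal sketch of pre-flight token accounting; the rate below is a placeholder.
import tiktoken

PRICE_PER_1K_INPUT_TOKENS = 0.01  # placeholder rate; check your provider's pricing

def estimate_cost(prompts, model="gpt-4"):
    """Estimate input-token count and cost for a batch of prompts before sending them."""
    enc = tiktoken.encoding_for_model(model)
    total_tokens = sum(len(enc.encode(p)) for p in prompts)
    return total_tokens, total_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS

tokens, usd = estimate_cost(["Classify the sentiment of: 'great movie'"])
print(f"{tokens} input tokens, ~${usd:.4f}")
```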