AutoPrompt by Eladlev

Framework for prompt tuning using intent-based calibration

Created 1 year ago
2,778 stars

Top 17.2% on SourcePulse

View on GitHub
Project Summary

AutoPrompt is a framework for optimizing prompts for large language models (LLMs), targeting prompt engineers and researchers seeking to improve LLM performance and reduce prompt sensitivity. It automates the prompt engineering process by iteratively refining prompts using synthetic data and user feedback, leading to more robust and accurate LLM outputs with minimal manual effort.

How It Works

The framework employs an "Intent-based Prompt Calibration" method. It starts with an initial prompt and task description, then iteratively generates diverse, challenging samples. These samples are annotated (either by humans via tools like Argilla or by an LLM), and prompt performance is evaluated. An LLM then suggests prompt improvements. This joint synthetic data generation and prompt optimization approach aims to outperform traditional methods by creating boundary cases and refining prompts efficiently.
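The calibration loop described above can be sketched roughly as follows. All function names, the pass/fail annotation scheme, and the scoring are illustrative stand-ins, not AutoPrompt's actual API; in the real pipeline each step delegates to an LLM call or to human annotation via Argilla:

```python
import random

def generate_challenging_samples(prompt, n=4):
    # Stand-in for LLM-driven synthetic data generation of boundary cases.
    return [f"sample-{i} probing edge cases of: {prompt}" for i in range(n)]

def annotate(samples):
    # Stand-in for human (Argilla) or LLM annotation of each sample.
    return {s: random.choice(["pass", "fail"]) for s in samples}

def score(annotations):
    # Fraction of samples the current prompt handles correctly.
    return sum(v == "pass" for v in annotations.values()) / len(annotations)

def refine(prompt, annotations):
    # Stand-in for the LLM that proposes an improved prompt
    # based on the observed failure cases.
    failures = [s for s, v in annotations.items() if v == "fail"]
    return prompt + f" (revised against {len(failures)} failures)"

def calibrate(initial_prompt, iterations=3):
    prompt, best = initial_prompt, 0.0
    for _ in range(iterations):
        samples = generate_challenging_samples(prompt)
        annotations = annotate(samples)
        best = max(best, score(annotations))
        prompt = refine(prompt, annotations)
    return prompt, best

tuned, accuracy = calibrate("Classify the review as positive or negative.")
```

The key design point is that sample generation and prompt refinement happen jointly: each iteration generates samples targeted at the current prompt's weaknesses rather than drawing from a fixed dataset.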

Quick Start & Requirements

  • Install: pip install -r requirements.txt or use environment_dev.yml with Conda.
  • Prerequisites: Python <= 3.10, OpenAI API key (GPT-4 recommended), Argilla V1 (optional, for human-in-the-loop annotation).
  • Configuration: Set API keys in config/llm_env.yml, define budget in config/config_default.yml.
  • Run: python run_pipeline.py for classification or python run_generation_pipeline.py for generation.
  • Docs: Documentation
  • Examples: Prompt optimization examples
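For orientation, a config/llm_env.yml might look like the fragment below. This is a hypothetical sketch: the key names are assumptions for illustration only, so consult the repo's documentation for the actual schema:

```yaml
# Hypothetical sketch -- key names are assumptions, not the project's schema.
openai:
  OPENAI_API_KEY: 'sk-...'        # your OpenAI key (GPT-4 recommended)
argilla:                          # optional, for human-in-the-loop annotation
  api_url: 'http://localhost:6900'
  api_key: '...'
```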

Highlighted Details

  • Enhances prompt quality with minimal data and annotation.
  • Designed for production use cases like moderation and content generation.
  • Supports prompt migration across models/providers and prompt squeezing.
  • Optimization typically takes minutes and costs under $1 with GPT-4 Turbo.

Maintenance & Community

Licensing & Compatibility

  • Licensed under Apache License, Version 2.0.
  • Compatible with commercial use.

Limitations & Caveats

The framework does not guarantee absolute correctness or unbiased results. Users are responsible for monitoring and managing LLM API token usage and associated costs. Prompt accuracy may fluctuate during optimization.
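Since token costs fall on the user, a small budget guard along these lines can help cap spend; the per-token rates below are illustrative assumptions, not current OpenAI pricing, and AutoPrompt itself reads its budget from config/config_default.yml rather than using this class:

```python
class BudgetExceeded(RuntimeError):
    """Raised when cumulative API spend passes the configured budget."""

class CostTracker:
    def __init__(self, budget_usd, usd_per_1k_prompt=0.01, usd_per_1k_completion=0.03):
        # Rates are placeholders; substitute your model's actual pricing.
        self.budget = budget_usd
        self.spent = 0.0
        self.p_rate = usd_per_1k_prompt / 1000
        self.c_rate = usd_per_1k_completion / 1000

    def record(self, prompt_tokens, completion_tokens):
        # Accumulate cost per call and fail fast once the budget is blown.
        self.spent += prompt_tokens * self.p_rate + completion_tokens * self.c_rate
        if self.spent > self.budget:
            raise BudgetExceeded(f"spent ${self.spent:.4f} of ${self.budget:.2f}")

tracker = CostTracker(budget_usd=1.00)
tracker.record(prompt_tokens=2000, completion_tokens=500)  # one simulated call
```

Feeding every API response's token counts through `record` turns a silent cost overrun into an immediate, catchable exception.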

Health Check

  • Last commit: 5 months ago
  • Responsiveness: Inactive
  • Pull requests (30d): 0
  • Issues (30d): 1
  • Star history: 40 stars in the last 30 days

Explore Similar Projects

Starred by Eric Zhu (Coauthor of AutoGen; Research Scientist at Microsoft Research) and Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems").

PromptWizard by microsoft

  • Agent-driven framework for task-aware prompt optimization
  • Top 0.4%, 4k stars
  • Created 1 year ago, updated 1 month ago
  • Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Wing Lian (Founder of Axolotl AI), and 2 more.

YiVal by YiVal

  • Prompt engineering assistant for GenAI apps
  • Top 0.1%, 2k stars
  • Created 2 years ago, updated 1 year ago