quality-prompts by sarthakrastogi

Python library for prompt engineering research

created 1 year ago
725 stars

Top 48.6% on sourcepulse

Project Summary

This library provides a framework for quickly implementing and evaluating 58 different prompting techniques for large language models. It is designed for researchers and developers working with LLMs who need to systematically test and compare various prompting strategies to improve model performance.

How It Works

The library allows users to construct prompts by defining directives, additional information, and output formatting. It then enables the selection and application of specific prompting techniques, such as "System2Attention" for context clarification or "Tabular Chain of Thought" for step-by-step reasoning in math problems. A key feature is its ability to search and utilize only relevant few-shot examples for a given query, optimizing the prompt's effectiveness.
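
The sketch below illustrates that workflow: define a directive, supporting information, and output formatting, then layer a technique on top before compiling the final prompt. The class and method names (QualityPrompt, ExemplarStore, few_shot, tabular_chain_of_thought_prompting, compile) are assumptions based on the README's example and may differ from the library's current API.

```python
# Minimal sketch; import paths and method names below are assumptions, not a verified API.
from quality_prompts.prompt import QualityPrompt                 # assumed import path
from quality_prompts.exemplars import Exemplar, ExemplarStore    # assumed import path

# A small store of worked examples; the library searches it for the
# exemplars most relevant to the incoming query.
exemplar_store = ExemplarStore(exemplars=[
    Exemplar(input="Convert 2 hours to minutes.", label="2 * 60 = 120 minutes"),
])

prompt = QualityPrompt(
    directive="Solve the user's math word problem step by step.",
    additional_information="Show intermediate calculations.",
    output_formatting="Return only the final numeric answer on the last line.",
    exemplar_store=exemplar_store,
)

input_text = "A train travels 60 km/h for 2.5 hours. How far does it go?"

prompt.few_shot(input_text=input_text, n_shots=1)                   # keep only relevant exemplars
prompt.tabular_chain_of_thought_prompting(input_text=input_text)    # step-by-step reasoning technique
compiled = prompt.compile()                                         # assemble the final prompt string
print(compiled)
```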

Quick Start & Requirements

  • Install: pip install quality-prompts
  • Requirements: Python. No specific version or hardware requirements are mentioned.

Highlighted Details

  • Implements 58 prompting techniques from a University of Maryland survey.
  • Features "System2Attention" for context clarification.
  • Includes "Tabular Chain of Thought" for improved accuracy in math problems.
  • Supports few-shot example selection based on query relevance (see the sketch after this list).
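
The relevance-based selection in the last bullet amounts to embedding the incoming query, scoring stored exemplars by similarity, and keeping only the top matches. The following plain-Python sketch shows the general idea; it is not the library's implementation, and the embeddings are assumed to be precomputed vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def select_few_shot(query_embedding, exemplars, n_shots=3):
    """Return the n_shots exemplars whose embeddings are closest to the query.

    `exemplars` is a list of dicts like {"input": ..., "label": ..., "embedding": [...]}.
    """
    scored = sorted(
        exemplars,
        key=lambda ex: cosine(query_embedding, ex["embedding"]),
        reverse=True,
    )
    return scored[:n_shots]
```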

Maintenance & Community

No information on contributors, community channels, or roadmap is provided in the README.

Licensing & Compatibility

The license is not specified in the README.

Limitations & Caveats

The README does not detail specific limitations, known bugs, or compatibility issues. It mentions upcoming features for evaluating techniques, although the repository's last commit was about a year ago.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 4 stars in the last 90 days
