Research paper code for structured understanding of prompts via taxonomy
This repository provides the code for "The Prompt Report," a research project aiming to establish a structured understanding of prompt engineering in Generative AI. It offers tools for automated paper review, data collection, and experiment execution, targeting researchers and developers in the GenAI space.
How It Works
The project automates a systematic review of research papers related to prompt engineering. It uses scripts to collect papers, deduplicate and filter them, and then run experiments that analyze prompting techniques. The core logic resides in src/prompt_systematic_review, with configuration managed in config_data.py and review keywords in keywords.py.
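As a rough illustration of the deduplication step, the sketch below drops papers whose titles differ only in case, punctuation, or whitespace. This is an assumption about the approach, not the repository's actual code; the record shape and function names are hypothetical.

```python
import re

def normalize_title(title: str) -> str:
    """Lowercase and collapse punctuation/whitespace so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(papers: list[dict]) -> list[dict]:
    """Keep the first record seen for each normalized title (hypothetical schema)."""
    seen, unique = set(), []
    for paper in papers:
        key = normalize_title(paper["title"])
        if key not in seen:
            seen.add(key)
            unique.append(paper)
    return unique

papers = [
    {"title": "The Prompt Report: A Systematic Survey", "source": "arxiv"},
    {"title": "The Prompt Report: A Systematic Survey.", "source": "semantic_scholar"},
]
print(len(deduplicate(papers)))  # 1 — the trailing period is normalized away
```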
Quick Start & Requirements
- Install dependencies: pip install -r requirements.txt
- Install Git LFS.
- Create a .env file with the required API keys; install pytest-dotenv for testing.
- Download the dataset from Hugging Face (PromptSystematicReview/ThePromptReport) and move it to data/.
- Run python main.py (downloads papers, runs the review, and executes the experiments).
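Setup expects API keys in a .env file. A minimal sketch of reading one without extra dependencies (the key names actually required are defined in the repository, e.g. in config_data.py; this parser is an illustration, not the project's loader):

```python
import os

def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments,
    and export the result into the process environment."""
    env = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    env[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # no .env present; rely on the existing environment
    os.environ.update(env)
    return env
```

In practice, pytest-dotenv performs this automatically for the test suite once a .env file exists.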
Maintenance & Community
No specific contributors, sponsorships, or community links (Discord/Slack) are mentioned in the README.
Licensing & Compatibility
The repository's license is not explicitly stated in the provided README text.
Limitations & Caveats
The README notes potential discrepancies in paper titles between the arXiv API and actual paper content, which might affect automated retrieval. Some experiments, like graph_internal_references, are noted to have parallelism issues and are better run individually.
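One common mitigation for the title discrepancies noted above is fuzzy matching between the API-reported title and the title extracted from the paper itself. This is a sketch of that idea, not the repository's actual approach; the threshold is an assumption.

```python
from difflib import SequenceMatcher

def titles_match(api_title: str, pdf_title: str, threshold: float = 0.9) -> bool:
    """Treat two titles as the same paper if their similarity ratio
    clears the threshold, tolerating minor punctuation/spacing drift."""
    a = api_title.lower().strip()
    b = pdf_title.lower().strip()
    return SequenceMatcher(None, a, b).ratio() >= threshold

# Minor formatting differences still match; unrelated titles do not.
print(titles_match("A Survey of Prompt Engineering",
                   "A Survey of Prompt  Engineering."))
```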