Weixin-Liang: Research paper assessing LLM feedback on scientific papers
This repository provides the Python code for an empirical analysis of how useful Large Language Models (LLMs) are at providing feedback on research papers. It is aimed at researchers who want to understand LLM capabilities in scientific review, offering insight into how LLM feedback compares with human feedback and how researchers perceive it.
How It Works
The project uses an automated pipeline powered by GPT-4 to generate comments on full research-paper PDFs. It evaluates feedback quality through two large-scale studies: a quantitative comparison with human peer-review feedback across 15 Nature-family journals and the ICLR conference, and a prospective user study with 308 researchers. The approach quantifies the overlap between LLM and human feedback points and gauges user satisfaction.
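As a rough illustration of the generation step, the sketch below sends extracted paper text to GPT-4 through the OpenAI API and asks for itemized review comments. The prompt wording, model name, and `generate_feedback` helper are illustrative assumptions, not the repository's actual prompt or code.

```python
# Minimal sketch of the feedback-generation step, assuming the paper text
# has already been extracted (e.g., by the ScienceBeam parsing service).
# The prompt and model name are illustrative, not the repo's actual ones.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_feedback(paper_text: str) -> str:
    """Ask GPT-4 for itemized review comments on a paper."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a scientific reviewer. Give constructive, "
                        "itemized feedback on the paper below."},
            {"role": "user", "content": paper_text},
        ],
        temperature=0.2,  # keep the critique focused and reproducible
    )
    return response.choices[0].message.content

# feedback = generate_feedback(paper_text)
```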
Quick Start & Requirements
1. Start the ScienceBeam PDF parsing service: `conda env create -f conda_environment.yml`, `conda activate ScienceBeam`, then `python -m sciencebeam_parser.service.server --port=8080`.
2. Set up the main environment: `conda create -n llm python=3.10`, `conda activate llm`, `pip install -r requirements.txt`.
3. Save your OpenAI API key: `echo "YOUR_OPENAI_API_KEY" > key.txt`.
4. Run `python main.py` (PDF input) or `python main_from_text.py` (plain-text input).
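Before `main.py` can comment on a PDF, the paper must pass through the local ScienceBeam service started in step 1. The sketch below shows one way to call it; the `/api/convert` route and `parse_pdf` helper are assumptions based on the ScienceBeam Parser HTTP API, so verify the route against your installed version.

```python
# Minimal sketch: send a PDF to the local ScienceBeam service for parsing.
# The /api/convert route is an assumption based on the ScienceBeam Parser
# HTTP API; check the documentation for your installed version.
import requests

def parse_pdf(pdf_path: str, server: str = "http://localhost:8080") -> str:
    """Return the parsed document (XML) for a paper PDF."""
    with open(pdf_path, "rb") as f:
        response = requests.post(
            f"{server}/api/convert",
            files={"file": (pdf_path, f, "application/pdf")},
        )
    response.raise_for_status()
    return response.text  # XML to be reduced to plain text downstream

# xml = parse_pdf("paper.pdf")
```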
Maintenance & Community
The project is associated with authors from multiple institutions, including the University of Michigan. Further community or maintenance details are not explicitly provided in the README.
Licensing & Compatibility
The repository's license is not explicitly stated in the README; the code is provided for research purposes.
Limitations & Caveats
The README notes that the ScienceBeam PDF parser supports only x86 Linux operating systems. LLM-generated feedback tends to concentrate on certain aspects of a paper and can struggle to provide in-depth methodological critique.