Collection of papers on reasoning in large language models
This repository serves as a curated collection of academic papers and resources focused on enhancing reasoning capabilities within Large Language Models (LLMs). It targets researchers and practitioners in NLP and AI, providing a structured overview of techniques, evaluations, and emerging trends in LLM reasoning.
How It Works
The project categorizes papers into key areas: Surveys, Techniques (Fully Supervised Finetuning, Prompting & In-Context Learning, Hybrid Methods), and Evaluation & Analysis. This structure lets users navigate the landscape of LLM reasoning research, from foundational surveys to specific methodologies such as Chain of Thought prompting and its variants, as well as critical assessments of model performance.
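To illustrate one of the techniques the collection covers, here is a minimal sketch of few-shot Chain of Thought prompting: a worked example with explicit intermediate steps is prepended to the question so the model is nudged to reason step by step before answering. The demonstration text and helper function below are illustrative assumptions, not code from the repository; in practice the resulting prompt would be sent to an LLM API.

```python
# Minimal sketch of few-shot Chain of Thought (CoT) prompt construction.
# The demonstration and helper name are hypothetical, for illustration only.

def build_cot_prompt(question: str) -> str:
    """Prepend a worked few-shot example so the model emits
    intermediate reasoning steps before its final answer."""
    demonstration = (
        "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
        "How many balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.\n\n"
    )
    return f"{demonstration}Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("A baker made 12 muffins and sold 7. How many are left?")
print(prompt)
```

Zero-shot variants drop the demonstration and rely on the trailing "Let's think step by step." cue alone; several papers in the collection compare these choices.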
Quick Start & Requirements
This repository is a collection of research papers and does not have a direct installation or execution command. Users are expected to access and read the cited papers.
Maintenance & Community
The repository is maintained by Jie Huang (@UIUC) with acknowledgments to contributors from Google Brain. Users are encouraged to submit missing papers via issues or pull requests.
Licensing & Compatibility
The repository itself does not specify a license. The cited papers are subject to their respective publication licenses and copyright.
Limitations & Caveats
This is a curated list of papers and does not provide code implementations or direct tools for LLM reasoning. The content is research-oriented and may not reflect production-ready solutions.