Survey paper for System 2 reasoning in LLMs
This repository serves as a comprehensive survey and resource hub for "System 2" reasoning in Large Language Models (LLMs). It targets AI researchers and engineers seeking to understand and advance LLMs beyond fast, intuitive responses towards deliberate, step-by-step problem-solving, akin to human analytical thought. The project aims to track and categorize the latest techniques, benchmarks, and challenges in this rapidly evolving field.
How It Works
The project categorizes advancements in LLM reasoning into twelve key areas: O1 Replication, Process Reward Models, Reinforcement Learning, MCTS/Tree Search, Self-Training, Reflection, Efficient System2, Explainability, Multimodal Agents, Benchmarks, Reasoning & Safety, and Multimodal Reasoning Enhancement. For each category it lists and links relevant research papers, code repositories, and blog posts, providing a structured overview of the research landscape.
Quick Start & Requirements
This repository is a curated collection of research papers and code links, not a runnable software package. No installation or specific requirements are needed to browse its contents.
Maintenance & Community
The repository is maintained by zzli2022 and collaborators; the accompanying survey paper was released in February 2025. The project encourages community contributions via pull requests.
Licensing & Compatibility
The repository is a curated list rather than software and does not declare a license of its own. Individual linked papers and code repositories carry their own licenses.
Limitations & Caveats
As a survey and resource aggregator, this repository does not provide a unified framework or executable code. Users must individually explore and integrate the linked resources. The rapid pace of research means the content may require frequent updates to remain fully current.