Research paper implementation for cumulative reasoning with LLMs
This repository provides the official implementation for "Cumulative Reasoning With Large Language Models," a method designed to enhance large language model performance on complex reasoning tasks, particularly mathematics. It targets researchers and practitioners seeking to improve LLM accuracy and efficiency in problem-solving, offering significant gains over existing techniques such as Tree of Thoughts (ToT) and Program-Aided Language Models (PAL).
How It Works
The core innovation is "Cumulative Reasoning" (CR), a technique that iteratively builds on previous reasoning steps by accumulating them into an ever-growing context. The CR Agent implements this with a minimalist Python string-concatenation strategy, with no external frameworks such as LangChain. It achieves state-of-the-art results on the MATH dataset, with especially strong gains on the challenging Level 5 problems, by effectively managing and expanding the reasoning context.
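The loop below is a minimal sketch of this cumulative, concatenation-based prompting style, not the repository's actual code; call_llm stands in for any chat-completion call (for example, the OpenAI client) and is an assumption introduced for illustration.

def call_llm(prompt: str) -> str:
    # Placeholder for a single LLM completion call (e.g., an OpenAI chat request).
    raise NotImplementedError

def cumulative_reasoning(question: str, max_steps: int = 8) -> str:
    # The accumulated context starts with the problem statement and grows
    # with every accepted intermediate result.
    context = f"Problem: {question}\n"
    for step in range(max_steps):
        # Propose a new intermediate result conditioned on everything so far.
        proposition = call_llm(context + "Derive one new, verifiable intermediate result:")
        # Check the proposition before committing it to the context.
        verdict = call_llm(context + f"Candidate: {proposition}\nIs this candidate correct? Answer yes or no:")
        if verdict.strip().lower().startswith("yes"):
            # Accumulate: the accepted step becomes part of every later prompt.
            context += f"Known result {step + 1}: {proposition}\n"
        # Ask whether the accumulated results already settle the problem.
        answer = call_llm(context + "If the problem is solved, give the final answer; otherwise reply CONTINUE:")
        if "CONTINUE" not in answer:
            return answer
    return call_llm(context + "Give the most likely final answer:")

The key property is that the prompt only ever grows by plain string concatenation, which is why no orchestration framework is needed.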
Quick Start & Requirements
Create the conda environment, activate it, and install the dependencies:
conda create -n cr python==3.10
conda activate cr
pip install -r requirements.txt
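The paper's experiments call hosted GPT models, so a typical run also needs API credentials. The snippet below is a hedged sanity-check sketch; the environment-variable name follows the standard OpenAI convention and is not confirmed by this repository's documentation.

import os
import sys

# Confirm the interpreter matches the Python 3.10 environment created above.
assert sys.version_info[:2] == (3, 10), "activate the 'cr' conda environment first"

# Hosted GPT experiments need an API key; the variable name below is the
# standard OpenAI convention (an assumption, not taken from the repository).
assert os.environ.get("OPENAI_API_KEY"), "export OPENAI_API_KEY before running experiments"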
Highlighted Details
Maintenance & Community
The project is based on Guidance, HuggingFace, Tree of Thoughts, and ToRA. Contact information for questions is provided.
Licensing & Compatibility
The repository does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
The CR Agent v0.1 implementation is described as minimalist. Performance claims are based on specific GPT-4 versions and experimental setups, and may vary with different models or environments.