Research paper code for efficient LLM reasoning
Chain-of-Draft (CoD) is a novel prompting paradigm for Large Language Models (LLMs) that aims to improve reasoning efficiency and reduce token usage. It is designed for researchers and practitioners working with LLMs on complex reasoning tasks who seek to optimize performance and cost. CoD achieves this by mimicking human cognitive processes, generating concise intermediate thoughts instead of verbose step-by-step explanations.
How It Works
CoD prompts LLMs to produce minimalistic yet informative intermediate reasoning outputs, focusing on critical insights rather than exhaustive detail. This contrasts with traditional Chain-of-Thought (CoT) prompting, whose step-by-step traces tend to be verbose. By reducing the number of tokens generated for intermediate steps, CoD significantly lowers cost and latency while maintaining or improving accuracy on a range of reasoning tasks.
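A minimal sketch of the contrast, assuming paraphrased system prompts (the exact wording lives in the repo's config files) and toy reasoning traces for a GSM8K-style problem; the crude whitespace token count is only a proxy for real tokenizer counts:

```python
# Sketch contrasting CoT and CoD system prompts (wording paraphrased,
# not copied from the repo's actual prompt configs).
COT_SYSTEM = (
    "Think step by step to answer the following question. "
    "Return the answer at the end of the response after a separator ####."
)
COD_SYSTEM = (
    "Think step by step, but only keep a minimum draft for each "
    "thinking step, with 5 words at most. "
    "Return the answer at the end of the response after a separator ####."
)

# Toy traces for the same problem, illustrating the token gap.
cot_trace = (
    "Jason started with 20 lollipops. After giving some to Denny he has 12. "
    "The number given away is the difference: 20 - 12 = 8. #### 8"
)
cod_trace = "20 - 12 = 8. #### 8"

def rough_tokens(text: str) -> int:
    """Whitespace split as a crude proxy for tokenizer token counts."""
    return len(text.split())

print(rough_tokens(cot_trace), rough_tokens(cod_trace))
```

Both prompts keep the same final-answer separator, so answer extraction is unchanged; only the intermediate draft shrinks.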
Quick Start & Requirements
Run the evaluation script with:

python evaluate.py

Task and prompt settings are read from ../configs/{task}-{prompt}.yaml. Evaluation results are stored in ./results/.
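A small sketch of the {task}-{prompt}.yaml naming convention; the prompt names ("cod", "cot") are assumptions for illustration, not confirmed by the repo:

```python
# Sketch: enumerating config paths per the ../configs/{task}-{prompt}.yaml
# convention. Task names are from the README; prompt names are assumptions.
tasks = ["gsm8k", "date", "sports", "coin_flip"]
prompts = ["cod", "cot"]
config_paths = [f"../configs/{t}-{p}.yaml" for t in tasks for p in prompts]
print(config_paths[0])
```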
Highlighted Details
Supported evaluation tasks include gsm8k, date, sports, and coin_flip.
Maintenance & Community
The project is associated with the paper "Chain of Draft: Thinking Faster by Writing Less" by Silei Xu et al. Further community or maintenance details are not provided in the README.
Licensing & Compatibility
The repository does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
The README does not specify any limitations, known bugs, or deprecation warnings. The project appears to be research-oriented, and its stability for production environments is not detailed.