PaCoRe by stepfun-ai

Parallel Coordinated Reasoning scales test-time compute

Created 2 months ago
322 stars

Top 84.7% on SourcePulse

Project Summary

PaCoRe introduces a novel framework for massively scaling test-time compute (TTC) in Large Language Models, addressing limitations imposed by fixed context windows. Targeting researchers and engineers, it enables LLMs to tackle complex reasoning tasks by shifting inference from sequential depth to coordinated parallel breadth, yielding significant performance improvements, particularly in mathematics.

How It Works

PaCoRe operates by launching numerous parallel exploration trajectories simultaneously. These parallel "thoughts" are then compacted into concise messages via a message-passing architecture. In subsequent rounds, these messages are synthesized to guide further exploration, effectively coordinating parallel reasoning. Trained using large-scale, outcome-based reinforcement learning, this approach breaks context barriers, allowing reasoning to scale freely and delivering higher returns than extending single inference chains.
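The round-based control flow described above can be sketched in a few lines. This is an illustrative skeleton, not the project's actual API: explore, compact, and synthesize are hypothetical stand-ins for the LLM calls and message-passing steps.

```python
def explore(prompt: str, trajectory_id: int) -> str:
    """Stand-in for one parallel exploration trajectory (an independent LLM rollout)."""
    return f"trajectory {trajectory_id}: partial reasoning about {prompt!r}"

def compact(trajectory: str) -> str:
    """Stand-in for the message-passing step: compress a long trajectory
    into a short message that fits alongside the others."""
    return trajectory[:40]

def synthesize(prompt: str, messages: list[str]) -> str:
    """Combine the compacted messages into a refined prompt for the next round."""
    return prompt + " | hints: " + "; ".join(messages)

def coordinated_reasoning(prompt: str, n_parallel: int = 4, n_rounds: int = 3) -> str:
    """Breadth-first loop: each round launches n_parallel trajectories,
    compacts them to messages, and feeds the synthesis into the next round."""
    for round_idx in range(n_rounds):
        trajectories = [explore(prompt, trajectory_id=round_idx * n_parallel + i)
                        for i in range(n_parallel)]
        messages = [compact(t) for t in trajectories]
        prompt = synthesize(prompt, messages)
    return prompt

final_prompt = coordinated_reasoning("solve x^2 = 2")
```

In a real deployment the trajectories would run concurrently against a serving backend; the sketch only shows the coordination structure, in which total exploration grows with n_parallel × n_rounds rather than with the length of any single chain.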

Quick Start & Requirements

Installation is a standard editable install: pip install -e . from the repository root. The project assumes model serving with vllm and provides example inference scripts. Key resources, including the PaCoRe-8B model checkpoints and the PaCoRe-Train-8k dataset, are available on Hugging Face. Further details and the research paper can be found via the provided links.
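A plausible setup sequence follows. The repository URL and Hugging Face model ID below are inferred from the page (stepfun-ai, PaCoRe-8B) and may differ from the actual identifiers; consult the repo's README for the exact serving flags.

```shell
# Clone and install the package in editable mode
git clone https://github.com/stepfun-ai/PaCoRe.git
cd PaCoRe
pip install -e .

# Serve the released checkpoint with vLLM's OpenAI-compatible server
# (model ID assumed; replace with the checkpoint path from Hugging Face)
vllm serve stepfun-ai/PaCoRe-8B --port 8000
```

The example inference scripts in the repo would then point at the local endpoint.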

Highlighted Details

  • Achieves state-of-the-art results in mathematics reasoning, with PaCoRe-8B reaching 94.5% on HMMT 2025, surpassing GPT-5.
  • Effectively scales performance with increasing test-time compute, unlike models that plateau at context limits.
  • The PaCoRe training corpus significantly boosts performance, even for baseline models.
  • Enables multi-million-token effective TTC by coordinating parallel explorations across multiple rounds.
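As a back-of-envelope illustration of the last point, effective test-time compute multiplies across trajectories and rounds instead of being capped by one context window. All numbers below are invented for illustration and are not taken from the paper.

```python
# Hypothetical budget: every figure here is illustrative, not from PaCoRe.
context_limit = 32_768          # tokens available to a single sequential chain
n_parallel = 32                 # exploration trajectories per round
tokens_per_trajectory = 16_384  # tokens consumed by each trajectory
n_rounds = 4                    # coordination rounds

# Effective TTC is the total tokens spent across all parallel work,
# even though no single context ever exceeds context_limit.
effective_ttc = n_parallel * tokens_per_trajectory * n_rounds
print(effective_ttc)  # 2097152 -> multi-million-token effective compute
```

The point is that effective_ttc scales with breadth (n_parallel) and rounds, while a purely sequential chain is bounded by context_limit.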

Maintenance & Community

Developed by StepFun and Tsinghua University, the project acknowledges numerous contributors. Future work includes scaling to stronger foundation models, enhancing token intelligence density, exploring emergent multi-agent intelligence, and improving synthetic data generation. Recruitment for roles focused on scaling reasoners towards AGI is ongoing.

Licensing & Compatibility

The project's README does not specify a software license. This omission requires clarification regarding usage rights, particularly for commercial applications or integration into closed-source systems.

Limitations & Caveats

The framework's primary demonstrated strength lies in complex reasoning tasks, especially mathematics. Achieving its full potential for multi-million-token effective TTC necessitates substantial computational resources for parallel exploration. The absence of a stated license is a critical adoption blocker.

Health Check

  • Last Commit: 3 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 31 stars in the last 30 days

